I hosted multiple services on an Ubuntu server VM. I recently replaced it with a Proxmox server and I’m having trouble finding a good organization.

I have a lot of apps running as docker compose projects, but I want something more unified that I can deploy with Ansible, for example.
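
For example, with Ansible’s community.docker collection, bringing up one compose project is a single task. This is just a sketch; the path and project name are placeholders:

    - name: Deploy the paperless-ngx compose project
      community.docker.docker_compose_v2:
        project_src: /opt/stacks/paperless-ngx   # hypothetical path to the compose files on the VM
        state: present                           # same effect as `docker compose up -d`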

But for this I need to separate my actual data correctly.

So first question: What would be the best way for me to manage the data of my many services? (I currently have two 4TB drives and a 500GB drive I will replace soon.)

I found people online running a TrueNAS VM and mounting NFS shares on all the other VMs.
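
For example, mounting such a share from each service VM with Ansible would look roughly like this (using ansible.posix.mount; the TrueNAS IP, export, and mount point are made up):

    - name: Mount the NFS data share exported by the TrueNAS VM
      ansible.posix.mount:
        src: 192.168.1.10:/mnt/tank/appdata   # hypothetical TrueNAS export
        path: /mnt/appdata                    # mount point inside the service VM
        fstype: nfs
        opts: rw,hard
        state: mounted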

I could also just add drives in Proxmox using LVM (it is currently set up that way, but I would need a boot/services/configuration drive and a data drive for each VM).
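
For example, attaching a dedicated data disk to a VM from the Proxmox shell is one command (the VM ID, storage name, and size below are just examples):

    # allocate a new 500 GB volume on the local-lvm storage and attach it as scsi1 to VM 101
    qm set 101 --scsi1 local-lvm:500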

Then, I need to make these services accessible from outside.

I am currently using nginx proxy manager, but I would like a more automatic, statically configurable solution like Traefik or Caddy. I also want to keep the reverse proxy on a separate VM, so I am not sure of the best way to link the VM running the docker containers to it, since it hosts multiple services.
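
From what I understand, the simplest way to link them is to route by IP and port. As a sketch, a static Traefik dynamic configuration (file provider) for one service could look like this; the hostname, IP, and port are placeholders:

    http:
      routers:
        jellyfin:
          rule: "Host(`jellyfin.example.com`)"
          service: jellyfin
      services:
        jellyfin:
          loadBalancer:
            servers:
              - url: http://192.168.1.20:8096   # docker VM's IP and the container's published port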

I thought about Kubernetes or Docker Swarm, but I suspect it would be more trouble than it’s worth when I could just run the reverse proxy on the same docker host as the services.

I also plan on adding SSO later on.

So second question: What kind of reverse proxy setup would be best here?

Here are the services I host:

  • Jellyfin (as docker right now, but soon in a dedicated VM)
  • *arr suite (docker now, soon docker in Jellyfin VM)
  • Bitwarden (docker in main docker VM)
  • Paperless-ngx (docker in main docker VM)
  • Portainer (docker in main docker VM)
  • Home Assistant (as a VM)
  • MonicaHq (docker in main docker VM)
  • nginx proxy manager (docker in main docker VM)

tehnomad@alien.top · 1 year ago

    I use a ZFS pool for my data and a combination of LXCs and Docker inside an LXC to run my services. Proxmox is flexible enough that you can get pretty much any configuration to work. I even have my Intel iGPU passed through to my Docker LXC and on to my Jellyfin container. Caddy and Authelia run in one LXC for reverse proxy and authentication, and I point Caddy at my Docker LXC by its IP address. I use bind mounts to expose folders on my ZFS pool to the LXCs and Docker.
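
    For example, a bind mount is a single command on the Proxmox host (the container ID and paths here are just examples):

        # expose a dataset on the ZFS pool to LXC 101 at /data
        pct set 101 -mp0 /tank/appdata,mp=/data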

    One advantage of running Caddy in Docker is that you can use the caddy-docker-proxy module to automatically generate a Caddyfile from your containers’ Docker labels.
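
    As a sketch, the labels in a compose file look something like this (the domain is a placeholder; caddy-docker-proxy turns them into a Caddyfile site block):

        services:
          jellyfin:
            image: jellyfin/jellyfin
            labels:
              caddy: jellyfin.example.com
              caddy.reverse_proxy: "{{upstreams 8096}}"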

    I started my ZFS pool from scratch with new hard drives. If you want to reuse your existing ones without wiping your data, you may want to look into MergerFS.
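
    A minimal MergerFS fstab entry pooling two existing drives might look like this (the mount points are examples):

        # pool /mnt/disk1 and /mnt/disk2 into a single /mnt/storage
        /mnt/disk1:/mnt/disk2  /mnt/storage  fuse.mergerfs  defaults,allow_other  0 0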