How do y’all manage all these Docker compose apps?
First I installed Jellyfin natively on Debian, which was nice because everything just worked with the normal package manager and systemd.
Then, Navidrome wasn’t in the repos, but it’s a simple Go binary and provides a systemd unit file, so that was not so bad: just download a new binary every now and then.
Then… Immich came… and forced me to use Docker compose… :|
Now I’m looking at Frigate… and it also requires Docker compose… :|
Looking through the docs, looks like Jellyfin, Navidrome, Immich, and Frigate all require/support Docker compose…
At this point, I’m wondering if I should switch everything to Docker compose so I can keep everything straight.
But, how do folks manage this mess? Is there an analogue to `apt update`, `apt upgrade`, `systemctl restart`, and `journalctl` for all these Docker compose apps? Or do I have to individually manage each app? I guess I could write a bash script… but… is this what other people do?
Docker is far cleaner than native installs once you get used to it. Yes, native installs are nice at first, but they aren’t portable, and unless the software is built specifically for the distro you’re running, you will very quickly run into dependency hell trying to set up your system to support multiple services that all want different versions of libraries. Plus, what if you want or need to move a service to another system, or restore a single service from a backup? Reinstalling a service from scratch and migrating over the libraries and config files in all of their separate locations can be a PITA.
With native installs, it’s pretty much a requirement to start spinning up separate VMs for each service to keep them from interfering with each other and to allow backup and migration to other hosts, and managing 50 different VMs is much more involved and resource-intensive than managing 50 different containers on one machine.
Also, you said that native installs just need an `apt update && apt upgrade`, but that’s not true. Services that are in your package manager’s repos, sure, but most services do not have pre-built packages for all distros. For the vast majority, you have to `git clone` the source, then build from scratch and install. Updating those services is not a simple `apt update && apt upgrade`; you have to `cd` into the repo, `git pull`, then recompile and reinstall, and pray to god that the dependencies haven’t changed.
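For the from-source case, the update dance looks something like this (a sketch; the path, build system, and service name are all hypothetical):

```sh
# Hypothetical "native" update for a service with no distro package.
cd ~/src/someservice           # wherever you originally cloned it
git pull
make && sudo make install      # build/install steps differ per project
sudo systemctl restart someservice
```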
`docker compose pull`/`up`/`down` is pretty much all you need; wrap it in a small shell script and you can bring up/down or update every service with a single command. Also, if you use bind mounts and place them in the service’s directory alongside the compose file, your entire service is self-contained in one directory. To back it up you just `docker compose down`, rsync the directory to the backup location, then `docker compose up`. To restore, you do the exact same thing with the direction of the rsync reversed. To move a service to a different host, same again, except the rsync and `docker compose up` now run on the other system.
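As a concrete sketch, such a wrapper might look like this, assuming each stack lives in its own directory under /opt/stacks with a compose.yaml inside (both names are illustrative):

```sh
#!/usr/bin/env sh
# Update every compose stack in one go (sketch).
set -eu
for dir in /opt/stacks/*/; do
    echo "== updating ${dir} =="
    (
        cd "$dir"
        docker compose pull      # fetch newer images, if any
        docker compose up -d     # recreate only containers whose image or config changed
    )
done
docker image prune -f            # optional: clean up superseded images
```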
Docker lets you pack an entire service, with all of its dependencies, databases, config files, and data, into a single directory that can be backed up and/or moved to any other system with nothing more than a “down”, “copy”, and “up”, with zero interference with other services running on your system.
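In shell terms, that down/copy/up flow is roughly the following (the stack directory and backup host are made up):

```sh
# Cold backup of a self-contained stack directory (illustrative paths).
cd /opt/stacks/navidrome
docker compose down                                 # stop so the data is quiescent
rsync -a --delete . backuphost:/backups/navidrome/
docker compose up -d
# Restore or migrate: the same three steps, with the rsync
# direction (or target host) reversed.
```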
I have 158 containers running on my systems at home. With some wrapper scripts, management is trivial. The thought of trying to manage native installs on over a hundred individual VMs is frightening. The thought of trying to manage this setup with native installs on one machine, if that was even possible, is even more frightening.
I didn’t see Ansible mentioned as a solution here, which is what I use. I run docker compose only. Each environment is backed up nightly and monitored. If a `docker compose pull`/`up` and subsequent image cleanup breaks a service, I restore from a backup that works and see what went wrong.
Each app has a folder, and I have a bash script that runs `docker compose up -d` in each of my container folders to update them. It is crude and will break something at some stage, but meh, Jellyseerr or TickDone being offline for a bit is fine while I debug.
Don’t auto update. Read the release notes before you update things. Sometimes you have to do some things manually to keep from breaking things.
Pretty much guaranteed you’ll spend an order of magnitude more time (or more) doing that than just auto-updating and fixing things on the rare occasions they break. If you have a service that likes to ship breaking changes on a regular basis, it might make sense to read the release notes and manually update that one, but not everything.
That’s the politically correct answer, of course.
But in my own experience using Watchtower for over 7 years, I can count on one hand the times it actually broke something. Most of the time it was database-related.
But you can put apps on the Watchtower ignore list (looking at you, Immich!), which clears those out fairly quickly.
And if you keep all your Docker directories on ZFS as datasets + sanoid, you can just roll back to the last snapshot if that ever does happen.
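For illustration, a rollback in that setup could look roughly like this - the dataset and snapshot names are made up, and sanoid’s actual snapshot names depend on your template config:

```sh
# Roll one stack back to its most recent snapshot (illustrative names).
docker compose -f /tank/docker/immich/compose.yaml down
zfs list -t snapshot tank/docker/immich            # find the last good snapshot
zfs rollback tank/docker/immich@autosnap_2025-01-01_00:00:01_daily
docker compose -f /tank/docker/immich/compose.yaml up -d
```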
I have finally had to switch to using Docker for several things I used to just install manually (ttrss being the main one). It sure feels dirty when I used to just `apt update` and know everything was updated.
I can see the draw of Docker, but I feel it’s way overused right now.
Just replace `apt update` with `docker pull` 🤷‍♂️
Yeah, I have everything as `compose.yaml` stacks, and those stacks + their config files are in a git repo.
It’s really nice once it’s going, especially if you link them together in one top-level compose file and farm out the individual YAMLs for each service, or use something like Dockge to do it.
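If you go the top-level-compose-file route, recent Compose versions (2.20+) support a top-level `include` element; a sketch with made-up paths:

```yaml
# compose.yaml at the repo root, pulling in per-service files (illustrative).
include:
  - ./jellyfin/compose.yaml
  - ./navidrome/compose.yaml
  - ./immich/compose.yaml
```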
> But, how do folks manage this mess?
I generally find it less of a mess to have everything encapsulated in docker deployments for my server setups. Each application has its own environment (i.e. I can treat each container as its own ‘Linux machine’ that has only the stuff installed that it needs), and they can all be interfaced with through the same CLI.
> Is there an analogue to `apt update`, `apt upgrade`, `systemctl restart`, `journalctl`?

Strictly speaking: `docker pull <image>`, `docker compose up`, `docker restart <container>`, and `docker logs <container>` (see the rough cheat sheet after the list below). But instead of finding direct equivalents to a package manager or system service supervisor, I would suggest reading up on:

- the docker command line, with its simple `docker run` command and the (in the beginning) super important `docker ps`
- the concept of Dockerfiles and what exactly they encapsulate - this will really help you understand how docker abstracts away single-app messiness
- docker-compose, to find the equivalent of service supervision in the container space
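The cheat sheet mentioned above, run from the directory containing the stack’s compose file (a loose analogy, not an exact mapping):

```sh
docker compose pull         # ~ apt update   (fetch newer images)
docker compose up -d        # ~ apt upgrade  (recreate containers on the new images)
docker compose restart      # ~ systemctl restart
docker compose logs -f      # ~ journalctl -fu <service>
docker compose ps           # ~ systemctl status
```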
Applications like Immich are multi-container setups, which docker-compose makes much easier to run while maintaining flexibility. In this scenario you stop worrying about updating individual packages and instead manage ‘compose files’, i.e. clumps of programs that work together to provide a specific service.
Once you grok the way compose files make that management easier - they provide the same isolation and management regardless of any outer environment - you have a plethora of tools that make manual maintenance easy (Dockge, Portainer, …) or, more often, make manual maintenance less necessary through automation (Watchtower, Ansible, Komodo, …).
I realise this can be daunting in the beginning but it is the exact use case for never having to think about downloading a new Go binary and setting up a manual unit file again.
> the docker command line, with its simple `docker run` command

The docker compose CLI. KISS, never did me wrong.
Watchtower for automated updates. For containers that don’t have a `latest` tag to track, editing the version number manually and then `docker compose pull && docker compose up -d` is simple enough.

Adding on here: most Docker images support semver pinning! It’s a great balance between automated updates and avoiding breakage.
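For example, pinning just the major version in the compose file (nginx used purely as a stand-in image; tag schemes vary per project):

```yaml
services:
  app:
    # Pinning the major version: `docker compose pull` picks up new
    # minor/patch releases, but not a breaking major-version bump.
    image: nginx:1
    # vs. nginx:latest (anything goes) or nginx:1.27.0 (fully frozen)
```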
I manage them with dokploy.com
I update them manually after checking if the update is beneficial to me.
If not then why touch a running system?
I use Dockge to manage everything.
Wow thank you for this. This looks so much nicer than portainer.
Subscribing to these communities is so helpful because of discovery like this.
Same here. Dockge is also developed by the Uptime Kuma dev.
It’s so much easier to use than Portainer: no weird licensing shit, it uses standard Docker locations, and it works even with existing stacks. It also helps me keep Docker stacks organized - each `compose.yaml` lives in its own folder under `/opt/stacks/`.

I have 4 VMs on my cluster specifically for Docker, each with its own Dockge instance, and the instances can be linked together so that any Dockge instance in my cluster can access all Docker stacks across all the VMs.
+1 for Dockge.
I run Akkoma, Navidrome, Searx, Vaultwarden, RomM, Forgejo, WireGuard, RDP, and a few other things, all via docker. Honestly I just keep everything in its own dir and have Yazi on my server to make it easier to manage. I don’t auto-update anything; it’s all manual updates.
I’m probably going to slap Watchtower in there just to make things easier. Don’t really need to overthink it, in all honesty.
I just use watchtower to update automatically.
Docker has a `logs` command.
And being able to opt in with just a container label is super convenient.
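A minimal sketch of that label-based opt-in - Watchtower’s `--label-enable` flag and the enable label are real; the service name and app image here are stand-ins:

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    command: --label-enable                 # only touch containers that opt in
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  myapp:                                    # stand-in service name
    image: nginx:1
    labels:
      - com.centurylinklabs.watchtower.enable=true
```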
Check out Dockge. It provides a simple yet very usable and useful web UI for managing Docker compose stacks.
Was looking if anyone mentioned it!
I started with portainer but it was way too complex for my small setup. Dockge works super well, starting, stopping, updating containers in a simple web interface.
Just updating Dockge itself from inside Dockge does not seem to work, but to be fair, I haven’t looked into it that much yet.
Can Dockge manage/cleanup unused images and containers by now? That’s the only reason I keep using Portainer - because it can show all the other stuff and lets me free up space.
No, not through the Dockge UI. You can do it manually with standard docker commands (I have a cron task for this), but if you want to visualize things, Dockge won’t do that (yet?).
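That cron task can be as simple as something like this (the schedule and retention window are arbitrary choices):

```sh
# Example crontab entry: every Sunday at 04:00, remove unused images
# older than a week (168h). `docker system prune` also covers networks etc.
0 4 * * 0  docker image prune -af --filter "until=168h"
```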