Hi! I'm new to self hosting. Currently I am running a Jellyfin server on an old laptop. I am very curious to host other things in the future, like Immich or other services. I see a lot of mention of a program called Docker.
I've searched for this on the internet, but I am still not very clear on what it does.
Could someone explain this to me like I'm stupid? What does it do, and why would I need it?
Also, what are other services that might be interesting to self host in the future?
Many thanks!
EDIT: Wow! Thanks for all the detailed and super quick replies! I’ve been reading all the comments here and am concluding that (even though I am currently running only one service) it might be interesting to start using Docker to run all (future) services separately on the server!
A program isn’t just a program: in order to work properly, the context in which it runs (system libraries, configuration files, other programs it relies on, such as databases or web servers) needs to be correct. Getting that stuff figured out well enough that end users can easily get it working on random different Linux distributions with arbitrary other software installed is hard, so developers eventually resorted to getting it working on their one (virtual) machine and then just (virtually) shipping that whole machine.
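To make that concrete, here’s a sketch of what a Dockerfile (the recipe for that shipped machine) tends to look like. The app, packages, and paths are made-up placeholders, not a real project:

```
# Hypothetical Dockerfile for some small web app.
FROM debian:bookworm-slim

# Install the exact libraries the developer tested against, instead of
# relying on whatever the user's distro happens to ship.
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-flask \
    && rm -rf /var/lib/apt/lists/*

# The app and its config land in a fixed, known location inside the image.
COPY app/ /opt/app/

EXPOSE 8080
CMD ["python3", "/opt/app/main.py"]
```

Everyone who runs this image gets the same Debian base, the same library versions, and the same paths, no matter what their host looks like.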
So instead of having problems getting the fucking program to run, you have problems getting Docker to properly build/run when you need it to.
At work, I have one program that fails to build an image because of a third-party package whose maintainers forgot to update their PGP signature; one that builds and runs, but for some reason gives a 404 error when I try to access it on localhost; and one that whoever the fuck made it clearly never ran, because the Dockerfile was missing some 7 packages in the apt install line.

Building from source is always going to come with complications. That’s why most people don’t do it. A docker compose file that ‘just’ downloads the stable release from a repo and starts running is dramatically simpler than cross-referencing all your services to make sure there are no dependency conflicts.
There’s an added layer of complexity under the hood to simplify the common use case.
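For example, a Jellyfin setup like yours could look roughly like this as a compose file (the host paths on the left of each colon are assumptions; point them wherever your config and media actually live):

```
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"   # Jellyfin's default web UI port
    volumes:
      - /srv/jellyfin/config:/config   # settings survive container updates
      - /srv/jellyfin/cache:/cache
      - /srv/media:/media              # your existing media library
    restart: unless-stopped
```

Then docker compose up -d starts it, and updating is just docker compose pull followed by docker compose up -d again.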
Docker is not a virtual machine; it’s a fancy wrapper around chroot.
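You can see that for yourself if Docker is installed: the “guest” reports the host’s kernel, because there is no guest kernel.

```
# These print the same kernel version: the container is just an isolated
# process on the host, not a separate machine with its own kernel.
uname -r
docker run --rm alpine uname -r
```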
I’m aware of that, but OP requested “explain like I’m stupid” so I omitted that detail.
Isn’t all of this a complete waste of computer resources?
I’ve never used Docker but I want to set up an Immich server, and Docker is the only official way to install it. And I’m a bit afraid.
Edit: thanks for downvoting an honest question. Wtf.
The main “wasted” resources here are storage space and maybe a bit of RAM; the actual runtime overhead is very limited. It turns out storage and RAM are some of the cheapest resources on a machine, and you probably won’t notice the extra storage or RAM usage.
VMs are heavy, Docker containers are very light. You get most of the benefits of a VM with containers, without paying as high of a resource cost.
If it were actual VMs, it would be a huge waste of resources. That’s really the purpose of containers. It’s functionally similar to running a separate VM specific to every application, except you’re not actually virtualizing an entire system like you are with a VM. Containers are actually very lightweight. So much so, that if you have 10 apps that all require database backends, it’s common practice to just run 10 separate database containers.
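As a sketch of what that looks like (the app image names here are placeholders), each app simply declares its own database service:

```
services:
  app-one:
    image: example/app-one    # made-up image name
    depends_on:
      - db-one
  db-one:
    image: postgres:16
    volumes:
      - db-one-data:/var/lib/postgresql/data
  app-two:
    image: example/app-two    # made-up image name
    depends_on:
      - db-two
  db-two:
    image: postgres:16
    volumes:
      - db-two-data:/var/lib/postgresql/data
volumes:
  db-one-data:
  db-two-data:
```

Both Postgres containers run from the same image, so the image data is stored on disk only once; each container only adds its own database files.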
On the contrary. Containers rely on the premise of segregating binaries, config, and data, and since a container runs only one app, it ships a bare-minimum version of its environment. Most container systems also deduplicate the common required binaries, so containers are usually very small and efficient. A traditional system’s libraries can balloon to dozens of gigabytes, pieces of which are only used occasionally by different software, whereas a container can easily be made headless and barebones: cutting the fat and leaving only the most essential libraries, it fits on very tiny, underpowered hardware without losing functionality or performance.
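You can inspect that deduplication with a couple of stock Docker commands:

```
# List the layers that make up an image; layers shared between images
# are stored on disk only once.
docker history jellyfin/jellyfin

# Show how much of your image storage is shared between images vs. unique.
docker system df -v
```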
Don’t be afraid of it, it’s like Lego but for software.
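And for a compose-based app like Immich, the whole install is usually just a couple of commands. This is only the shape of it; get the actual compose file and .env from the official Immich docs:

```
mkdir immich && cd immich
# download docker-compose.yml and the example .env from the official docs,
# edit .env to point at your storage, then:
docker compose up -d
```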
No, because Docker is not actually a VM.
But why can I “just install a program” on my Windows machine or on my phone, and it is that easy?
You might notice that your Windows installation is like 30 gigabytes and there is a huge folder somewhere in the system path called WinSXS. Microsoft bends over backwards to provide you with basically all the versions of all the shared libs ever, resulting in a system that can run programs compiled decades ago just fine.
In Linux-land usually we just recompile all of the software from source. Sometimes it breaks because Glibc changed something. Or sometimes it breaks because (extremely rare) the kernel broke something. Linus considers breaking the userspace API one of the biggest no-nos in kernel development.
Even so, depending on what you’re doing you can have a really old binary run on your Linux computer if the conditions are right. Windows just makes that surface area of “conditions being right” much larger.
As for your phone, all the apps that get built and run for it must target some kind of specific API version (the amount of stuff you’re allowed to do is much more constrained). Android and iOS both basically provide compatibility for that stuff in a similar way that Windows does, but the story is much less chaotic than on Linux and Windows (and even macOS) where your phone app is not allowed to do that much, by comparison.
“In Linux-land usually we just recompile all of the software from source”
That’s just incorrect. Apart from 3 guys who have no better things to do, no one in “Linux-land” does that.
Please don’t call yourself stupid. The common internet slang for that is ELI5 or “explain [it] like I’m 5 [years old]”.
I’ll also try to explain it:
Docker is a way to run a program on your machine, but in a way that the developer of the program can control.
It’s called containerization, and the developer can make a package (or container) with an operating system and all the software they need and ship that directly to you. You then need software like Docker (or Podman, etc.) to run this container.
Another advantage of containerization is that all changes stay inside the container except for directories you explicitly want to add to the container (called volumes).
This way the software can’t destroy your system and you can’t accidentally destroy the software inside the container. It’s basically like a tiny virtual machine running locally.
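In practice a volume is just a flag on docker run. A sketch, with assumed host paths:

```
# Only the two mounted host directories are shared with the container;
# /srv/media is mounted read-only (:ro) so the app can't modify it.
docker run -d \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/media:ro \
  -p 8096:8096 \
  jellyfin/jellyfin
```

Delete the container, and your config and media are still sitting in those host directories, untouched.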
I know it’s ELI5, but this is a common misconception and will lead you astray. They do not have the same level of isolation, and they have very different purposes.
For example, containers are disposable cattle. You don’t backup containers. You backup volumes and configuration, but not containers.
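A common pattern for that (the volume name “mydata” is a placeholder): back up the volume with a short-lived helper container, and let the app container itself stay disposable.

```
# Tar up a named volume into the current directory using a throwaway
# Alpine container; --rm deletes the helper once it's done.
docker run --rm \
  -v mydata:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/mydata-backup.tgz -C /data .
```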
Containers share the kernel with the host, so your container needs to be compatible with the host (though most dependencies are packaged with images).
For self hosting maybe the difference doesn’t matter much, but there is a difference.
A million times this. A major difference between the way most VMs are run and most containers are run is:
VMs write to their own internal disk; containers should be immutable and unable to write to their internal filesystem.
You can have 100 identical containers running and, if you are using your filesystem correctly, only one copy of that container image is on your hard drive. You can have two nearly identical containers running and then only a small amount of the second container image (another layer) is taking up extra disk space.
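Docker can even enforce the immutability part for you (the image name is a placeholder):

```
# The root filesystem becomes read-only; the container can only write to
# the volume we mount and the tmpfs we explicitly grant it.
docker run --rm --read-only \
  --tmpfs /tmp \
  -v app-data:/data \
  example/some-app
```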
Similarly, containers and VMs use memory and CPU allocations differently, and they run with extremely different security and networking scopes, but that requires even more explanation and is less relevant to self hosting unless you are trying to learn this to eventually get a job in it.