I think we all have heard that Docker is the hottest thing since sliced bread (or maybe even “since bread was wrapped” 🙂 ). Docker allows you to quickly and easily spin up a new “something” by describing it in a Dockerfile, including dependencies, configuration, etc., and then building it as a binary image which is fairly portable. You can also base your application on other Docker images, which lets you “just” worry about your part of the application stack. Once you have the Dockerfiles and the images, you can share them with your friends (or the world 🙂 ). There are also plenty of potential challenges (some of which I see) with leveraging the Docker model of distribution, like bundling and a lack of understanding of the application stack.
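To make that concrete, here is a minimal sketch of what a Dockerfile looks like (the package and file names are just illustrative): it bases on an existing Fedora image, declares a dependency, and says how to start the service.

```dockerfile
# Build on someone else's image so we only worry about our layer of the stack
FROM fedora:latest

# Dependencies are part of the image description
RUN yum install -y httpd && yum clean all

# Bake our content/configuration into the image
COPY index.html /var/www/html/index.html

# Document the port and say how the container starts
EXPOSE 80
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
```

With that in place, ‘docker build -t my-httpd .’ produces the portable image and ‘docker run -d -p 8080:80 my-httpd’ spins it up – and the image itself is the thing you share.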
However, this article isn’t really about Docker per se, and there are lots of good “getting started” guides out there. I really wanted to call attention to a particular project, Fedora-Dockerfiles (‘yum install fedora-dockerfiles’), which I have found to be super handy. As someone who plays around with lots of different tech on a regular basis, I find Docker images really handy for just “trying something out.” What I also find useful is lots and lots of example Dockerfiles to base my own work on.
The project was originally started (I believe) by Scott Collier, who took the stuff he was building and created a “home” for it in Fedora. There have been a bunch of other contributors since. There is also a convenient “needs_work” directory 🙂 for stuff in progress, which is where all my stuff currently is :/ .
So, in closing, “Thanks, Scott” for getting this started and “thanks to all y’all” who have been adding new Dockerfiles. I am sure the folks would love feedback, so if something is missing, feel free to suggest it or contribute one yourself.
I’ve done the Docker tutorial, but I still don’t quite get it. That GitHub repo has Firefox and systemd. Why would I want a Docker container for those? Mysqld, httpd, etc. – those I understand… kind of.
In which circumstances should I be using VMs – one each for the MySQL server and the HTTP server – and in which circumstances containers? Because I know the nice thing about VMs is that updating your MySQL VM doesn’t disrupt your HTTP VM.
OK, you ask some VERY hard questions, so I think you may “get it” more than you think :).
Why Firefox? Well, there is a lot of movement around increasing the sandboxing of desktop applications from each other (e.g. Wayland does some of this, GNOME is considering it). Check out Christian’s post.
Why systemd? Well, this is really not so much about systemd per se as about “how can you get some of the general system services – e.g. auto-start/stop, logging, etc. – into a container?” and marry them to the host OS. One would also consider something like systemd if you wanted to run multiple services in a container, which is normally not advised but may be necessary for some use cases. Systemd just happens to be that provider in Fedora.
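As a rough sketch of that pattern (details vary, and the Fedora-Dockerfiles systemd example is the real reference), a Dockerfile might run systemd as PID 1 so it can supervise a service inside the container:

```dockerfile
# Run systemd as PID 1 so it can provide service management,
# auto-start/stop, journald logging, etc. inside the container
FROM fedora:latest
RUN yum install -y systemd httpd && yum clean all

# Ask systemd to manage this service for us
RUN systemctl enable httpd.service

# systemd expects to see the host's cgroup hierarchy
VOLUME ["/sys/fs/cgroup"]

EXPOSE 80
CMD ["/usr/sbin/init"]
```

You would then typically run it with the host’s cgroups mounted read-only, e.g. ‘docker run -d -v /sys/fs/cgroup:/sys/fs/cgroup:ro’ plus your usual options – again, check the repo for the current incantation.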
VMs vs. containers? Well, this is also very tough and probably not perfectly answered anywhere, though I think a search would reveal a bunch of good articles on the subject. My advice: if you need “application separation” and that is about it, containers are probably a great choice; if you need the more complex aspects of “different machines,” e.g. different kernels, you need to use VMs. Containers also really encourage an MTTR-style of application design (optimize for fast recovery rather than uptime of any one instance), so for existing apps the architecture may not be appropriate. This is probably a ludicrously simplistic explanation, but hopefully it helps. I would also recommend reading the Dr. Dobb’s article I wrote and mentioned above for more.
OK, I read your Dr. Dobb’s article. So if I understand correctly, you’re saying that in many cases it isn’t either/or; it’s complementary. Right now in my house I have a PogoPlug running DNS, Samba/NFS, and mysqld. At the time, only Arch ran on those, but the darn thing falls over every time I do an upgrade. So I’ve been daydreaming of building a computer that would house a half dozen VMs to run each of those (plus some more things) with CentOS as the host OS. But you’re saying that this would be tons of VM overhead (CPU load and RAM) for running lots of CentOS or Fedora instances for no reason. What I should do instead is run a bunch of Docker containers with each of those programs inside.
At that point the only issues are: a) updating the CentOS host (or maybe there are no VMs in this case, just the one CentOS install) with a reboot for kernel/security fixes means we lose all services at once rather than one at a time; b) you mentioned that updating the containers is hard/impossible? So maybe I’d want a host CentOS with two VMs – one for updating/building new containers and one for running the updated ones?
I would lean towards the latter of your choices. Oh, and if you get that resilient for your *home* network, would you mind coming over and setting up mine too?!?! 🙂
You know, I enjoy it so much – if I could make a job of providing these services to computer guys who just don’t have the time, it’d be the perfect job.