Now that Boltron is out and the team is working hard on the Fedora 27 modular server, we can start talking more about what it all means. One aspect our team is interested in is using modules in container images. If the world of Modularity is still new to you, don’t worry, we have you covered! I recommend reading the Boltron announcement and diving deep into the Fedora Modularity website. As for containers, they remain great at packaging an application with its dependencies and a basic runtime environment in a single “box” that is easy to operate.
You might be wondering why anyone would include modules inside container images. There are some very practical reasons that make modules a good fit for containers.
1. You know what you are getting.
With a traditional distribution, it’s hard to predict the version of software being installed. Let’s try an example. Do you know what version of nodejs you’ll get when you install it with
$ dnf install nodejs
So how will modules help here? Let’s proceed to the next point.
2. You can pick the version you want.
Modules have a concept of a stream. Streams are defined by packagers, who clearly state the compatibility status of each stream. Some may be strict, requiring all updates within the stream to be API- and ABI-compatible; others may just provide the latest upstream version with no guarantees. The strict streams are usually tied to upstream major releases. When installing a module, you can select a stream to be installed. In the example below, I chose stream 8 for nodejs, which is tied to upstream version 8.
$ dnf install @nodejs:8
======================================================
 Package        Arch       Version
======================================================
Installing group packages:
 nodejs         x86_64     1:8.0.0-1.module_42d8f2a0
Since the stream guarantees compatibility, updating the nodejs package within that stream ensures that you will always get version 8.
This also solves one of the problems we have when building container images in Fedora. There is a guideline that you should not set the version label, because no one knows which version of the package is going to be installed; we concluded it’s better to omit the label than to set it wrong. With modules, we select a precise stream and can be sure we get the version we picked.
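As a sketch of how this could look in an image build (the base image name, label key, and Dockerfile lines are illustrative assumptions, not official Fedora conventions):

```shell
# Illustrative sketch: because the stream pins the major version,
# an image build can install the module and set the version label
# with confidence. In a Dockerfile this would roughly be:
#
#   FROM registry.fedoraproject.org/fedora:27
#   LABEL version="8"
#   RUN dnf install -y @nodejs:8 && dnf clean all

dnf install -y @nodejs:8   # stream 8 guarantees a NodeJS 8.x package
```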
3. Compatibility under control.
Let’s get back to the first point.
What if your software stack works only with NodeJS 6, but the plain dnf install nodejs command installed version 8? That would be a problem, right?
With streams, you can select one that guarantees you will always get API/ABI-compatible packages within that stream. If you need stability, pick a stream with strict compatibility guarantees: no surprises when installing packages anymore. So predictable, such happiness!
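To make the choice explicit, you can name the stream your stack was tested against instead of relying on the distribution default (Boltron-era dnf syntax; the stream names below are assumptions based on NodeJS upstream releases):

```shell
# Pin to the stream your stack depends on.
# The 6 stream stays API/ABI compatible within the 6.x series.
dnf install @nodejs:6

# Moving to the 8 stream later is a deliberate decision,
# not an accident of whatever the distro ships today:
# dnf install @nodejs:8
```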
4. A module may come optimized for the container use case.
Back in the day we defined an install profile named “container”. An install profile is a set of packages meant to be installed together for a particular use case; the “container” profile lists the RPM packages meant to be installed into container images.
So in dnf, all you need to do is just:
$ dnf install @nodejs:8/container
and you should get the right packages. No need to spend time figuring out what they should be. And of course, you can always pick just the ones you really need.
5. The installation is simple and your muscle memory works.
Modules don’t need a new package manager. You can easily install them using dnf. Optionally, for a better experience, you can use the Modularity support that was added to dnf.
6. In the end, you are getting ordinary RPMs.
Modularity is not a new packaging technology. When you install modules, you are in reality getting RPMs. So don’t fear that you need to learn a new packaging format.
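You can verify this yourself after installing a module: the usual RPM tooling sees perfectly ordinary packages (the commands below are a sketch; exact output will vary):

```shell
# Install a module stream, then inspect it with plain rpm commands.
dnf install -y @nodejs:8

# The package database contains a normal RPM...
rpm -q nodejs

# ...with the usual metadata, file list, and scriptlets.
rpm -qi nodejs
rpm -ql nodejs | head
```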
7. Automated rebuilds.
This is my favorite one. Once the Factory 2 team reaches their goals, we’ll have a complete build pipeline in place. This means that to build an updated container image, all you need to do is update an RPM spec file, commit the changes, and push them to Fedora dist-git. You heard me.
The pipeline should then trigger a build of the RPM, which should trigger a build of the associated modules, which should trigger a build of the container images that use those modules. On top of that, every step will be gated by CI.
We’re not quite there yet, but we’re right on track.
With this kind of automation, rebuilding container images to fix CVEs should be much easier. The benefit for users is getting updated container images within hours.
8. Rebuilds that work.
Imagine this: there is a new CVE with a fancy name. Fedora rebuilds all container images to pick up the fix from a base image, using the automation described in the previous point. And boom! A bunch of packages were updated, which ended up breaking the functionality of particular containers. That’s not what we wanted, right?
This scenario should not happen with modules. As I explained when talking about streams, when you select a stream with strict compatibility guarantees, you should keep getting functional, compatible packages.
9. The simplicity.
If you ever played with Software Collections, or even tried to create a container image based on an existing collection, you might have gotten to a point where it was too complicated:
- RPMs installed into a separate filesystem tree
- the need to explicitly enable the collection in every session
- and, for sure, a bunch of hacks
Our goal is to make those pain points history. Why? Modules are installed directly into your root filesystem, you don’t need dedicated tooling to use them, and… no hacks!
Oh, but your requirement is to have three database versions installed in parallel? No problem. Just get a container image for every version and you are good to go. Modularity doesn’t provide a solution for parallel installation; it provides multiple versions of a package, and containers take care of running them side by side.
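A sketch of that parallel-versions scenario, with hypothetical image names and tags (these are assumptions for illustration, not actual Fedora registry images):

```shell
# Hypothetical images: each container carries exactly one module stream,
# so three database versions can run side by side on one host.
docker run -d --name db-55  -p 13306:3306 example/mariadb:5.5
docker run -d --name db-100 -p 23306:3306 example/mariadb:10.0
docker run -d --name db-101 -p 33306:3306 example/mariadb:10.1
```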
Are you as hyped for modules as I am?