Containers are a popular way to distribute and run software on Linux. One of the tools included in Fedora Linux to work with containers is the Pod Manager tool, also known as Podman. This article describes how to use the podman Linux System Role with Ansible to automate container management.
With Podman, you can quickly and easily download container images and run containers. For more information on Podman, check out the Getting Started section on the podman.io site.
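For example, pulling an image and running a container is a single command. This minimal illustration uses the Fedora image from the Fedora registry:

$ podman run --rm registry.fedoraproject.org/fedora echo "Hello from a container"
Hello from a container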
While Podman is very easy to use, many people are interested in automating Podman for a variety of reasons. For example, maybe you have multiple Fedora Linux systems that you would like to deploy a container workload across, or perhaps you’re a developer and would like to set up an automated process to deploy containers on your local workstation for testing. Whether you are working with containers on a single system, or need to manage containers across a number of systems, automation can be critical to being efficient and saving time.
Overview of Linux System Roles
Linux System Roles are a set of Ansible roles/collections that can help automate the configuration and management of several aspects of Fedora Linux, CentOS Stream, RHEL, and RHEL derivatives. Linux System Roles is packaged in Fedora as an RPM (linux-system-roles) and is also available on Ansible Galaxy. For more information on Linux System Roles, and to see a list of included roles, refer to the Linux System Roles project page.
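If you prefer Ansible Galaxy over the RPM, the collection can be installed with ansible-galaxy (the collection name matches the fully qualified role name used in the playbook later in this article):

$ ansible-galaxy collection install fedora.linux_system_roles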
Linux System Roles recently added a new podman role for automating the management of Podman containers. One of Podman’s unique features is that it is daemonless, so the podman role sets the desired configuration directly on each host, and can configure settings in containers.conf, containers-registries.conf, containers-storage.conf, and containers-policy.json.
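For example, the role accepts dictionary variables that map onto those configuration files. This is only a minimal sketch; the variable names below follow the role’s README, so verify them against the version you have installed:

vars:
  # written to containers.conf on the managed hosts
  podman_containers_conf:
    containers:
      log_driver: journald
  # written to containers-registries.conf on the managed hosts
  podman_registries_conf:
    unqualified-search-registries:
      - registry.fedoraproject.org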
Podman systemd integration and Kubernetes YAML support
The podman system role utilizes the systemd integration with Kubernetes YAML introduced in Podman version 4.2. Podman supports the ability to run containers based on Kubernetes YAML, which can make it easier to transition between Podman and Kubernetes. Podman 4.2 introduced a new podman-kube@.service which uses systemd to manage containers defined in Kubernetes YAML. You’ll see an example of how the podman system role utilizes this functionality below.
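Outside of the role, the same mechanism can be driven by hand. podman-kube@.service is a systemd template whose instance name is the systemd-escaped path of the Kubernetes YAML file, so starting a workload manually looks like this (the file path here is illustrative):

$ systemctl --user start podman-kube@$(systemd-escape /home/ansible/my-workload.yml).service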
Demo environment overview
In my environment I have four systems running Fedora Linux. The fedora-controlnode.example.com system will be the Ansible control node — this is where I’ll install Ansible and Linux System Roles. The other three systems, fedora-node1.example.com, fedora-node2.example.com, and fedora-node3.example.com, are the systems that I would like to deploy container workloads onto.
On these three systems, I would like to deploy a Nextcloud container. I would also like to deploy a web server container on these systems and run this as a non-privileged user (also referred to as a rootless container). I’ll use the httpd-24 container image that is a Red Hat Universal Base Image (UBI).
Setting up the control node system
Starting on the fedora-controlnode.example.com system, I’ll need to install the linux-system-roles and ansible packages:
[ansible@fedora-controlnode ~]$ sudo dnf install linux-system-roles ansible
I’ll also need to configure SSH keys and the sudo configuration so that a user on the fedora-controlnode.example.com host can authenticate and escalate to root privileges on each of the three managed nodes. In this example, I am using an account named ansible.
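One way to set this up, assuming the ansible account already exists on each managed node, is to generate a key pair on the control node, copy it to each node, and grant the account passwordless sudo (the sudoers file name here is my choice):

[ansible@fedora-controlnode ~]$ ssh-keygen -t ed25519
[ansible@fedora-controlnode ~]$ for server in fedora-node1.example.com fedora-node2.example.com fedora-node3.example.com; do ssh-copy-id ansible@${server}; done

# then, on each managed node:
[ansible@fedora-node1 ~]$ echo 'ansible ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/ansible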
Defining the Kubernetes YAML for the Nextcloud container
I’ll create a Kubernetes YAML file named nextcloud.yml with the following content that defines how I want the Nextcloud container configured:
apiVersion: v1
kind: Pod
metadata:
  name: nextcloud
spec:
  containers:
    - name: nextcloud
      image: docker.io/library/nextcloud
      ports:
        - containerPort: 80
          hostPort: 8000
      volumeMounts:
        - mountPath: /var/www/html:Z
          name: nextcloud-html
  volumes:
    - name: nextcloud-html
      hostPath:
        path: /nextcloud-html
The key parts of this YAML specify:
- the name of the container,
- the URL for the container image,
- that the container’s port 80 will be published on the host as port 8000,
- that the container’s /var/www/html directory should be a volume mount backed by the /nextcloud-html directory on the host.
Defining the Kubernetes YAML for the web server
I’d also like to deploy a container running a web server, so I’ll define the following Kubernetes YAML file for it, named ubi8-httpd.yml:
apiVersion: v1
kind: Pod
metadata:
  name: ubi8-httpd
spec:
  containers:
    - name: ubi8-httpd
      image: registry.access.redhat.com/ubi8/httpd-24
      ports:
        - containerPort: 8080
          hostPort: 8080
      volumeMounts:
        - mountPath: /var/www/html:Z
          name: ubi8-html
  volumes:
    - name: ubi8-html
      hostPath:
        path: ubi8-html
This is similar to the nextcloud.yml file:
- specifying the name of the container,
- the URL for the container image,
- that the container’s port 8080 should be published on the host as port 8080,
- that the container’s /var/www/html directory should be a volume mount backed by the ubi8-html directory on the host.
Note that later on we’ll configure this container to run as a non-privileged user, so this path will be relative to the user’s home directory.
Defining the Ansible inventory file
I need to define an Ansible inventory file that lists the host names of the systems I would like to deploy the containers on. I’ll create a simple inventory file, named inventory, with the list of my three managed nodes:
fedora-node1.example.com
fedora-node2.example.com
fedora-node3.example.com
Defining the Ansible playbook
The final file I need to create is the Ansible playbook itself, which I’ll name podman.yml, with the following content:
- name: Run the podman system role
  hosts: all
  vars:
    podman_firewall:
      - port: 8080/tcp
        state: enabled
      - port: 8000/tcp
        state: enabled
    podman_create_host_directories: true
    podman_host_directories:
      "ubi8-html":
        owner: ansible
        group: ansible
        mode: "0755"
    podman_kube_specs:
      - state: started
        run_as_user: ansible
        run_as_group: ansible
        kube_file_src: ubi8-httpd.yml
      - state: started
        kube_file_src: nextcloud.yml
  roles:
    - fedora.linux_system_roles.podman

- name: Create index.html file
  hosts: all
  tasks:
    - ansible.builtin.copy:
        content: "Hello from {{ ansible_hostname }}"
        dest: /home/ansible/ubi8-html/index.html
        owner: ansible
        group: ansible
        mode: 0644
        serole: object_r
        setype: container_file_t
        seuser: system_u
This playbook contains two plays. The first, named Run the podman system role, defines variables that control the podman system role, which is called as part of the play. The variables defined are:
- podman_firewall: specifies that port 8080/tcp and 8000/tcp should be enabled. These ports are used by the ubi8-httpd and nextcloud containers, respectively.
- podman_create_host_directories: specifies that host directories defined in the Kubernetes YAML files will be created if they don’t exist.
- podman_host_directories: Within the ubi8-httpd.yml Kubernetes YAML file, I defined a ubi8-html volume. This variable specifies that the ubi8-html directory on the hosts will be created with the ansible owner and group, and with a 0755 mode. Note that the nextcloud-html volume, defined in the nextcloud.yml file, is not listed here, so the default ownership and permissions will be used when that directory is created on the hosts.
- podman_kube_specs: This lists the Kubernetes YAML files that the podman system role should manage. It refers to the two files explained previously, ubi8-httpd.yml and nextcloud.yml. Note that the ubi8-httpd.yml container is additionally specified to run as the ansible user and group.
The second play, Create index.html file, uses the ansible.builtin.copy module to deploy an index.html file to the /home/ansible/ubi8-html/ directory. This provides the web servers running in the ubi8-httpd containers with content to serve.
Running the playbook
The next step is to run the playbook from the fedora-controlnode.example.com host with the following command:
[ansible@fedora-controlnode ~]$ ansible-playbook -i inventory -b podman.yml
I’ll verify that the playbook completes successfully with no failed tasks in the play recap output.
At this point, the nextcloud and ubi8-httpd containers should be deployed on each of the three managed nodes.
Validating the Nextcloud containers
Now, I’ll validate the successful deployment of the nextcloud containers on the three managed nodes. I can validate that Nextcloud is accessible by connecting to each host on port 8000 using a web browser, which shows the Nextcloud configuration screen on each host.
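A quick command-line alternative to the browser check is a curl probe against each node; this loop is a sketch using the hostnames from my inventory, and any HTTP status (Nextcloud typically answers with its setup page or a redirect) indicates the container is responding:

[ansible@fedora-controlnode ~]$ for server in fedora-node1.example.com fedora-node2.example.com fedora-node3.example.com; do curl -s -o /dev/null -w "${server}: %{http_code}\n" http://${server}:8000; done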
I’ll further investigate the fedora-node1.example.com host by connecting to it over SSH and using sudo to access a root shell:
[ansible@fedora-controlnode ~]$ ssh fedora-node1.example.com
[ansible@fedora-node1 ~]$ sudo su -
[root@fedora-node1 ~]#
I’ll run podman ps to validate that the nextcloud container is running:
[root@fedora-node1 ~]# podman ps
CONTAINER ID  IMAGE                                     COMMAND               CREATED        STATUS            PORTS                 NAMES
7b6b131a652d  localhost/podman-pause:4.2.1-1662580699                         4 minutes ago  Up 4 minutes ago                        0aa0edcf4b08-service
71a2a1a48232  localhost/podman-pause:4.2.1-1662580699                         4 minutes ago  Up 4 minutes ago  0.0.0.0:8000->80/tcp  8b226e4ad5c1-infra
c307a07c7cae  docker.io/library/nextcloud:latest        apache2-foregroun...  4 minutes ago  Up 4 minutes ago  0.0.0.0:8000->80/tcp  nextcloud-nextcloud
I’ll also validate that the /nextcloud-html directory on the host has been populated with content from the container:
[root@fedora-node1 ~]# ls -al /nextcloud-html/
total 112
drwxr-xr-x. 1 33   tape    420 Nov  7 13:16 .
dr-xr-xr-x. 1 root root    186 Nov  7 13:12 ..
drwxr-xr-x. 1 33   tape    880 Nov  7 13:16 3rdparty
drwxr-xr-x. 1 33   tape   1182 Nov  7 13:16 apps
-rw-r--r--. 1 33   tape  19327 Nov  7 13:16 AUTHORS
drwxr-xr-x. 1 33   tape    408 Nov  7 13:17 config
-rw-r--r--. 1 33   tape   4095 Nov  7 13:16 console.php
-rw-r--r--. 1 33   tape  34520 Nov  7 13:16 COPYING
drwxr-xr-x. 1 33   tape    440 Nov  7 13:16 core
...
I can also see that a systemd unit has been created for this container:
[root@fedora-node1 ~]# systemctl list-units | grep nextcloud
  podman-kube@-etc-containers-ansible\x2dkubernetes.d-nextcloud.yml.service   loaded active running   A template for running K8s workloads via podman-play-kube

[root@fedora-node1 ~]# systemctl status podman-kube@-etc-containers-ansible\\x2dkubernetes.d-nextcloud.yml.service
● podman-kube@-etc-containers-ansible\x2dkubernetes.d-nextcloud.yml.service - A template for running K8s workloads via podman-play-kube
     Loaded: loaded (/usr/lib/systemd/system/podman-kube@.service; enabled; vendor preset: disabled)
     Active: active (running) since Mon 2022-11-07 13:16:52 MST; 7min ago
       Docs: man:podman-play-kube(1)
   Main PID: 7601 (conmon)
      Tasks: 3 (limit: 4655)
     Memory: 31.1M
        CPU: 2.562s
...
Note that the name of the service is quite long because it refers to the name of the Kubernetes YAML file, /etc/containers/ansible-kubernetes.d/nextcloud.yml. This file was deployed by the podman system role. If I display the contents of the file, it matches the contents of the nextcloud.yml Kubernetes YAML file I created on the control node host.
[root@fedora-node1 ~]# cat /etc/containers/ansible-kubernetes.d/nextcloud.yml
apiVersion: v1
kind: Pod
metadata:
  name: nextcloud
spec:
  containers:
    - image: docker.io/library/nextcloud
      name: nextcloud
      ports:
        - containerPort: 80
          hostPort: 8000
      volumeMounts:
        - mountPath: /var/www/html:Z
          name: nextcloud-html
  volumes:
    - hostPath:
        path: /nextcloud-html
      name: nextcloud-html
Validating the ubi8-httpd containers
I’ll also validate that the ubi8-httpd container, which was deployed to run as the ansible user and group, is working properly. Back on the fedora-controlnode.example.com host, I’ll validate that I can access the web server on port 8080 on each of the three managed nodes:
[ansible@fedora-controlnode ~]$ for server in fedora-node1.example.com fedora-node2.example.com fedora-node3.example.com; do curl ${server}:8080; echo; done
Hello from fedora-node1
Hello from fedora-node2
Hello from fedora-node3
I’ll also connect to one of the managed nodes as the ansible user to further investigate:
[ansible@fedora-controlnode ~]$ ssh fedora-node1.example.com
[ansible@fedora-node1 ~]$ whoami
ansible
I’ll run podman ps and validate that the ubi8-httpd container is running:
[ansible@fedora-node1 ~]$ podman ps
CONTAINER ID  IMAGE                                            COMMAND               CREATED         STATUS             PORTS                   NAMES
7b42efd7c9c0  localhost/podman-pause:4.2.1-1662580699                                20 minutes ago  Up 20 minutes ago                          1b46d9874ed0-service
f62b9a2ef9b8  localhost/podman-pause:4.2.1-1662580699                                20 minutes ago  Up 20 minutes ago  0.0.0.0:8080->8080/tcp  0938dc63acfd-infra
4b3a64783aeb  registry.access.redhat.com/ubi8/httpd-24:latest  /usr/bin/run-http...  20 minutes ago  Up 20 minutes ago  0.0.0.0:8080->8080/tcp  ubi8-httpd-ubi8-httpd
This container was deployed as a non-privileged user (the ansible user) so there is a systemd user instance running as the ansible user. I’ll need to specify the --user option on the systemctl command when validating that the systemd unit was created and is running:
[ansible@fedora-node1 ~]$ systemctl --user list-units | grep ubi8
  podman-kube@-home-ansible-.config-containers-ansible\x2dkubernetes.d-ubi8\x2dhttpd.yml.service   loaded active running   A template for running K8s workloads via podman-play-kube

[ansible@fedora-node1 ~]$ systemctl --user status podman-kube@-home-ansible-.config-containers-ansible\\x2dkubernetes.d-ubi8\\x2dhttpd.yml.service
● podman-kube@-home-ansible-.config-containers-ansible\x2dkubernetes.d-ubi8\x2dhttpd.yml.service - A template for running K8s workloads via podman-play-kube
     Loaded: loaded (/usr/lib/systemd/user/podman-kube@.service; enabled; vendor preset: disabled)
     Active: active (running) since Mon 2022-11-07 13:12:31 MST; 24min ago
       Docs: man:podman-play-kube(1)
   Main PID: 5260 (conmon)
      Tasks: 17 (limit: 4655)
     Memory: 9.3M
        CPU: 1.245s
...
As previously mentioned, the systemd unit name is so long because it contains the path to the Kubernetes YAML file, which in this case is /home/ansible/.config/containers/ansible-kubernetes.d/ubi8-httpd.yml. This file was deployed by the podman system role and contains the contents of the ubi8-httpd.yml file previously configured on the fedora-controlnode.example.com host.
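The mapping from file path to unit instance name comes from systemd’s unit name escaping rules, which systemd-escape demonstrates directly:

[ansible@fedora-node1 ~]$ systemd-escape /home/ansible/.config/containers/ansible-kubernetes.d/ubi8-httpd.yml
-home-ansible-.config-containers-ansible\x2dkubernetes.d-ubi8\x2dhttpd.yml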
Validating containers automatically start at boot
I’ll reboot the three managed nodes to validate that the containers automatically start up at boot.
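Rather than logging in to each node, I can reboot all three with an Ansible ad-hoc command using the ansible.builtin.reboot module:

[ansible@fedora-controlnode ~]$ ansible all -i inventory -b -m ansible.builtin.reboot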
After the reboot, the nextcloud containers are still accessible on each host on port 8000, and the ubi8-httpd containers are accessible on each host at port 8080.
The systemd units for the nextcloud containers and ubi8-httpd containers are both enabled to start at boot. However, note that the ubi8-httpd container is running as a non-privileged user (the ansible user), so the podman system role has automatically enabled user lingering for the ansible user. This setting allows a systemd user instance to start at boot and keep running after the user logs out, so the container starts automatically at boot.
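Lingering can be confirmed with loginctl on any of the managed nodes:

[ansible@fedora-node1 ~]$ loginctl show-user ansible --property=Linger
Linger=yes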
Conclusion
The podman Linux System Role can help automate the deployment of Podman containers across your Fedora Linux environment. You can also combine the podman system role with the other Linux System Roles in the Fedora linux-system-roles package to automate even more. For example, you could write a playbook that utilizes the storage Linux System Role to configure filesystems across your environment, and then use the podman system role to deploy containers that utilize those filesystems.
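As a rough sketch of that idea, a combined playbook might look like the following. The device name, volume size, and mount point are illustrative, and the storage role variables should be checked against that role’s documentation:

- name: Configure storage, then deploy containers
  hosts: all
  vars:
    # storage role: carve a volume out of /dev/sdb and mount it where the
    # nextcloud pod expects its host directory (illustrative values)
    storage_pools:
      - name: app_vg
        disks:
          - sdb
        volumes:
          - name: nextcloud_data
            size: 10g
            fs_type: xfs
            mount_point: /nextcloud-html
    # podman role: the same spec used earlier in this article
    podman_kube_specs:
      - state: started
        kube_file_src: nextcloud.yml
  roles:
    - fedora.linux_system_roles.storage
    - fedora.linux_system_roles.podman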