Use the DNF local plugin to speed up your home lab

Photo by Sven Hornburg on Unsplash


If you are a Fedora Linux enthusiast or a developer working with multiple instances of Fedora Linux then you might benefit from the DNF local plugin. An example of someone who would benefit from the DNF local plugin would be an enthusiast who is running a cluster of Raspberry Pis. Another example would be someone running several virtual machines managed by Vagrant. The DNF local plugin reduces the time required for DNF transactions. It accomplishes this by transparently creating and managing a local RPM repository. Because accessing files on a local file system is significantly faster than downloading them repeatedly, multiple Fedora Linux machines will see a significant performance improvement when running dnf with the DNF local plugin enabled.

I recently started using this plugin after reading a tip from Glenn Johnson (aka glennzo) in a 2018 post. While working on a Raspberry Pi based Kubernetes cluster running Fedora Linux and also on several container-based services, I winced with every DNF update on each Pi or each container that downloaded a duplicate set of rpms across my expensive internet connection. In order to improve this situation, I searched for a solution that would cache rpms for local reuse. I wanted something that would not require any changes to repository configuration files on every machine. I also wanted it to continue to use the network of Fedora Linux mirrors. I didn’t want to use a single mirror for all updates.

Prior art

An internet search yields two common solutions that eliminate or reduce repeat downloads of the same RPM set – create a private Fedora Linux mirror or set up a caching proxy.

Fedora provides guidance on setting up a private mirror. A mirror requires a lot of bandwidth and disk space and significant work to maintain. A full private mirror would be too expensive and it would be overkill for my purposes.

The most common solution I found online was to implement a caching proxy using Squid. I had two concerns with this type of solution. First, I would need to edit the repository definitions stored in /etc/yum.repos.d on each virtual and physical machine or container to use the same mirror. Second, I would need to use http rather than https connections, which would introduce a security risk.

After reading Glenn’s 2018 post on the DNF local plugin, I searched for additional information but could not find much of anything besides the sparse documentation for the plugin on the DNF documentation web site. This article is intended to raise awareness of this plugin.

About the DNF local plugin

The online documentation provides a succinct description of the plugin: “Automatically copy all downloaded packages to a repository on the local filesystem and generating repo metadata”. The magic happens when there are two or more Fedora Linux machines configured to use the plugin and to share the same local repository. These machines can be virtual machines or containers running on a host and all sharing the host filesystem, or separate physical hardware on a local area network sharing the file system using a network-based file system sharing technology. The plugin, once configured, handles everything else transparently. Continue to use dnf as before. dnf will check the plugin repository for rpms, then proceed to download from a mirror if not found. The plugin will then cache all rpms in the local repository regardless of their upstream source – an official Fedora Linux repository or a third-party RPM repository – and make them available for the next run of dnf.
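Conceptually, once configured, each machine behaves as if it had an extra repository definition like the following sketch. This is illustrative only – the plugin registers the repository internally under the name _dnf_local, the path assumes repodir is set to /srv/repodir as in the examples in this article, and the gpgcheck setting is my assumption:

```ini
[_dnf_local]
name=DNF local plugin repository
baseurl=file:///srv/repodir
enabled=1
gpgcheck=0
```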

Install and configure the DNF local plugin

Install the plugin using dnf. The createrepo_c package will be installed as a dependency; it is used, when needed, to create and update the local repository.

sudo dnf install python3-dnf-plugin-local

The plugin configuration file is stored at /etc/dnf/plugins/local.conf. An example copy of the file is provided below. The only change required is to set the repodir option. The repodir option defines where on the local filesystem the plugin will keep the RPM repository.

[main]
enabled = true
# Path to the local repository.
# repodir = /var/lib/dnf/plugins/local

[createrepo]
# Createrepo options. See man createrepo_c
# This option lets you disable the createrepo command. This could be useful
# for large repositories where metadata is periodically generated by cron
# for example. This also has the side effect of only copying the packages
# to the local repo directory.
enabled = true

# If you want to speedup createrepo with the --cachedir option. Eg.
# cachedir = /tmp/createrepo-local-plugin-cachedir

# quiet = true

# verbose = false

Change repodir to the filesystem directory where you want the RPM repository stored. For example, change repodir to /srv/repodir as shown below.

# Path to the local repository.
# repodir = /var/lib/dnf/plugins/local
repodir = /srv/repodir

Finally, create the directory if it does not already exist; otherwise dnf will display some errors when it first attempts to access the directory. The plugin will still create the directory, if necessary, despite those initial errors.

sudo mkdir -p /srv/repodir

Repeat this process on every machine, virtual machine, or container that will share the local repository. See the use cases below for more information. An alternative configuration using NFS (network file system) is also provided below.

How to use the DNF local plugin

After you have installed the plugin, you do not need to change how you use dnf. The plugin will cause a few additional steps to run transparently behind the scenes whenever dnf is called. After dnf determines which rpms to update or install, the plugin will try to retrieve them from the local repository before trying to download them from a mirror. After dnf has successfully completed the requested updates, the plugin will copy any rpms downloaded from a mirror to the local repository and then update the local repository’s metadata. The downloaded rpms will then be available in the local repository for the next dnf client.

There are two points to be aware of. First, benefits from the local repository only occur if multiple machines share the same architecture (for example, x86_64 or aarch64). Virtual machines and containers running on a host will usually share the same architecture as the host. But if there is only one aarch64 device and one x86_64 device there is little real benefit to a shared local repository unless one of the devices is constantly reset and updated which is common when developing with a virtual machine or container. Second, I have not explored how robust the local repository is to multiple dnf clients updating the repository metadata concurrently. I therefore run dnf from multiple machines serially rather than in parallel. This may not be a real concern but I want to be cautious.
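Since I run dnf on my machines serially, a small wrapper helps. The following is my own sketch, not from the article's repository: hostnames are hypothetical, and the real use assumes passwordless ssh and sudo. It simply visits one host at a time so only a single dnf client touches the shared repository metadata at once.

```shell
#!/bin/bash
# Run a per-host command serially, stopping at the first failure, so
# that only one dnf transaction updates the shared repo at a time.
serial_hosts() {
    local runner="$1"; shift
    local host
    for host in "$@"; do
        "$runner" "$host" || return 1
    done
}

# hypothetical real use: one dnf update at a time across the cluster
update_host() { ssh "$1" sudo dnf -y update; }
# serial_hosts update_host pi1.lan pi2.lan pi3.lan
```

The Ansible playbooks described below accomplish the same thing with less hand-rolled shell.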

The use cases outlined below assume that work is being done on Fedora Workstation. Other desktop environments can work as well but may take a little extra effort. I created a GitHub repository with examples to help with each use case. Click the Code button on the repository page to clone the repository or to download a zip file.

Use case 1: networked physical machines

The simplest use case is two or more Fedora Linux computers on the same network. Install the DNF local plugin on each Fedora Linux machine and configure the plugin to use a repository on a network-aware file system. There are many network-aware file systems to choose from; which one you use will probably be influenced by the devices already on your network.

For example, I have a small Synology Network Attached Storage device (NAS) on my home network. The web admin interface for the Synology makes it very easy to set up a NFS server and export a file system share to other devices on the network. NFS is a shared file system that is well supported on Fedora Linux. I created a share on my NAS named nfs-dnf and exported it to all the Fedora Linux machines on my network. For the sake of simplicity, I am omitting the details of the security settings. However, please keep in mind that security is always important even on your own local network. If you would like more information about NFS, the online Red Hat Enable Sysadmin magazine has an informative post that covers both client and server configurations on Red Hat Enterprise Linux. They translate well to Fedora Linux.

I configured the NFS client on each of my Fedora Linux machines using the steps shown below. In the examples below, quga.lan is the hostname of my NAS device.

Install the NFS client on each Fedora Linux machine.

$ sudo dnf install nfs-utils

Get the list of exports from the NFS server:

$ showmount -e quga.lan
Export list for quga.lan:
/volume1/nfs-dnf  pi*.lan

Create a local directory to be used as a mount point on the Fedora Linux client:

$ sudo mkdir -p /srv/repodir

Mount the remote file system on the local directory. See man mount for more information and options.

$ sudo mount -t nfs -o vers=4 quga.lan:/volume1/nfs-dnf /srv/repodir

The DNF local plugin will now work as long as the client remains up. If you want the NFS export to be automatically mounted when the client is rebooted, then you must edit /etc/fstab as demonstrated below. I recommend making a backup of /etc/fstab before editing it. You can substitute vi with nano or another editor of your choice.

$ sudo vi /etc/fstab

Append the following line at the bottom of /etc/fstab, then save and exit.

quga.lan:/volume1/nfs-dnf /srv/repodir nfs defaults,timeo=900,retrans=5,_netdev 0 0

Finally, notify systemd that it should rescan /etc/fstab by issuing the following command.

$ sudo systemctl daemon-reload
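If you would rather not keep the share mounted permanently, systemd can mount it on demand and unmount it when idle. Here is a sketch of an alternative /etc/fstab entry using systemd's automount options (the idle timeout value is my own choice):

```
quga.lan:/volume1/nfs-dnf /srv/repodir nfs noauto,x-systemd.automount,x-systemd.idle-timeout=5min,_netdev 0 0
```

As above, run sudo systemctl daemon-reload after editing /etc/fstab.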

NFS works across the network and, like all network traffic, may be blocked by firewalls on the client machines. Use firewall-cmd to allow NFS-related network traffic through each Fedora Linux machine’s firewall.

$ sudo firewall-cmd --permanent --zone=public --add-service=nfs
$ sudo firewall-cmd --reload

As you can imagine, replicating these steps correctly on multiple Fedora Linux machines can be challenging and tedious. Ansible automation solves this problem.

In the rpi-example directory of the github repository I’ve included an example Ansible playbook (configure.yaml) that installs and configures both the DNF plugin and the NFS client on all Fedora Linux machines on my network. There is also a playbook (update.yaml) that runs a DNF update across all devices. See this recent post in Fedora Magazine for more information about Ansible.

To use the provided Ansible examples, first update the inventory file (inventory) to include the list of Fedora Linux machines on your network that you want to manage. Next, install two Ansible roles in the roles subdirectory (or another suitable location).

$ ansible-galaxy install --roles-path ./roles -r requirements.yaml
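The inventory file mentioned above is just a plain list of the hosts Ansible should manage. A minimal sketch with hypothetical hostnames (replace them with your own machines):

```
# Fedora Linux machines that will share the local repository
pi1.lan
pi2.lan
pi3.lan
```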

Run the configure.yaml playbook to install and configure the plugin and NFS client on all hosts defined in the inventory file. The role that installs and configures the NFS client does so via /etc/fstab but also takes it a step further by creating an automount for the NFS share in systemd. The automount is configured to mount the share only when needed and then to automatically unmount. This saves network bandwidth and CPU cycles which can be important for low power devices like a Raspberry Pi. See the github repository for the role and for more information.

$ ansible-playbook -i inventory configure.yaml

Finally, Ansible can be configured to execute dnf update on all the systems serially by using the update.yaml playbook.

$ ansible-playbook -i inventory update.yaml

Ansible and other automation tools such as Puppet, Salt, or Chef can be big time savers when working with multiple virtual or physical machines that share many characteristics.

Use case 2: virtual machines running on the same host

Fedora Linux has excellent built-in support for virtual machines. The Fedora Project also provides Fedora Cloud base images for use as virtual machines. Vagrant is a tool for managing virtual machines. Fedora Magazine has instructions on how to set up and configure Vagrant. Add the following line to your .bashrc (or other comparable shell configuration file) to inform Vagrant to use libvirt automatically on your workstation instead of the default VirtualBox:

export VAGRANT_DEFAULT_PROVIDER=libvirt

In your project directory initialize Vagrant and the Fedora Cloud image (use 34-cloud-base for Fedora Linux 34 when available):

$ vagrant init fedora/33-cloud-base

This creates a Vagrant file in the project directory. Edit the Vagrant file to look like the example below. DNF will likely fail with the default memory settings for libvirt. So the example Vagrant file below provides additional memory to the virtual machine. The example below also shares the host /srv/repodir with the virtual machine. The shared directory will have the same path in the virtual machine – /srv/repodir. The Vagrant file can be downloaded from github.

# -*- mode: ruby -*-
# vi: set ft=ruby :

# define repo directory; same name on host and vm
REPO_DIR = "/srv/repodir"

Vagrant.configure("2") do |config|
  config.vm.box = "fedora/33-cloud-base"

  config.vm.provider :libvirt do |v|
    v.memory = 2048
  #  v.cpus = 2
  end

  # share the local repository with the vm at the same location
  config.vm.synced_folder REPO_DIR, REPO_DIR

  # ansible provisioner - commented out by default
  # the ansible role is installed into a path defined by
  # ansible.galaxy_roles_path below. The extra_vars are ansible
  # variables passed to the playbook.
#  config.vm.provision "ansible" do |ansible|
#    ansible.verbose = "v"
#    ansible.playbook = "ansible/playbook.yaml"
#    ansible.extra_vars = {
#      repo_dir: REPO_DIR,
#      dnf_update: false
#    }
#    ansible.galaxy_role_file = "ansible/requirements.yaml"
#    ansible.galaxy_roles_path = "ansible/roles"
#  end
end

Once you have Vagrant managing a Fedora Linux virtual machine, you can install the plugin manually. SSH into the virtual machine:

$ vagrant ssh

When you are at a command prompt in the virtual machine, repeat the steps from the Install and configure the DNF local plugin section above. The Vagrant configuration file should have already made /srv/repodir from the host available in the virtual machine at the same path.

If you are working with several virtual machines or repeatedly re-initiating a new virtual machine then some simple automation becomes useful. As with the network example above, I use ansible to automate this process.

In the vagrant-example directory on github, you will see an ansible subdirectory. Edit the Vagrant file and remove the comment marks under the ansible provisioner section. Make sure the ansible directory and its contents (playbook.yaml, requirements.yaml) are in the project directory.

After you’ve uncommented the lines, the ansible provisioner section in the Vagrant file should look similar to the following:

  # ansible provisioner
  # the ansible role is installed into a path defined by
  # ansible.galaxy_roles_path below. The extra_vars are ansible
  # variables passed to the playbook.
  config.vm.provision "ansible" do |ansible|
    ansible.verbose = "v"
    ansible.playbook = "ansible/playbook.yaml"
    ansible.extra_vars = {
      repo_dir: REPO_DIR,
      dnf_update: false
    }
    ansible.galaxy_role_file = "ansible/requirements.yaml"
    ansible.galaxy_roles_path = "ansible/roles"
  end
Ansible must be installed (sudo dnf install ansible). Note that there are significant changes to how Ansible is packaged beginning with Fedora Linux 34 (use sudo dnf install ansible-base ansible-collections*).

If you run Vagrant now (or reprovision: vagrant provision), Ansible will automatically download an Ansible role that installs the DNF local plugin. It will then use the downloaded role in a playbook. You can vagrant ssh into the virtual machine to verify that the plugin is installed and to verify that rpms are coming from the DNF local repository instead of a mirror.

Use case 3: container builds

Container images are a common way to distribute and run applications. If you are a developer or enthusiast using Fedora Linux containers as a foundation for applications or services, you will likely use dnf to update the container during the development/build process. Application development is iterative and can result in repeated executions of dnf pulling the same RPM set from Fedora Linux mirrors. If you cache these rpms locally then you can speed up the container build process by retrieving them from the local cache instead of re-downloading them over the network each time. One way to accomplish this is to create a custom Fedora Linux container image with the DNF local plugin installed and configured to use a local repository on the host workstation. Fedora Linux offers podman and buildah for managing the container build, run and test life cycle. See the Fedora Magazine post How to build Fedora container images for more about managing containers on Fedora Linux.

Note that the fedora_minimal container uses microdnf by default which does not support plugins. The fedora container, however, uses dnf.

A script that uses buildah and podman to create a custom Fedora Linux image named myfedora is provided below. The script creates a mount point for the local repository at /srv/repodir. The script is also available in the container-example directory of the github repository.

#!/bin/bash
set -x

# bash script that creates a 'myfedora' image from fedora:latest.
# Adds dnf-local-plugin, points plugin to /srv/repodir for local
# repository and creates an external mount point for /srv/repodir
# that can be used with a -v switch in podman/docker

# custom image name
custom_name=myfedora

# scratch conf file name
tmp_name=$(mktemp)

# location of plugin config file
configuration_name=/etc/dnf/plugins/local.conf

# location of repodir on container
container_repodir=/srv/repodir

# create scratch plugin conf file for container
# using repodir location as set in container_repodir
cat <<EOF > "$tmp_name"
[main]
enabled = true
repodir = $container_repodir

[createrepo]
enabled = true
# If you want to speedup createrepo with the --cachedir option. Eg.
# cachedir = /tmp/createrepo-local-plugin-cachedir
# quiet = true
# verbose = false
EOF

# pull the base image
podman pull fedora:latest

# start the build
mkdev=$(buildah from fedora:latest)

# tag author
buildah config --author "$USER" "$mkdev"

# install dnf-local-plugin, clean
# do not run update as local repo is not operational
buildah run "$mkdev" -- dnf --nodocs -y install python3-dnf-plugin-local createrepo_c
buildah run "$mkdev" -- dnf -y clean all

# create the repo dir
buildah run "$mkdev" -- mkdir -p "$container_repodir"

# copy the scratch plugin conf file from host
buildah copy "$mkdev" "$tmp_name" "$configuration_name"

# mark container repodir as a mount point for host volume
buildah config --volume "$container_repodir" "$mkdev"

# create myfedora image
buildah commit "$mkdev" "localhost/$custom_name:latest"

# clean up working image
buildah rm "$mkdev"

# remove scratch file
rm "$tmp_name"

Given normal security controls for containers, you will usually run this script with sudo. The same applies when you use the myfedora image in your development process.

$ sudo ./

To list the images stored locally and see both fedora:latest and myfedora:latest run:

$ sudo podman images

To run the myFedora image as a container and get a bash prompt in the container run:

$ sudo podman run -ti -v /srv/repodir:/srv/repodir:Z myfedora /bin/bash

Podman also allows you to run containers rootless (as an unprivileged user). Run the script without sudo to create the myfedora image and store it in the unprivileged user’s image repository:

$ ./

In order to run the myfedora image as a rootless container on a Fedora Linux host, an additional flag is needed. Without the extra flag, SELinux will block access to /srv/repodir on the host.

$ podman run --security-opt label=disable -ti -v /srv/repodir:/srv/repodir:Z myfedora /bin/bash

By using this custom image as the base for your Fedora Linux containers, the iterative building and development of applications or services on them will be faster.

Bonus Points – for even better dnf performance, Dan Walsh describes how to share dnf metadata between a host and container using a file overlay. This technique will work in combination with a shared local repository only if the host and the container use the same local repository. The dnf metadata cache includes metadata for the local repository under the name _dnf_local.

I have created a container file that uses buildah to do a dnf update on a fedora:latest image. I’ve also created a container file to repeat the process using a myfedora image. The dnf update involves 111 rpms totaling 53 MB. The only difference between the images is that myfedora has the DNF local plugin installed. Using the local repository cut the elapsed time by more than half in this example and saved 53 MB of internet bandwidth.

With the fedora:latest image the command and elapsed time is:

$ sudo time -f "Elapsed Time: %E" buildah bud -v /var/cache/dnf:/var/cache/dnf:O -f Containerfile.3 .
Elapsed Time: 0:48.06

With the myfedora image the command and elapsed time is less than half of the base run. The :Z on the -v volume below is required when running the container on an SELinux-enabled host.

$ sudo time -f "Elapsed Time: %E" buildah bud -v /var/cache/dnf:/var/cache/dnf:O -v /srv/repodir:/srv/repodir:Z -f Containerfile.4 .
Elapsed Time: 0:19.75

Repository management

The local repository will accumulate files over time. Among the files will be many versions of rpms that change frequently. The kernel rpms are one such example. A system upgrade (for example upgrading from Fedora Linux 33 to Fedora Linux 34) will copy many rpms into the local repository. The dnf repomanage command can be used to remove outdated rpm archives. I have not used the plugin long enough to explore this. The interested and knowledgeable reader is welcome to write an article about the dnf repomanage command for Fedora Magazine.
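As a starting point, here is a sketch of a cleanup helper. It is my own wrapper, assuming the dnf repomanage command from dnf-plugins-core; the lister command is injectable so the file-removal logic can be exercised without dnf:

```shell
#!/bin/bash
# Remove outdated rpms from the local repository. By default asks
# `dnf repomanage` which files are old, keeping the 2 newest
# versions of each package, then deletes those files.
trim_repo() {
    local repodir="$1"
    local lister="${2:-dnf repomanage --old --keep 2}"
    $lister "$repodir" | while read -r rpm; do
        rm -f -- "$rpm"
    done
}

# hypothetical real use, rebuilding the repo metadata afterwards:
# trim_repo /srv/repodir && createrepo_c --update /srv/repodir
```

Remember to refresh the repository metadata after deleting files; otherwise the metadata will still reference the removed rpms.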

Finally, I keep the x86_64 rpms for my workstation, virtual machines and containers in a local repository that is separate from the aarch64 local repository for the Raspberry Pis and (future) containers hosting my Kubernetes cluster. I have separated them for reasons of convenience and happenstance. A single repository location should work across all architectures.

An important note about Fedora Linux system upgrades

Glenn Johnson has more than four years of experience with the DNF local plugin. On occasion he has experienced problems when upgrading to a new release of Fedora Linux with the DNF local plugin enabled. Glenn strongly recommends that the enabled attribute in the plugin configuration file /etc/dnf/plugins/local.conf be set to false before upgrading your systems to a new Fedora Linux release. After the system upgrade, re-enable the plugin. Glenn also recommends using a separate local repository for each Fedora Linux release. For example, a NFS server might export /volume1/dnf-repo/33 for Fedora Linux 33 systems only. Glenn hangs out on fedoraforum.org – an independent online resource for Fedora Linux users.
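Following Glenn's recommendation, a release-specific setup is just a matter of pointing repodir at a per-release directory. A sketch for a Fedora Linux 34 system (the directory naming scheme is my own convention):

```ini
[main]
enabled = true
# one local repository per release
repodir = /srv/repodir/34
```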


The DNF local plugin has been beneficial to my ongoing work with a Fedora Linux based Kubernetes cluster. The containers and virtual machines running on my Fedora Linux desktop have also benefited. I appreciate how it supplements the existing DNF process and does not dictate any changes to how I update my systems or how I work with containers and virtual machines. I also appreciate not having to download the same set of rpms multiple times which saves me money, frees up bandwidth, and reduces the load on the Fedora Linux mirror hosts. Give it a try and see if the plugin will help in your situation!

Thanks to Glenn Johnson for his post on the DNF local plugin which started this journey, and for his helpful reviews of this post.



  1. People who are interested in this may also find Matthew Almond’s presentation “Speeding up DNF and RPM” interesting. He is using reflinks in some very clever ways to deduplicate RPMS automatically during installation. Best of all, dnf-plugin-cow appears to be slated for Fedora Workstation 35!

    An explanation of reflink excerpted from reflink(3c) What is it? Why do I care? And how can I use it?:
    “… Copying files is usually a case of telling the file system to read the blocks from disk, then telling them to write those same blocks to a different bit of the disk. reflink(3) allows you to tell the file system to simply create a new mapping to those blocks from a file system object … The overall effect of this is not only space savings … but cpu usage and time are also reduced. This is because all you’re doing is getting the file system to create more meta data to track the new file and the changes. The reflink copy is near instantaneous and uses virtually no CPU resources, as it’s not doing the expensive read/write cycle. …”

  2. Would fetching rpm packages over http instead of https really be a security risk? dnf does its own checks of the GPG signatures of packages (if gpgcheck=1 is set, which it is by default), so if an attacker tampers with the http connection and feeds us a compromised package, dnf will complain. (Yes the admin can choose to override that complaint, but doing so would be just like overriding an https certificate error, so I think we have to rely on the admin being sensible.) So it seems downgrading to http will give no additional integrity risk if dnf checks gpg. It might make it easier for a spy to figure out which packages you are installing (although they can probably guess most of that with traffic analysis of the https, and it’s unlikely to yield useful intelligence, especially if you’ve installed packages you don’t really use). It might make it easier for a network operator to censor specific packages. But it won’t help an attacker to compromise your packages, as long as you let your dnf do its gpg checks and never override a failed integrity check.

    • Thanks for taking the time to read and comment. You make a fair point with respect to packages. I do not know if the repository xml based catalogs are also protected by a secondary security layer like a GPG signature. I have thoroughly ingested the ‘use https’ mantra so I could well be too cautious.

      • laolux

        Yes, you are right, the repository xml data of Fedora repositories is not signed and thus not checked. This can be seen by the corresponding

        repo_gpgcheck=0

        entry in the repo file. Setting it to

        repo_gpgcheck=1

        will cause dnf to fail because of failed GPG verification.
        Now, why is that a problem? Well, when using http instead of https, an attacker can easily serve you crafted repository metadata which, for example, excludes just one vulnerable package from being updated. This would likely go unnoticed for a long time. When using https, the attacker can only suppress all updates by blocking the connection, but chances are high that this would be noticed rather sooner than later.

  3. local

    Nice article. Small typo in the “Glenn hangs out on” sentence.

  4. J. Gerardo

    Great work !!

  5. Rodd Clarkson

    Could you make this work using an ssh-based share?

    It’s so trivial to create and so easy to use it would make this almost simple.

    The SSH:// protocol in the file manager is just magnificent for quickly creating a file share.

    • As long as dnf sees the local repository location as part of the computer’s file system (local or network share) it should not matter. I have not used ssh for mounted file shares – just for scp file copies (as well as remote terminal access).

      • Rodd Clarkson

        so would it be as simple as putting ssh://user@host:/path/to/rpms in the config file, or would you need to mount the file system, and then point to the mountpoint?

        • I believe it would be the latter – create a mount point so that the file sharing technology is transparent to the plugin.

  6. This is awesome and exactly what I’ve been looking for! Unfortunately I’ve run into this bit of confusion:

    • Thanks for reading. Is the createrepo_c rpm installed? The curl error (Curl error (37): Couldn’t read a file:// … ) is normal but confusing the first time the plugin is used and the local repo does not exist. My experience has been that the plugin displays the error, dnf downloads the rpms then the plugin will create the directory if needed and then create the repo metadata and copy the rpms. All this requires that the createrepo_c rpm be installed.


    • Glenn Johnson

      Chris, try running createrepo /path/to/your/local/repo/. Your error should disappear.

        • This does fix the error, but ‘dnf update --downloadonly’ still downloads all the RPMs to /var/cache/dnf/updates-0e22a1f5a0a34771/packages/ rather than the path I’ve specified in /etc/dnf/plugins/local.conf and the createrepo command.

          • If you enable ‘verbose = true’ in the [createrepo] section of the configuration file you should see more output from the plugin. Once the rpms are updated or installed, the plugin initiates an action to then copy them to the local repo and update the local repo metadata. I speculate that --downloadonly stops dnf before the plugin is called for its postprocessing.

          • Yep! That is exactly my mistake, I didn’t realize that the RPMs aren’t copied into repodir path until after they’re installed successfully. It’s pretty much the last step of the update process. It’s all working now and I’ve updated the bug I filed.

  7. Brendan

    This is a great tip and very easy to implement. I use Syncthing to synchronize the repo directory on various Fedora devices. I set up /Disk2/DNF on all of the devices and share it between them and my 2 backup servers, which are also running Syncthing. That way each device can get changes from the backup servers (Odroid HC2) when the other Fedora devices are sleeping.

  8. Ben Matteson

    This is fantastic. I’ve been looking for something like this for some time. Thank you for implementing this! Does this work with dnf system-upgrades too?

  9. Ben Matteson

    Sorry, didn’t read to the end. I see that this isn’t recommended for system-upgrades. (Moderators: feel free to not post my comment, or remove my question about system-upgrades.)

    • Brad Smith

      I have been wrestling with the WordPress comment system over the weekend and WP won. Thanks for the comments. I cannot claim any credit for the plugin which has been around for a while (a similar plugin for yum exists). But it solved a big part of my dnf update requirements and could be useful for others.

    • Glenn Johnson

      Hi Ben. If you’re using the plugin and plan on upgrading the system I believe that it is best to disable it until the upgrade is complete. Once complete, re-enable it and point it to a new folder, not the one from the previous release.

      • Ben Matteson


        Thanks for the reply, and thanks for all the work you’ve done to make this possible! I read further and saw that I should disable this plugin, and I will do so (although I will probably forget the first time I dnf system-upgrade). Any chance it would work to point dnf local to a completely new empty directory when doing the first dnf system-upgrade download and sharing that new file system (that was only used for the upgrades)?

        • Glenn Johnson

          I’ve never tried that but why not? The issue with dnf-local being enabled during system upgrade has more to do with, let’s say, new F34 packages (assuming upgrade from 33 to 34) conflicting with packages for existing F33 systems.

          I’ve upgraded to F34 while forgetting to disable dnf-local and have then had problems doing a simple dnf upgrade --refresh on an F33 box because of package conflicts.

          If you create an empty dnf-local repo for the yet to be upgraded system and point the plugin to said folder I don’t see there being any issues at all.

  10. Thank you for this great article! Can the plugin be used with Anaconda and a kickstart file? I often run tests that completely reinstall Fedora VMs. It would be great if most of the necessary packages could be obtained locally.

    • Thanks Daniel. I do not know the answer to your question as I do not use kickstart. If you can get the plugin installed with the correct configuration, then I do not see why not.

  11. Göran Uddeborg

    Thanks for bringing this to my attention! I’ve been using the squid method until now, but this seems much better.

    I have a problem though, as I quite often do a couple of dnf operations after each other on a number of machines. Because of the metadata caching, only the first operation takes advantage of the dnf-local plugin. Is there any way to tweak the metadata expiration for the _dnf_local repository? I don’t see anything in the documentation, but maybe I’m missing something. It seems to me it would make sense to never use the cache for this particular repository (metadata_expire=0), while I obviously don’t want to do that globally.

    • Thanks Göran. What I do with my cluster of Raspberry Pis running Fedora is to run ‘dnf makecache --repo=_dnf_local’ on each Pi before running dnf. This explicitly refreshes the metadata from the dnf local repo on the Fedora machine so that the subsequent ‘dnf update’ will recognize all available rpms. There is an ansible example in the github repo for this article if interested (see rpi-examples/update.yaml).

      • Göran Uddeborg

        Thanks for that tip too! I’m a little surprised it works, since the manual says ‘makecache’ tries to avoid downloading e.g. when the local metadata hasn’t expired. But you are right, it does work.

        • ‘makecache’ appears to check the local expire setting for metadata and also the time stamp on the metadata in the repository. It is the latter which helps in this scenario: one machine adds new rpms to the local repository and refreshes the repository metadata, then the subsequent machines can pick up the fresh metadata with ‘makecache’.

  12. This is just dying to be combined with mDNS and DNS-SD so you can announce and discover repository services hosted on your local network.

  13. jgoutin

    Thanks for that tip! This is a very interesting plugin, I’ll use it for local work with containers and VM.

    The only potential drawback I see is the requirement of a share to which all clients have write access. This may be a security issue, but I assume DNF-local verifies the cached package signatures before installing.

    Concerning the Squid method: it is possible to use it with HTTPS (using the TLS bump feature), without modifying all client “.repo” files and without pinning to a single mirror.

    The mirror handling can be done server side using the “store_id_program” option to ensure all mirrors hit the same package cache. This may be configured manually, but I personally use a Python script that runs daily to keep the mirror list up to date.
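For the curious, the essence of such a store-id rewrite can be sketched in a few lines of shell. The canonical host name and the assumption that every mirror exposes a fedora/linux/… path are illustrative only, not jgoutin’s actual script (real mirror layouts vary, which is why he regenerates his mirror list daily):

```shell
# Collapse any Fedora mirror URL onto one canonical cache key so requests
# to different mirrors hit the same cached object in Squid.
normalize_store_id() {
    printf '%s\n' "$1" | sed -n 's|.*/\(fedora/linux/.*\)|http://fedora.store.local/\1|p'
}

normalize_store_id "http://mirror.example.com/pub/fedora/linux/updates/33/x86_64/repodata/repomd.xml"
```

A real helper registered as Squid’s store_id_program reads request lines on stdin and answers “OK store-id=…” or “ERR” per the helper protocol; the function above shows only the URL-normalization step.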

    The only client configuration required is to modify “/etc/dnf/dnf.conf” to specify the proxy (“proxy=squid_host:3128”) and, if using HTTPS, to add the required Squid CA (“sslcacert=/etc/pki/tls/certs/squid_ca.crt”).

    For more information, there is an Ansible role for this configuration here:

    • You are welcome. Really nice work on the ansible squid role. I have it bookmarked. Properly configured, a proxy like Squid hits all of my top requirements – minimal internet bandwidth consumption, minimal configuration on each Fedora client, and respect for the Fedora mirror community. A properly configured proxy can also cache the upstream repository metadata, which the plugin does not do. This can be a significant time saver if implemented correctly.

      As a side note, I too like to start with Fedora minimal for services like the squid proxy. If appropriate, I spin them off as containers running on a docker macvlan with my Synology NAS as docker host.


  14. Joao Rodrigues

    Does this plugin work when doing a dnf system-upgrade? I have a few virtual machines running Fedora 32 that I want to upgrade.

  15. Joao – I do not have personal experience, yet, with using the plugin across an upgrade. Glenn Johnson recommends, in responding to a similar question from Ben, disabling the plugin. I have several F33 machines in my home lab that I am going to experiment with but have not done so yet.

    • Joao Rodrigues

      I got it “kind of” working with a couple of hacks.

      Here’s my setup:

      /var/lib/dnf/plugins/local is an NFS mount
      in /etc/dnf/plugins/local.conf I have “repodir = /var/lib/dnf/plugins/local/34”

      First populate the repodir:

      # dnf system-upgrade download --releasever=34 --downloadonly --downloaddir=/var/lib/dnf/plugins/local/34 --disablerepo=_dnf_local
      # createrepo_c --update --unique-md-filenames /var/lib/dnf/plugins/local/34

      Then make sure the dnf-system-upgrade.service only starts after the nfs share is mounted

      # systemctl edit --full dnf-system-upgrade.service

      edit the “Requires=” line to add var-lib-dnf-plugins-local.mount

      Now run the system-upgrade download (yes, again!) and reboot

      # dnf system-upgrade download --releasever=34
      # dnf system-upgrade reboot
  16. Joao Rodrigues

    (messed up the markdown in the previous comment — this really needs a preview comment button)

    If you want to remove the outdated rpm archives, a way to do that is:

    # dnf repomanage --old /var/lib/dnf/plugins/local | xargs rm
    # createrepo_c --update --unique-md-filenames /var/lib/dnf/plugins/local

    replace /var/lib/dnf/plugins/local with your repodir

