Use LVM to system-upgrade a Fedora Linux server with minimal downtime

Background by Rodion Kutsaev on Unsplash (cropped and color-mapped)

Has your Fedora Linux server been End of Life for months because you can’t seem to schedule the hours of downtime required to upgrade it? There are ways to shorten that downtime to just the few minutes required for a reboot. You can do this utilizing LVM and VM technologies, all provided by Fedora.

Most users find it simple to upgrade from one Fedora Linux release to the next with the standard process. However, special cases can be handled using existing Fedora VM and LVM capabilities. This article shows one way to upgrade a Fedora Linux server using DNF while using Logical Volume Management (LVM) to keep a bootable backup in case of problems. It also demonstrates how to run the upgrade concurrently in a virtual machine to minimize downtime. This example upgrades a Fedora Linux 33 virtual machine host to Fedora Linux 35.

The process shown here is admittedly more complex than the exceptionally easy standard upgrade process. Thus, you should have a strong grasp of how LVM and virtual machines work before attempting it. Without proper skill and care, you could lose data and/or be forced to reinstall your system! If you don’t have essential Fedora Linux administration skills, it is highly recommended you stick to the supported upgrade methods only.

Prerequisite skills for this method include:

  • LVM management – understand essentials of Physical and Logical Volumes
  • VM management
    • Create and administer Virtual Machines using virsh, virt-manager, raw qemu-kvm, or cockpit
    • Configure direct kernel boot on a VM (not as hard as it sounds, but look it up ahead of time on your VM manager of choice)

Prepare the system

This example assumes you already have installed:

  • libvirt
  • qemu-kvm
  • partclone
  • python3-dnf-plugin-system-upgrade

You must have enough memory available (2G recommended but I have succeeded with 1G) to create an additional virtual machine to run the upgrade. Your system can continue to operate while you prepare for and download the upgrade, and while the upgrade runs.

Before you start, ensure your existing system is fully updated.

$ sudo dnf upgrade --refresh

Since this is a server, rebooting may be more involved than on a single-user system. If the kernel or critical running packages were updated, you need to reboot after notifying users as needed.
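
If you are unsure whether a reboot is actually required, the needs-restarting plugin (provided by the dnf-utils package) can tell you. A quick check might look like:

```shell
# Report whether the running kernel or core libraries were updated
# since boot; prints "Reboot is required ..." and exits non-zero
# when a reboot is recommended.
sudo dnf needs-restarting -r
```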

Check that your root filesystem is mounted via LVM.

$ df /
Filesystem               1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg_julie-f33  25671908  14349844  10089344  59% /

$ sudo lvs
LV   VG       Attr        LSize 
f31  vg_julie -wi-a-----  15.00g
f33  vg_julie -wi-ao----  25.00g
root vg_julie -wi-ao---- 300.00g
swap vg_julie -wi-a-----   4.00g

If you used the defaults when you installed Fedora Linux, you may find the root filesystem is mounted on an LV named root. The name of your volume group will likely be different. Look at the total size of the root volume. In the example, the root filesystem is named f33 and is 25G in size. In case you were wondering, the LV named root is a Btrfs filesystem with a subvolume named home which may at some point have a subvolume for the root filesystem as well.

Next, ensure you have enough free space in LVM.

$ sudo vgs
VG       #PV #LV #SN Attr   VSize   VFree
vg_julie   1   4   0 wz--n- 464.78g 130.78g

This system has enough free space to allocate a 25G logical volume for the upgraded Fedora Linux 35 root. If you don’t have enough free space, LVM management is beyond the scope of this article, but you can review a few suggestions in a previous article.

Note for Btrfs users

The root filesystem must be copied to a logical volume to boot in a virtual machine. For a Btrfs root, you could use btrfs send to copy a Btrfs snapshot of the root subvolume to a new Btrfs-formatted LV, but you must boot from the new system to copy back the upgraded system because of SELinux updates. This cryptic summary probably needs another article.
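
A very rough sketch of the send/receive step, assuming the new LV is /dev/vg_julie/f35 and your root is a Btrfs subvolume (adjust every name for your own layout):

```shell
# Sketch only -- a read-only snapshot is required for btrfs send.
sudo btrfs subvolume snapshot -r / /root_snap

# Format the new LV as Btrfs and mount it.
sudo mkfs.btrfs /dev/vg_julie/f35
sudo mkdir -p /mnt/f35
sudo mount /dev/vg_julie/f35 /mnt/f35

# Stream the snapshot onto the new filesystem.
sudo btrfs send /root_snap | sudo btrfs receive /mnt/f35

# Clean up the snapshot and unmount.
sudo btrfs subvolume delete /root_snap
sudo umount /mnt/f35
```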

Clone the root filesystem

First, allocate a new LV for the upgraded system. Make sure to use the correct name for your system’s volume group (VG). In this example it’s vg_julie. Also, make sure you allocate the same size or more (if you want to expand later… not addressed here).

$ sudo lvcreate -L25G -n f35 vg_julie
Logical volume "f35" created.

We are going to run the snapshot in a VM with networking disabled. So first, we download the upgrade packages:

$ sudo dnf system-upgrade download --releasever=35

Do not reboot yet – the actual upgrade will take place in a virtual machine!

Next, make a snapshot of your current root filesystem. This example creates a snapshot volume named f33_s.

$ sync
$ sudo lvcreate -s -L1G -n f33_s vg_julie/f33
Using default stripesize 64.00 KiB.
Logical volume "f33_s" created.

Limitations of background upgrading

It is important to realize that once a snapshot is created, data written to the root filesystem will not carry over to the upgraded filesystem. For instance, logs written to the host system during the upgrade will be available on the backup LV after the upgraded LV is booted on bare metal, but not in the new LV’s logging directory. This method does not address merging them. It therefore works best where critical services write to their own LVs, rather than to a sub-directory of the root filesystem. For instance, if this is a mail server, you may wish to ensure /var/spool/mail is mounted separately from the root filesystem. Personally, I symlink /var/spool/mail to /home/mailspool, keeping all mailboxes on one separate LV filesystem, unchanged by updates to the root filesystem. Use mailq before the snapshot to ensure no mail is queued or quarantined. You may wish to stop the mail transfer service during the upgrade, or run mailq again to ensure nothing is queued before rebooting into the upgraded system.

Similarly, if this Fedora Linux server runs a postgresql database server, /var/lib/pgsql should probably be on its own filesystem.

This process underscores why it is best-practice for admins to move service data to its own LV filesystem.

Final Preparation

Take note of your current system, especially:

  • Current Kernel Version
  • What version you are currently running, and what version is your target
  • Current root disk and/or UUID/Label
  • How your swap is setup

For the latter two, you can find them in your /etc/fstab. Now might be a good time to make a quick backup of that file.
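
The items above can be gathered with a few commands; one way to record them (the backup file name is just a suggestion):

```shell
# Record details of the current system before making changes.
uname -r                                   # current kernel version
cat /etc/fedora-release                    # current Fedora release
findmnt -no SOURCE,UUID /                  # root device and its UUID
swapon --show                              # how swap is set up
sudo cp -p /etc/fstab /etc/fstab.backup    # quick backup of fstab
```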

Create the upgrade VM

The snapshot can now be copied to the new LV. Make sure you have the destination correct when you substitute your own volume names. If you are not careful you could delete data irrevocably. Also, make sure you are copying from the snapshot of your root, not your live root. This example has an ext4 root filesystem. Change to use your actual root filesystem type. You could also change the root filesystem type at this step by using rsync, but that will be a future article.

$ sudo partclone.ext4 -b -s /dev/vg_julie/f33_s -o /dev/vg_julie/f35 
Partclone v0.3.17
Starting to back up device(/dev/vg_julie/f33_s) to device(/dev/vg_julie/f35)
Elapsed: 00:00:01, Remaining: 00:00:00, Completed: 100.00%
Total Time: 00:00:01, 100.00% completed!
File system:  EXTFS
Device size:   26.8 GB = 6553600 Blocks
Space in use:  16.9 GB = 4136996 Blocks
Free Space:     9.9 GB = 2416604 Blocks
Block size:   4096 Byte
Elapsed: 00:07:32, Remaining: 00:00:00, Completed: 100.00%, Rate:   2.25GB/min,
current block:    6514688, total block:    6553600, Complete: 100.00%
Total Time: 00:07:32, Ave. Rate:    2.2GB/min, 100.00% completed!
Syncing... OK!
Partclone successfully cloned the device (/dev/vg_julie/f33_s) to the device (/dev/vg_julie/f35)
Cloned successfully.  

Give the new filesystem copy a unique UUID and label. This is not strictly necessary, but given that UUIDs are supposed to be unique, avoid future confusion by tagging the new filesystem. Here is how this is done for an ext4 root filesystem:

$ sudo e2fsck -f /dev/vg_julie/f35
$ sudo tune2fs -U random /dev/vg_julie/f35
$ sudo e2label /dev/vg_julie/f35 F35

Note: For Btrfs, the filesystem must be mounted to change UUID and/or LABEL.
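
For a Btrfs copy, the equivalent might look like the following sketch (device and mount point names follow this article’s example; behavior of btrfstune varies with your btrfs-progs version):

```shell
# Btrfs: mount the copy, then change its label.
sudo mount /dev/vg_julie/f35 /mnt/f35
sudo btrfs filesystem label /mnt/f35 F35
sudo umount /mnt/f35

# Recent btrfs-progs can randomize the UUID of the unmounted device.
sudo btrfstune -u /dev/vg_julie/f35
```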

Now remove the snapshot volume which is no longer needed:

$ sudo lvremove vg_julie/f33_s
Do you really want to remove active logical volume vg_julie/f33_s? [y/n]: y
Logical volume "f33_s" successfully removed

You may wish to make a snapshot of /home at this point if you have it mounted separately (see the Clone the root filesystem section for steps). Sometimes, upgraded applications make changes that are incompatible with a much older Fedora Linux version. If you need to revert, edit the /etc/fstab file on the old root filesystem to mount the snapshot on /home. Remember that when the snapshot is full, it will disappear! You may also wish to make a normal backup of /home (and other filesystems for database and mail) for good measure.
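
Such a /home snapshot follows the same pattern as the root snapshot above; assuming /home lives on an LV named home in vg_julie (adjust names and size for your system):

```shell
# Flush pending writes, then snapshot the home LV.
sync
sudo lvcreate -s -L2G -n home_s vg_julie/home
```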

Configuring the VM to use the new root

Mount the new LV and backup your existing GRUB settings:

$ sudo mkdir /mnt/f35
$ sudo mount /dev/vg_julie/f35 /mnt/f35
$ sudo mkdir /mnt/f35/f33

Our previous article copied /boot/grub2/grub.cfg to a backup, but Fedora Linux now uses BLS – the Boot Loader Specification. GRUB entries are in /boot/loader/entries and you generally don’t need to touch grub.cfg.

Edit /mnt/f35/etc/default/grub and remove any rd.lvm.lv= options so the old root LV is no longer activated at boot. The result should look like:

GRUB_CMDLINE_LINUX=" rhgb quiet"

Copy /mnt/f35/etc/fstab to /mnt/f35/etc/fstab.f33 as a backup. Edit /mnt/f35/etc/fstab to comment out any filesystems that will not be available to the virtual machine. Only the / block device will be available. Change the root filesystem to use the new UUID or LABEL. E.g.

LABEL=F35 / ext4 defaults 1 1

The upgrade process will expect a /boot directory. If /boot is not in your root filesystem, copy it into the new LV:

$ sudo rsync -ravHXx /boot/ /mnt/f35/boot
... shows you files copied

You will need to disable systemd services in the new LV that could interfere with the host. For instance, a VPN like cjdns or openvpn:

$ sudo systemctl --root=/mnt/f35 disable cjdns
$ sudo systemctl --root=/mnt/f35 disable openvpn-client@client
$ sudo systemctl --root=/mnt/f35 disable libvirtd
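
Before disabling anything, you may want to save a list of the units currently enabled in the new root, so you know exactly what to re-enable later (the output file name here is just a suggestion):

```shell
# Record every enabled unit in the new root filesystem.
sudo systemctl --root=/mnt/f35 list-unit-files --state=enabled > ~/f35-enabled-units.txt
```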

Remember (write down!) what you disabled, because you will need to enable them again before booting the upgraded system on bare metal! Unmount the new root filesystem:

$ sudo umount /mnt/f35

Perform the upgrade in a VM

The VM disk will need a virtio driver, so make a new version of the initramfs.

$ sudo dracut --add-drivers virtio_blk /tmp/initramfs-5.14.9-100.fc33.x86_64.img

If you converted the root filesystem to a new type, you may also need to add a new filesystem driver or module. For example, after converting ext4 to Btrfs, you would need --add btrfs --add-drivers virtio_blk.
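
Put together, the Btrfs variant of the dracut command above would look something like this (the kernel version matches this article’s example; use your own):

```shell
# Build an initramfs that includes the btrfs dracut module and the
# virtio block driver, for a root converted from ext4 to Btrfs.
sudo dracut --add btrfs --add-drivers virtio_blk /tmp/initramfs-5.14.9-100.fc33.x86_64.img
```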

Using libvirtd is out of scope for this already long article, because as admin of a virtual host, you should know how to create a virtual machine using virsh or virt-manager or your favorite front end. Create a virtual machine with a single disk volume mapped to the new root filesystem LV (/dev/vg_julie/f35 in the example). Use Direct Kernel Boot with the same initramfs name you created with dracut:

Kernel: /boot/vmlinuz-5.14.9-100.fc33.x86_64
Initramfs: /tmp/initramfs-5.14.9-100.fc33.x86_64.img
Args: root=LABEL=F35 ro

Now, start the virtual machine and log in to its console as root. Check that nothing else needs to be disabled. If all is well, reboot the VM with $ sudo dnf system-upgrade reboot on the VM console (not on your bare metal console – that would not be a total disaster, but you’ll be down while the upgrade runs). You should see the upgrade proceeding on the VM console after the VM reboots. Meanwhile, your host system continues on as usual.

If, for some reason, sudo dnf system-upgrade reboot does not work and the network is available in the VM, you can run the sudo dnf system-upgrade steps again inside the VM; they will complete fairly quickly, as everything is already downloaded.

… time passes as your virtual host hums along and the VM upgrades … get some coffee and a donut while your current host continues to operate as usual.

When the upgrade is finished, the VM will reboot again. Log in as root on the VM console and check things out. (Note: if you are using a GUI, look for “Send Key” to get a root console.) You will still be running the old kernel (f33 in this example), because you are using Direct Kernel Boot for the VM. If anything went wrong, you can destroy the VM, and start over at taking a snapshot of the current root filesystem.
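
A few quick checks on the VM console can confirm the upgrade landed (release numbers follow this article’s example):

```shell
# Confirm the release and kernel were upgraded, and review the
# most recent system-upgrade transaction log.
cat /etc/fedora-release                    # should report Fedora release 35
rpm -q kernel                              # an fc35 kernel should be installed
sudo dnf system-upgrade log --number=-1    # last upgrade transaction log
```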

If it looks ok, run sudo systemctl poweroff to shutdown the VM.

Prepare the VM host to boot the upgraded root filesystem

The initramfs generated inside the VM does not include the drivers you need for bare metal. In addition to using virtio_blk instead of an actual disk controller driver, a VM host probably needs LVM and RAID drivers. In the interest of simplicity, we will generate an initramfs with all drivers and modules. This creates a large image, around 100 Mbyte, but there is no drawback other than disk space used on a typically limited /boot filesystem. Experts: it is possible to use lsinitrd and dracut options --add and --add-drivers instead to create a smaller image if you know what is needed.

Copy the new boot loader entry and kernel files to the real /boot.

Change the kernel version to the one just installed by the system-upgrade. You can run dracut without --force initially to ensure it is overwriting what you expect.

$ sudo mount /dev/vg_julie/f35 /mnt/f35 # make sure your VM is truly not running
$ sudo mount -t proc /proc /mnt/f35/proc
$ sudo chroot /mnt/f35
# ls -l /boot/*fc35*
-rw-r--r--. 1 root root   236665 Sep 30 08:10 /boot/config-5.14.9-300.fc35.x86_64
-rw-------. 1 root root 35571851 Oct  6 15:15 /boot/initramfs-5.14.9-300.fc35.x86_64.img
lrwxrwxrwx. 1 root root       46 Oct  6 15:15 /boot/symvers-5.14.9-300.fc35.x86_64.gz -> /lib/modules/5.14.9-300.fc35.x86_64/symvers.gz
-rw-------. 1 root root  5849998 Sep 30 08:10 /boot/
-rwxr-xr-x. 1 root root 11032912 Sep 30 08:10 /boot/vmlinuz-5.14.9-300.fc35.x86_64
# dracut -N --kver=5.14.9-300.fc35.x86_64 --force
# exit
$ sudo cp -p /mnt/f35/boot/*-5.14.9-300.fc35* /boot
$ sudo cp -p /mnt/f35/boot/loader/entries/*-5.14.9-300.fc35.x86_64.conf /boot/loader/entries

If you mount /boot separately (very likely), then you need to edit the loader entry copied from the VM to remove /boot from the kernel and initrd. Edit /boot/loader/entries/*-5.14.9-300.fc35.x86_64.conf and change:

linux /boot/vmlinuz-5.14.9-300.fc35.x86_64
initrd /boot/initramfs-5.14.9-300.fc35.x86_64.img

... to ...

linux /vmlinuz-5.14.9-300.fc35.x86_64
initrd /initramfs-5.14.9-300.fc35.x86_64.img 

Re-enable services you disabled for the VM. If you didn’t write them down, check ~/.bash_history or the equivalent for your shell.

$ sudo systemctl --root=/mnt/f35 enable cjdns
$ sudo systemctl --root=/mnt/f35 enable openvpn-client@client
$ sudo systemctl --root=/mnt/f35 enable libvirtd

Edit the fstab on the new root (/mnt/f35/etc/fstab) to restore all the filesystems you need on bare metal; that is, uncomment the entries you commented out earlier to run in the VM.

IF (and only if) /boot is separately mounted on your host (shows in df), rename and recreate /boot in the upgraded root – this will be a handy backup. (Otherwise, it will get mounted over and the files hidden.)

$ sudo mv /mnt/f35/boot /mnt/f35/bootVM
$ sudo mkdir /mnt/f35/boot

Finally, reboot into the upgraded system:

$ sudo systemctl reboot
... system reboots
$ df / /f33
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_julie-f35  25671908 15948668   8490520  66% /
/dev/mapper/vg_julie-f33  15350728 14540288     64256 100% /f33

Check that required services are running. If there are any insurmountable problems, you can reboot back into Fedora Linux 33!
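
A quick post-reboot health check might look like:

```shell
# Show any units that failed to start, and the overall system state
# ("running" means every unit came up cleanly).
systemctl --failed
systemctl is-system-running
```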

Finally, if you are running well, remove the VM you used to upgrade with, so you don’t corrupt your host system by accidentally starting it while you are using the LV on the host system. Make sure you don’t delete the LV in the process.
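
With virsh, removing just the VM definition (the VM name f35-upgrade here is hypothetical) leaves the LV untouched:

```shell
# Remove only the VM definition. virsh undefine does not delete
# storage unless explicitly asked (do NOT pass --remove-all-storage).
sudo virsh undefine f35-upgrade
```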

Comments


  1. As usual, a well done article!

  2. When there are 2 VM hosts at a location, you can use the DRBD driver to form a poor man’s cluster. Then you can migrate VMs to the slave (where they will be slower) and switch it to primary. The now idle host can be upgraded, and the VMs migrated back.

  3. Tim Evans

    Sun invented this in 2002. It was called “Live Update.”

    • Most of the stuff I write about was invented last century.

    • There is no claim of invention; all the tools exist in Fedora, which is the main point, and noted at the top of the article. This is about a way to utilize the cool tools that exist in Fedora, with step-by-step instructions, that result in extremely safe backups and clean rollbacks (requirements for admins for decades), all in a way that allows almost all of the work to be done in “prep mode”, minimizing downtime.

  4. Esc

    Thank You,
    great post. I am asking for more entries of this type about lvm, virtualization, file systems, btrfs, xfs

  5. Thomas

    might also be worth to look into: Fedora CoreOS (“Server”) or Fedora Silverblue (“Workstation”). Initial work to put things in containers, but decouples system upgrades from apps/services. Makes upgrading hassle free.

  6. Clive Wi


    There is a lot of information, and I am sure that the people writing the article understand what is going on, but it is not easy to translate that information onto a system that looks very different from theirs.

    You lost me about halfway through because I didn’t know what to expect, so can I ask for more information on how you can see (which command to use) what has happened to your system before and after the command has executed.

    A better explanation of what some of the parameters being passed on to the commands are, and why we are using them.

    And finally, have you considered a video, because a picture speaks a thousand words.

  7. Einer

    Good article….. brought back some old memories 🙂

    But ….. (yeah, I know….) …….
    I thought the whole idea of having a VM host was to have at least 2 using a shared networked FS (iSCSI, NFS, Gluster, Lustre ……) with the VM images accessible to both VM hosts. In KVM/QEMU, all you would need to do to maintain uptime of the VM is live migrate the VM from the host you want to upgrade, do the upgrade and then migrate back (if you want 🙂 ) ……

    But, like I said, good article 🙂

    • That’s not the whole point, but is definitely nice to have. We had dual servers at the home office for a while using DRBD. But customers would typically mirror to a remote site – which is not so quick for live migration.

      Currently, I do not have dual servers at home – or rather the services have expanded to fill both servers for decent performance. The example was actually logged while upgrading a user laptop while they were working on a big writing project – so it’s not just for servers.

      At some point, I think the system-upgrade will be able to run in a container (and be able to use a btrfs snapshot).

  8. Ken

    What is the function “vg_juliebak” in the partclone.ext4 text block?

  9. rockcat

    Cool idea. Have you ever used the AIX operating system family? There is an alt_disk_copy upgrade method. I’d like to see something similar on Linux. Of course there is also an alternate method which does not require a 2nd disk (it uses something similar to a snapshot). Anyway, there are a lot of systems which can perform an update or even upgrade almost online. I don’t know why Linux is so poor in this matter. Especially with Btrfs, or having 2 disks for /, it should be possible to check out a current system point and upgrade the current OS (or a snapshot), set which image to boot, and reboot. Or something like in alt_disk – create an OS copy on a 2nd drive, update, set the disk to boot from, and reboot…

