NetworkManager is the default service in Fedora for interfacing with the low-level networking in the kernel. It was created to provide a high-level interface for initializing and configuring networking on a system without shell scripts. Over the past few Fedora releases, the NetworkManager developers have put in a lot of effort to make it even better. This article covers some of the major improvements implemented in NetworkManager over those releases.
A brief history
The past few Fedora releases have highlighted the incredible amount of effort that has gone into developing NetworkManager. Unfortunately, NetworkManager still carries the legacy of Red Hat Enterprise Linux 6 (NM 0.8.1) in the minds of many people. The version in Red Hat Enterprise Linux 6 was pretty much only capable of handling a WiFi network; in all other scenarios it had to be disabled and/or removed (the latter often being preferable).
Similar to how services used to be started and stopped by a series of shell scripts, network configuration in Fedora also used to be handled via several different shell scripts. These still exist as the “legacy network service”, but they are fragile and have no concept of state. Anyone who has carried out changes to bonding or bridging knows the pain of carefully unpicking the present configuration by hand with ifenslave, brctl or, more recently, ip, and then hoping all the ifcfg-* files are correct. Or just biting the bullet and carrying out that reboot.
Just as initially upstart and later systemd marked the end of running a series of shell scripts to configure a service with no concept of state, so NetworkManager marks the end of running a similar series of shell scripts to configure the network.
The Red Hat Enterprise Linux releases don’t line up precisely but this should give an idea of the accelerated development over time, especially when compared to the NEWS file in the source repository.
| Fedora Release | RHEL Release | NetworkManager Version |
So what are the major changes in the last few Fedora releases? Why should you now pay attention to NetworkManager, and not remove it on your server instances?
Keeping out of the way
There are situations where a running daemon, or any risk of dynamic behaviour, is not desired. Typically these are where the legacy network service has been preferred, because of the nature of its one-time, fire-and-forget shell scripts. A few Fedora releases ago, with NM 1.0, the ability to just act once and then get out of the way was added:
cat > /etc/NetworkManager/conf.d/c-and-q.conf <<EOF
[main]
configure-and-quit=true
EOF
On IRC there are often questions about what is valid in the ifcfg-* files when using NetworkManager. There has been a strong effort to improve this documentation locally on the system. The man pages of most interest are:
- nm-settings(5) — the overall NetworkManager setting names that are valid for all types of interfaces. These are used on D-Bus to change NM behaviour, or with the nmcli command to configure the interfaces.
- nm-settings-ifcfg-rh(5) — the mapping of NM configuration options to how they are named in the ifcfg-* files. The terms used in the ifcfg-* files are not always the same as the property names on an interface, so this is very useful to reference in a typical Red Hat environment using those files.
- nmcli(1) — a useful reference for many nmcli activities; this also includes the polkit policy information.
- NetworkManager.conf(5) — configuration for the daemon itself and for lower-level or default behaviours.
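As a quick orientation, all of these can be read locally; a sketch (man page names as shipped on current Fedora, not quoted from the article):

```shell
# Look up NetworkManager documentation installed on the system:
man 5 nm-settings            # all NM property names, as used by nmcli and D-Bus
man 5 nm-settings-ifcfg-rh   # how those properties map to ifcfg-* file keys
man 1 nmcli                  # nmcli usage, including polkit policy notes
man 5 NetworkManager.conf    # daemon and default-behaviour configuration
```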
IPv6 is here! Pay attention to it!
As the doomsday clock for IPv4 continues to count down, there are more and more networks enabling IPv6 connectivity. There have been a lot of changes related to IPv6 in the last few releases. These are mostly bugfixes (for example, an interface with only an automatic link-local fe80:: address is not considered connected), but the changes focusing on privacy are important to be aware of.
There has been much concern about tracking of the MAC address of a system due to EUI64 encoding, which uses the MAC to automatically create an IPv6 address. There have been two approaches to handle this from slightly different angles. First there was RFC 4941, which involves automatically generating a temporary address and using it for a period of time before generating a new one, and so on. This can be configured in NM on a connection with the ipv6.ip6-privacy property, but it's better to handle it at the kernel level with the use_tempaddr sysctl, so that everything is aware of the configuration. NetworkManager will respect the sysctl setting by default. The sysctl that sets the lifetime this temporary address is valid for is temp_valid_lft.
The downside of this is that anything which uses the IP to carry out session tracking (secure cookies could be an example here) risks losing the session with the change of IP.
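A minimal sketch of preferring temporary addresses, assuming the standard kernel sysctl names and the ipv6.ip6-privacy property described above (a value of 2 means “prefer the temporary address”; YOUR_CONNECTION is a placeholder):

```shell
# System-wide: prefer RFC 4941 temporary addresses, persistent across boots.
cat > /etc/sysctl.d/40-ipv6-privacy.conf <<EOF
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
EOF
sysctl --system

# Or per connection, via NetworkManager instead:
nmcli connection modify YOUR_CONNECTION ipv6.ip6-privacy 2
```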
The second approach to privacy (which can be used alongside temporary addressing) is RFC 7217, which allows for a random, but stable, IPv6 address on a connection. This works by creating a secret key for the system from /dev/urandom and then combining it with the connection UUID to create a random address, but one which stays the same for the connection. This is now used instead of EUI64 by default, but the predictable EUI64 address can still be chosen in an environment that needs that predictability in SLAAC.
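On the NM side this choice is exposed through the ipv6.addr-gen-mode property; a sketch, with YOUR_CONNECTION as a placeholder name:

```shell
# RFC 7217 stable-privacy addressing (the default on recent NetworkManager):
nmcli connection modify YOUR_CONNECTION ipv6.addr-gen-mode stable-privacy
# Revert to predictable EUI-64 addresses where SLAAC stability is required:
nmcli connection modify YOUR_CONNECTION ipv6.addr-gen-mode eui64
```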
Privacy protection of the system MAC
Particularly with WiFi being commonplace in our society, it's not just the IP layer that has privacy concerns, but also the lowest level: the MAC address.
As recently as Fedora 24 (and EL 7.3), NetworkManager began using a random MAC whilst scanning for access points. The default at present is to use whatever the MAC of the interface is (or to preserve it if it has been set in advance with a tool like macchanger). However, similar to the IP layer, it's now possible to set the cloned-mac-address property on a connection to random, for a totally random MAC each time that connection is activated, or stable, to mimic the IPv6 behaviour of a randomly generated address that stays consistent for a connection.
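Sketched with nmcli (the cloned-mac-address property lives under the connection type; YOUR_WIFI is a placeholder connection name):

```shell
# New random MAC every time this Wi-Fi connection activates:
nmcli connection modify YOUR_WIFI wifi.cloned-mac-address random
# Random but consistent for this connection (like stable-privacy for IPv6):
nmcli connection modify YOUR_WIFI wifi.cloned-mac-address stable
# Wired connections use ethernet.cloned-mac-address with the same values.
```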
Bridging, teams and bonding
Previously there were special virtual interface types (bond-slave, bridge-slave and team-slave) that had to be declared when using a bridge, team or bond. Although the underlying framework supported stacking these together (along with VLAN tagging), it wasn't possible to directly create the “stack” via the nmcli utility, as each carried an underlying assumption that it sat on a physical interface (type ethernet).
Recently these special interface types were dropped, with just the connection.master and connection.slave-type properties being used to arbitrarily stack things together. This is particularly important in a server environment where a host houses multiple guest virtual machines linked to different networks.
Prior to NM 1.2, this was the juggling of nmcli commands needed to bring three interfaces together, with VLANs tagged and bridges defined on them for guests to connect to:
nmcli connection add type bond con-name bond0 mode active-backup
nmcli connection add type bond-slave ifname eth2 master bond0
nmcli connection add type bond-slave ifname ens9 master bond0
nmcli connection add type bond-slave ifname ens10 master bond0
nmcli connection add type bridge con-name bridge0 ifname bridge0 connection.autoconnect yes ipv4.method manual ipv4.addresses "10.0.0.1/24"
nmcli connection add type bridge con-name bridge60 ifname bridge60 connection.autoconnect yes ipv4.method manual ipv4.addresses "10.0.60.1/24"
nmcli connection add type bridge con-name bridge100 ifname bridge100 connection.autoconnect yes ipv4.method manual ipv4.addresses "10.0.100.1/24"
nmcli connection add type vlan con-name vlan-60 dev bond0 id 60
nmcli connection add type vlan con-name vlan-100 dev bond0 id 100
nmcli connection down bond0
nmcli connection down vlan-60
nmcli connection down vlan-100
nmcli connection modify bond0 connection.master bridge0 connection.slave-type bridge
nmcli connection modify vlan-60 connection.master bridge60 connection.slave-type bridge
nmcli connection modify vlan-100 connection.master bridge100 connection.slave-type bridge
nmcli connection up bond-slave-eth2
nmcli connection up bond-slave-ens9
nmcli connection up bond-slave-ens10
nmcli connection up bond0
nmcli connection up bridge0
nmcli connection up vlan-60
nmcli connection up bridge60
nmcli connection up vlan-100
nmcli connection up bridge100
With the change to allow arbitrary layering and the removal of the special *-slave types, this simply becomes:
nmcli c add type bridge ifname bridge0 con-name bridge0 connection.autoconnect yes ipv4.method manual ipv4.addresses "10.0.0.2/24"
nmcli c add type bridge ifname bridge60 con-name bridge60 connection.autoconnect yes ipv4.method manual ipv4.addresses "10.0.60.2/24"
nmcli c add type bridge ifname bridge100 con-name bridge100 connection.autoconnect yes ipv4.method manual ipv4.addresses "10.0.100.2/24"
nmcli c add type vlan con-name vlan-100 dev bond0 id 100 master bridge100 connection.autoconnect yes
nmcli c add type vlan con-name vlan-60 dev bond0 id 60 master bridge60 connection.autoconnect yes
nmcli c add type bond ifname bond0 con-name bond0 connection.autoconnect yes master bridge0 bond.options mode=active-backup
nmcli c add type ethernet ifname eth1 con-name eth1 master bond0 connection.autoconnect yes
nmcli c add type ethernet ifname eth2 con-name eth2 master bond0 connection.autoconnect yes
nmcli c add type ethernet ifname eth3 con-name eth3 master bond0 connection.autoconnect yes
This release also brought the ability to manage many more types of devices such as macvlan, vxlan and tunnels.
So, what’s next?
Somewhat surprisingly the current Fedora release matches the Red Hat Enterprise Linux 7 version of NetworkManager for the moment, but Fedora 26 is right around the corner.
This will bring a jump all the way up to 1.8.0, which of course gets the 1.6.0 improvements in the process.
It’s worth checking out the developer blogs for 1.6.0 and 1.8.0, but some of the key things to look out for are MACsec support for networks that require Layer 2 encryption, IPv6 connection sharing (which uses Prefix Delegation), and better handling of restarts of the NetworkManager service.
In researching this article, there are only a couple of areas left that, as far as I can see, still need the legacy network service:
- Use of openvswitch
- Openstack deployments
Outside of this there’s no reason to disable NetworkManager any longer.
As for the future? Well there’s always work to do!
What is the typeface used on the wallpaper, if I may ask?
The font for the first line is Grand Hotel. The other font is Montserrat Extra Bold.
I liked your point about the fragility of the network shell scripts. They’re designed for a server with one or two interfaces that never change after initial boot. That doesn’t work for a workstation where you can plug in and unplug USB Wi-Fi devices.
Heck, even on a server there are many cases where they “don’t work” … like changing bond/bridge/vlan names and members, etc.
First, what was the reason behind the EUI-64 implementation in the first place, since it doesn’t provide any security?
Second, right now, what is being used in Fedora? RFC 7217?
After an ifconfig enp2s0 I get a global unicast and a link-local address, both without any sign of EUI-64 (no FFFE present).
I guess the global unicast is being generated with SLAAC and random interface ID.
The point of EUI64 in the first place, as I understand it, was simply that “that was what is used for SLAAC” …
When RFC 7217 was finalized and so on, they shifted to that instead to give a stable, but private, address.
The way the address is calculated is: a random secret key is generated by grabbing some data from /dev/urandom and storing it at /var/lib/NetworkManager/secret_key.
Then an algorithm is used with that and the UUID of the connection (by default, though you can set stable-id to something else instead of the UUID) to actually generate the IPv6 address, while ensuring it doesn’t change on each activation of the connection.
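A hedged sketch of inspecting and overriding those inputs (the path is as described above; YOUR_CONNECTION and my-stable-id are placeholders):

```shell
# The per-host secret that seeds RFC 7217 address generation (root only):
sudo cat /var/lib/NetworkManager/secret_key
# Use a different stable-id than the connection UUID for address generation:
nmcli connection modify YOUR_CONNECTION connection.stable-id my-stable-id
```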
João Carlos Mendes Luís
Some people miss resolvconf functionality, especially for adding the ndots option. Is the only alternative to add a manual dispatcher, like the one proposed here?
Could this be a sysconfig or other config option?
man NetworkManager.conf states:
Set the resolv.conf management mode. The default value depends on NetworkManager build
options, and this version of NetworkManager was built with a default of “symlink”.
Regardless of this setting, NetworkManager will always write resolv.conf to its runtime
state directory /var/run/NetworkManager/resolv.conf.
Have you tried doing this?
cat > /etc/NetworkManager/conf.d/leave-resolv.conf <<EOF
[main]
dns=none
EOF
There are also options for using resolvconf and netconfig to set resolv.conf, according to the man page.
There is an easier way to add custom options in resolv.conf:
nmcli connection modify ipv4.dns-options ndots:2
sorry, it was:
nmcli connection modify YOUR_CONNECTION_NAME ipv4.dns-options ndots:2
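To confirm the option was stored and reaches the generated resolv.conf, something like this should work (a sketch; YOUR_CONNECTION is a placeholder):

```shell
# Show just the stored dns-options for the connection:
nmcli -f ipv4.dns-options connection show YOUR_CONNECTION
# Re-activate so the generated resolv.conf picks it up, then check:
nmcli connection up YOUR_CONNECTION
grep options /etc/resolv.conf
```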
What about NM’s integration into systemd-networkd config files?
Aren’t ifcfg-* config files going to be deprecated anytime soon? Thanks
NetworkManager and systemd-networkd are two different technologies, with slightly differing scopes and goals, to achieve similar ends.
I haven’t seen any discussion about trying to parse any systemd .network files … though I imagine a similar parser to the one used for the ifcfg- files could be implemented … but I do wonder what the gain of that would be over just using systemd-networkd in that case.
Thanks a lot. My humble opinion is “just using systemd-networkd” is already a gain “per se”.
I’ve often wondered why ifcfg-* files are the recommended way to configure NM in Fedora still. Should there not be a push to using native NM config files via the keyfile plugin as default, and to deprecate the ifcfg-rh plugin ?
I suppose it’s just for familiarity for the large part, and so that someone could/can just switch to the legacy network service at need and have the vast majority of things “just work”.
There’s enough of an outcry at the mere suggestion of removing the remaining package dependencies on net-tools; I don’t think I want to consider the response to ending ifcfg-*.
In addition to nmcli, there’s nmtui, a text user interface for those of us that run servers w/o X11.
That was 99% of the reason I removed NetworkManager from my servers. Not everyone develops and runs on a laptop, and I wish the people working on Ubuntu and Red Hat had realized that sooner.
nmtui is alright … but personally I prefer nmcli con edit as a terminal interface.
Fortunately these days there’s no reason to care about any sort of UI other than a CLI now for NM 🙂
Can you point out why NM would not be used in OpenStack deployments?
The documentation specifically says: “OpenStack Networking does not work on systems that have the Network Manager service enabled. ”
Many thanks for the comprehensive overview! Personally I find the GUI based NM applet very helpful to create more complex setups (e.g. bridge for connecting virtual machines to the physical LAN).
Just one question: What exactly is “IPv6 connection sharing” here? I noticed this term to be used by a major electronics manufacturer for some of their products, but could not find any details.
In the context of the 1.6.0 feature …
Prefix Delegation allows a device to request network space from the router (remember, your router should be provided more than a single network’s worth of IPv6 space … my ISP at home provides a /56, for instance) and to be given a subnet of its own (normally a /64), with this routed to it.
This eliminates the need for NAT like IPv4 sharing uses; instead the data is simply routed to the device, so it’d be on its own subnet without NAT.
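In nmcli terms, the 1.6.0 feature is exposed through ipv6.method=shared on the downstream connection; a sketch (YOUR_LAN_CONNECTION is a placeholder):

```shell
# Share upstream IPv6 (via prefix delegation) with clients on this interface:
nmcli connection modify YOUR_LAN_CONNECTION ipv6.method shared
nmcli connection up YOUR_LAN_CONNECTION
```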
What I sometimes miss in NM is ‘quick’ configuration. “Hmm, I need to configure this switch, operating on 10.12.32.5/16, which is outside my subnet. Let me quickly ‘ip addr add’ etc.” A few seconds later, the config is gone. NM has removed it…
Huh I didn’t realise it’d remove an address like that, honestly hadn’t noticed it doing so.
I haven’t directly tested it but you might want to try this for “quick” configuration:
nmcli connection modify YOUR_CONNECTION +ipv4.addr "10.12.32.5/16"
Ahhh handy. It does that, I was combatting it with the ‘watch’ tool, to add the address every second.
The --temporary flag to the modify command suits my needs. So when returning to a properly routed network, the IP goes away.
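Another option worth trying for one-off addresses is nmcli device modify, which (as I understand it) changes the runtime configuration immediately without touching the saved profile:

```shell
# Add an extra address to the running device only; gone on reactivation:
nmcli device modify YOUR_DEVICE +ipv4.addresses "10.12.32.5/16"
```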
I will give this a try!
Good write-up. In 2017 I still have to disable IPv6, as my ISP (who provide most of the infrastructure for the UK) haven’t upgraded their regional internet hubs to support it yet. Given it is being used a lot more, it seems odd.
The router they supplied is IPv6 compliant but checks for an IPv6 connection to the internet hub in Manchester and cannot find it so disables the feature.
Wheel reinvented again … reminds me a lot of Microsoft 🙂
So where’s the resources on how to train up on the latest and greatest way of doing things? My head is reeling.
One point to consider is that NetworkManager exposes a D-Bus interface, allowing other applications to have a centralized configuration point: e.g., Cockpit leverages NetworkManager to configure networking.
It allows users to do network configuration via desktop applets, and enables many advanced features via the command line (nmcli) that let you easily customize the configuration (think of the dispatcher scripts functionality, which allows you to react to networking events with a simple shell script).
All of this is managed in a centralized way, so that all “clients” access network configuration in a coordinated way.
Moreover, NetworkManager tries to detect external configuration (e.g., an ip address add on a device not yet configured) and preserve it, creating a tracking in-memory connection for it.
Doesn’t all of this sound great? 😉
Anyway, this does not mean that to use NetworkManager you need to know and master all the 101 features it provides.
Just give a try to:
$> nmcli device
$> nmcli connection
$> nmcli connection show YOUR_CONNECTION
$> nmcli connection modify YOUR_CONNECTION OPTION VALUE
These four commands allow me to do 99% of the daily networking tasks I need.
This would be enough to leverage and appreciate NetworkManager.
And if you want to manually configure an interface without having NetworkManager interfering you don’t need to shut it off… just do a:
$> nmcli device set YOUR_DEVICE managed no