Sometimes there are just too many ways to skin a cat. I've been investigating the various ways you can get network access to a guest OS on CentOS 8.1 / qemu-kvm, and it's extremely confusing.
Categorically, there are three ways:
1) a virtual NAT interface (provisioned automagically in the base install)
2) a host bridge or virtual switch
3) Direct hardware access (a la SR-IOV)
Let's take a look at each:
1) NAT Interface
When you fire up KVM for the first time and haven't done any other setup, you are prompted to load a NAT interface as the default option. This is just fine if your application is a user workstation. Just fire up a VM, load Windows 10 (or whatever), and your guest operating system will have access to the Internet. If, however, your application is a server, this is not ideal: you would have to set up a port-map from the host OS to the guest OS for every server application you want to run. For that reason, we want to investigate some other options.
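To give a sketch of what that port-mapping burden looks like: with libvirt's default NAT network (typically 192.168.122.0/24), each guest service needs its own DNAT rule on the host. The guest address and ports below are hypothetical, not taken from my setup.

```shell
# Hypothetical example: expose a web server running in a guest at
# 192.168.122.50 (libvirt's default NAT subnet) on host port 8080.
# Addresses and ports are illustrative -- adjust for your environment.
iptables -t nat -A PREROUTING -p tcp --dport 8080 \
  -j DNAT --to-destination 192.168.122.50:80
iptables -I FORWARD -p tcp -d 192.168.122.50 --dport 80 -j ACCEPT
```

You would repeat that pair of rules for every guest service, which is exactly the hassle that pushes us toward a bridge.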
2) Host Bridge / Switch
The fastest way to get your servers up and running is to use the built-in bridge functionality, which is simple to configure and also quite fast. It is, however, not the only option. You can also install a 'lite' version of Open vSwitch as a plug-in to NetworkManager, or you can do a full-blown kernel-based install of Open vSwitch. There are probably other 3rd-party switches you can install, but OVS seems to be the main one.
The difference between the built-in bridge and going 3rd party is fairly subtle at the pure network-functionality level. Open vSwitch enables LACP (port bonding), VLANs (802.1q), and similar features without relying on the host operating system, which is kind of cool, but I can also do all of those things in the guest OS natively. It's unclear to me whether there is any real advantage to using the OVS plug-in for NetworkManager over the other approaches.
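For example, doing VLAN tagging natively in the guest is only a couple of commands with NetworkManager. The interface name, VLAN ID, and address below are illustrative, not from my setup.

```shell
# Inside the guest: tag VLAN 100 on the virtio NIC eth0
# (names and addresses here are examples only).
nmcli connection add type vlan con-name vlan100 ifname eth0.100 dev eth0 id 100
nmcli connection modify vlan100 ipv4.method manual ipv4.addresses 10.0.100.5/24
nmcli connection up vlan100
```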
If, however, you do a full-blown kernel-level install, you get a lot of additional goodies.
Switch telemetry data (NetFlow, sFlow, IPFIX) - now you can see traffic moving between VMs that never hits your external switch
Instrumentation and automation - you can write functions directly against the API
State and logical configuration management - makes it easier to move VMs and track movement
Integration with open-source cloud control platforms - Open vSwitch has been integrated with OpenStack, OpenNebula, oVirt, and openQRM
OpenFlow - particularly useful if you are running Open vSwitch natively on a network appliance; you can set up your own routing/switching policies and push them to your OVS platform
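To give a flavor of that last point, here is a minimal, hypothetical OpenFlow policy pushed with ovs-ofctl; the bridge name and destination address are placeholders, and this assumes an OVS bridge already exists.

```shell
# Assumes a kernel-level OVS bridge named ovsbr0 already exists.
# Drop traffic destined for one address; switch everything else normally.
ovs-ofctl add-flow ovsbr0 "priority=100,ip,nw_dst=192.0.2.10,actions=drop"
ovs-ofctl add-flow ovsbr0 "priority=0,actions=normal"
# Verify what was installed:
ovs-ofctl dump-flows ovsbr0
```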
This all sounds like a great learning opportunity, so we'll add Open vSwitch to our project list and document the process in a future blog.
2a) Setting Up a Host Bridge
A couple of notes on how the built-in bridge works. You will delete your ethernet interface, add a bridge interface, configure the bridge interface with an IP address, and then slave your ethernet interface to the bridge. The bridge will then show up as a usable interface when you create VMs. You can use it in as many VMs as you want, but you can only slave a physical interface to one bridge. You can also set up multiple virtual NICs in a VM using that one bridge, if that's something you want to do. Another point worth noting: your host OS can communicate directly with any of the virtual machines you create, which is not the case with the next method we will discuss.
The example below uses the NetworkManager service that has been the default in the last few major releases of Ubuntu and is also the default in CentOS 8.1. I've decided I hate it (see my other blog entry on how I am setting up servers going forward, which begins with uninstalling NetworkManager!).
First, create the bridge:
# nmcli connection add type bridge autoconnect yes con-name br0 ifname br0
Assuming you need this ethernet interface for host connectivity, you probably want to configure an IP address on the bridge (if this is a second or third connection, maybe you don't). Configure the bridge interface with your static IP address info (obviously replacing my example below with your desired info):
# nmcli conn modify br0 ipv4.addresses 192.168.86.10/24 ipv4.method manual
# nmcli conn modify br0 ipv4.gateway 192.168.86.1
# nmcli conn modify br0 ipv4.dns 192.168.86.1 +ipv4.dns 126.96.36.199
Next, we need to identify the connection name of the interface that the bridge will connect to:
# nmcli conn show
NAME           UUID                                  TYPE      DEVICE
br0            10e5caca-2ae0-4bd1-9fbc-f57731d15bad  bridge    br0
eno1           99a0ce66-ebd6-3c46-8e7d-424ff0565a08  ethernet  eno1
System ens4f0  11d1d159-e438-bc38-bba2-411145f244b4  ethernet  ens4f0
wlp7s0         8613e187-e4ed-44b8-97c7-bfaca51c1c3f  wifi      wlp7s0
The device we want to use is ens4f0, and the connection name is "System ens4f0". Find this information for yourself, delete the ethernet interface, and then slave it to the bridge:
# nmcli conn del "System ens4f0" Connection 'System ens4f0' (11d1d159-e438-bc38-bba2-411145f244b4) successfully deleted. # nmcli conn add type bridge-slave autoconnect yes con-name ens4f0 ifname ens4f0 master br0 Connection 'ens4f0' (2e0970f5-87f6-4cc9-9db4-fe8add051a55) successfully added.
Last, we activate the bridge interface:
# nmcli conn up br0
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)
And now let's look at the information:
# nmcli conn show
NAME    UUID                                  TYPE      DEVICE
br0     10e5caca-2ae0-4bd1-9fbc-f57731d15bad  bridge    br0
eno1    99a0ce66-ebd6-3c46-8e7d-424ff0565a08  ethernet  eno1
wlp7s0  8613e187-e4ed-44b8-97c7-bfaca51c1c3f  wifi      wlp7s0
ens4f0  2e0970f5-87f6-4cc9-9db4-fe8add051a55  ethernet  ens4f0
# bridge link show
4: ens4f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state listening priority 32 cost 100
And the IP info (in this case I didn't set any static IP address info, so the IPv4 address was received via DHCP and the IPv6 address via SLAAC):
# ip a show dev br0
17: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 90:e2:ba:c5:e8:08 brd ff:ff:ff:ff:ff:ff
    inet 192.168.86.100/24 brd 192.168.86.255 scope global dynamic noprefixroute br0
       valid_lft 85927sec preferred_lft 85927sec
    inet6 2601:281:8300:ae:71e8:192b:a56e:826b/64 scope global dynamic noprefixroute
       valid_lft 86388sec preferred_lft 14388sec
    inet6 fe80::6d18:9a60:8742:388f/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
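With br0 up, attaching a guest to it is just a matter of pointing the VM's NIC at the bridge. Here is a hypothetical virt-install invocation (the VM name, sizes, and ISO path are illustrative, not from my setup):

```shell
# Sketch only: create a guest whose virtio NIC is slaved to br0.
virt-install \
  --name web01 \
  --memory 4096 --vcpus 2 \
  --disk size=20 \
  --cdrom /var/lib/libvirt/images/CentOS-8.1.1911-x86_64-dvd1.iso \
  --network bridge=br0,model=virtio \
  --os-variant centos8
```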
3) Direct Hardware Access Using Single root I/O Virtualization (SR-IOV)
First of all, there are many requirements for this to work: CPU support, motherboard and BIOS support (and make sure it's enabled in the BIOS), and NIC support. Check all of these when spec'ing your machine if you want to go this route.
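A few quick sanity checks you can run before committing to SR-IOV; the PCI address and interface name below are placeholders for your own hardware.

```shell
# 1) IOMMU enabled on the kernel command line (Intel or AMD)
grep -Eo '(intel|amd)_iommu=on' /proc/cmdline
# 2) The NIC advertises the SR-IOV PCI capability (address is an example)
lspci -vs 01:00.0 | grep -i 'Single Root I/O Virtualization'
# 3) Carve out virtual functions (here, 4 VFs on enp1s0f0)
echo 4 > /sys/class/net/enp1s0f0/device/sriov_numvfs
# 4) The VFs should now appear as new PCI devices
lspci | grep -i 'Virtual Function'
```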
One of the key features of this approach is supposed to be higher performance. I think that if I were building a router or firewall out of a Linux box I would likely go this route, but for my purposes it's not really necessary.
One of the downsides of this method is that the host can't communicate directly with any of the guests, which is very annoying. I also regularly had issues with overall network connectivity when using these interfaces in KVM (they show up as macvtaps), so I have stopped using them entirely.
I'm afraid I can't put much of a how-to walkthrough in here because this blog has been in draft for well over a year now, and I'm just going to hit the publish button so the other goodness that's here can be out there for folks to find and use.