Patch the kernel and QEMU for better compatibility with graphics card / VGA VFIO passthrough
Create and configure a new virtual machine (VM) with real hardware attached to it
Configure CPU pinning on the VM for better gaming performance
Build considerations & preparation
QEMU has several PCI passthrough techniques, the newest of which is VFIO. QEMU's normal PCI passthrough leaves much to be desired whereas VFIO takes full advantage of IOMMU, has better device support and prevents multiple access to the same device (you can read more about it in Alex Williamson's presentation here).
That said, VFIO is a relatively new and experimental technology for the purposes of passing through entire VGA cards to virtual machines. While I and many others have had tremendous success, different hardware can produce different results and getting there may not always be straightforward.
Until Fedora 21 is released, you will likely need to patch and rebuild both the Linux kernel and QEMU; instructions for doing so are provided in this tutorial. If you are purchasing hardware, it is also strongly recommended that you read over the KVM VGA-passthrough thread on the Arch Forums to confirm that your intended hardware configuration has been reported to work by another user.
Personally, the following hardware has worked wonderfully for me:
Motherboard: Supermicro X10SAE
CPU: Xeon E3-1225v3
Audio: Onboard (Intel C220 HD audio) and AMD R9 270X HDMI
Video: AMD Radeon R9 270X
Network: Intel I210 Gigabit
Anecdotal evidence suggests that for graphics passthrough, nVidia cards seem to fare better than AMD ones. However, success has been had on both sides with a variety of device models dating back several years. I have found that problems are generally not inherent to the hardware, but are more a matter of adjusting your software stack (i.e. applying certain patches) to get a compatible passthrough. From my reading, nVidia's GeForce 6xx/7xx and AMD's Radeon R9 series seem to work fairly painlessly.
For network cards, prefer passing through an Intel Ethernet controller over a Realtek one whenever possible.
Common problems with VGA VFIO passthrough
Before getting to the fun part, there are several key pieces of a functioning VGA passthrough that need further description. Understanding these issues will be key to creating a working host environment for VFIO VGA passthrough.
PCIe device reset
In order to re-initialize a PCIe device, it needs to be reset. Normally the host controls this; however, now that we are passing the device through to a VM, some additional work is required to get reset functioning correctly. Without this extra PCIe reset support, the machine typically freezes when starting your VM for a second time.
Fortunately for us, kernel >= 3.12 has this support and simply upgrading the kernel fixes the issue.
VGA arbiter and multiple GPUs
Passing through generic PCI devices with VFIO works pretty well. Graphics card passthrough gets put into its own category called "VGA passthrough" because of the technical challenges involved in presenting a functioning GPU for the virtual machine to initialize without things going awry.
Most computers today come with a GPU built in to the CPU. This will be a major headache when trying to set up VGA passthrough, as VGA is a really old standard: back when it was created, having multiple graphics cards in a single system was not a configuration its designers had foreseen. VGA calls can only be directed to a single device at a time, so the kernel has to use a VGA arbiter that switches the active device and directs VGA accesses to the appropriate card. I am oversimplifying this a bit, but Alex Williamson has a detailed post explaining the technical issues. In short, the x-vga=on flag passed to VFIO indicates to the VGA arbiter that the VFIO driver will need to participate in VGA arbitration, so everyone stays happy.
The problem is that the Xorg i915 driver for Intel's integrated GPUs does not participate in VGA arbitration, even though the devices claim the VGA address space. This means VGA calls get directed to the wrong card, (a) messing up your display on the host and (b) preventing the graphics card on the VM from functioning correctly. Ugh.
Fortunately, Alex has also written kernel patches to fix this, however be warned that they cripple 3D performance on the Intel GPU. Since we're building a high-performance VM for gaming, I am assuming that will not be an issue for you.
NoSnoop
NoSnoop is a feature flag on a PCIe device that allows it to issue transactions that bypass the cache. This can cause consistency problems when passing the card through to a virtual machine.
You can check whether your card has NoSnoop enabled by running lspci -vvvv as root and seeing if your graphics card lists NoSnoop+ (enabled) or NoSnoop- (disabled) under the Capabilities > DevCtl section.
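For example, a quick check might look like this (run as root; the 01:00.0 bus address is illustrative, substitute the one reported for your card):
# Find your graphics card's PCI bus address
lspci -nn | grep -i vga
# Dump its capabilities and look for the NoSnoop flag
lspci -vvvv -s 01:00.0 | grep -i snoop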
Previously this required patches to the kernel, but with kernel 3.15.5 in Fedora, these patches are no longer required.
Rebuilding packages
The first step is to set up a packaging environment and download the upstream source RPMs. A minimal sketch, assuming a Fedora 20 system with the rpmdevtools and yum-utils packages available:
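# Install the packaging tools (as root)
yum install rpmdevtools yum-utils
# Create the ~/rpmbuild directory tree (as your regular user)
rpmdev-setuptree
# Fetch the kernel source RPM from the Fedora repositories
yumdownloader --source kernel
It should have downloaded kernel-[version].fc20.src.rpm into your current directory.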
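Install the source RPM to unpack the spec file and sources into ~/rpmbuild (the exact filename depends on the kernel version you downloaded):
rpm -ivh kernel-*.fc20.src.rpm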
Download any of the patches listed above that may be required for your hardware configuration (as plaintext patch/diff files) and save them to ~/rpmbuild/SOURCES.
Rebuilding QEMU
Update 2014-06-09: the newest versions of QEMU in virt-preview (>= 2.0.0) have the NoSnoop patches included! No rebuilding necessary. If you previously followed this guide, remove any QEMU exclusions from yum.conf and update to the latest available version.
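If you need to undo the exclusion, something like the following should work (this assumes the exclusion was added as its own exclude=qemu* line in /etc/yum.conf):
# Drop the QEMU exclusion, then update to the virt-preview build
sed -i '/^exclude=qemu/d' /etc/yum.conf
yum update 'qemu*'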
Rebuilding Kernel
Download any required patches and save them as plaintext files in your ~/rpmbuild/SOURCES folder. Next, edit ~/rpmbuild/SPECS/kernel.spec and find the lines where the existing patches are declared, then add your own PatchXYZ: filename.patch lines alongside them, as sketched below.
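The patch number and filename here are illustrative; pick a number that isn't already in use. Note that the Fedora kernel spec applies patches with ApplyPatch in its %prep section, so each patch needs both lines:
# Declared alongside the other PatchNNNNN: lines
Patch25000: i915-vga-arbiter.patch
# Applied in the %prep section, near the other ApplyPatch calls
ApplyPatch i915-vga-arbiter.patch
When the spec is ready, kick off the build (one common invocation; --with baseonly and --without debuginfo skip kernel variants you likely don't need):
rpmbuild -bb --with baseonly --without debuginfo --target=$(uname -m) ~/rpmbuild/SPECS/kernel.spec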
It may list some missing build dependencies. If so, install them with yum install foo and then run the rpmbuild command again. When the build is complete, it outputs a list of the package files it produced. Here's a quick command to install them (assuming an x86_64 build; adjust the path for your architecture):
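# Install all of the freshly built kernel packages
yum localinstall ~/rpmbuild/RPMS/x86_64/kernel-*.rpm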
Reboot, and your system will be fully patched! I suggest adding a line exclude=kernel* to /etc/yum.conf to prevent your patched packages from being replaced by regular updates, e.g.:
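# Append the exclusion to /etc/yum.conf (as root)
echo 'exclude=kernel*' >> /etc/yum.conf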
Installing KVM
Because this is all experimental stuff, install fedora-virt-preview to get the latest and greatest software set. A sketch, assuming the repo file is still published under fedorapeople.org/groups/virt/virt-preview (verify the current location first):
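# Enable the virt-preview repository (as root; check the URL against the Fedora wiki first)
curl -o /etc/yum.repos.d/fedora-virt-preview.repo http://fedorapeople.org/groups/virt/virt-preview/fedora-virt-preview.repo
# Install the virtualization stack
yum install qemu-kvm libvirt virt-manager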
Next, let's be nice to the VMs and give them some time to perform a graceful shutdown before the host powers off:
sed -i 's/#ON_SHUTDOWN=.*/ON_SHUTDOWN=shutdown/' /etc/sysconfig/libvirt-guests
systemctl enable libvirt-guests
systemctl enable libvirtd
Edit the default kernel boot arguments (specified in GRUB_CMDLINE_LINUX of /etc/default/grub) and add intel_iommu=on for Intel CPUs or iommu=pt iommu=1 for AMD CPUs to turn on IOMMU functionality.
As well, the host initializes certain devices at boot (e.g. graphics cards, USB controllers, audio chipsets), so these devices need to be manually assigned to the pci-stub driver to prevent the host from using them during boot. This allows the VFIO driver to later bind to the devices and pass them to a VM. Add pci-stub.ids=PCI_IDs, where PCI_IDs is a comma-separated list of PCI IDs as given by lspci -nn. For example, a finished GRUB_CMDLINE_LINUX might look like the following (the IDs are illustrative, for a Radeon R9 270X and its HDMI audio function; substitute your own):
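GRUB_CMDLINE_LINUX="rhgb quiet intel_iommu=on pci-stub.ids=1002:6810,1002:aab0"
# Regenerate the GRUB configuration so the new arguments take effect (as root)
grub2-mkconfig -o /boot/grub2/grub.cfg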
The system will now automatically attempt to bind the devices listed in /etc/sysconfig/vfio-bind to VFIO at bootup. The format of FULL_PCI_IDs is a little different than earlier: it is space-separated and requires a full bus address prefix, as per ls /sys/bus/pci/devices. You can use lspci -nn to identify a device, and then the output from the file listing to identify its full prefix. Here's an example of what the configuration looks like (the addresses are illustrative; use your own):
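# /etc/sysconfig/vfio-bind -- full bus addresses, space-separated
DEVICES="0000:01:00.0 0000:01:00.1"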
Edit the variables YOUR_VM_NAME, MEMORY_KB, NUM_CPUS (physical CPUs), NUM_CORES (cores per CPU), and NUM_THREADS (threads per core; 1=normal and 2=each virtual core gets a corresponding HyperThreading CPU) to your liking.
On my host, I have a 4-core CPU with hyperthreading (8 logical cores), so assigning the virtual machine 1 CPU, 3 cores and 2 threads (resulting in 6 logical virtual CPUs visible, and reserving 1 physical core + 1 HT core for the host) has worked very well.
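In libvirt XML terms, that topology might be expressed like this (a sketch; the sockets/cores/threads values correspond to NUM_CPUS/NUM_CORES/NUM_THREADS above):
<cpu mode='host-passthrough'>
  <topology sockets='1' cores='3' threads='2'/>
</cpu>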
The sections where the variables DEV_PARTITION_PATH and PATH_TO_LOCAL_FILE appear can be used together or individually, depending on your configuration. A plain image file on your disk can be created with qemu-img, or you can point the VM at an unformatted partition (preferred, for better performance; it can be a physical partition or a logical RAID/LVM one).
As well, be sure to specify the correct GPU_PCI_ID and other device IDs for your setup. If you only want to pass through a GPU, then remove the -device vfio-pci entries for the other devices.
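For the image-file route, a sketch (the path and size are illustrative):
# Create a 120 GB raw disk image for the VM
qemu-img create -f raw /var/lib/libvirt/images/gaming-vm.img 120G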
When you have fully customized the file, import it:
virsh define ~/gaming-vm-sample.xml
You may wish to open virt-manager and copy the host CPU features if you're not running a Haswell CPU. Another tip: for Windows, you will need to download the latest VirtIO driver image and attach it to the machine in order for Windows to detect a disk. During installation, use the Have Disk... option, browse to the WIN7/AMD64 folder, and install the device drivers listed there.
Now set the VM to auto-start on boot and get gaming:
virsh autostart YOUR_VM_NAME
Troubleshooting
If your VM isn't booting, try editing the VM configuration (virsh edit YOUR_VM_NAME) and, on the line where you input GPU_PCI_ID, change bus=pcie.0 to bus=root.1,addr=00.1. This exposes the graphics card on a different PCIe port in the VM, which may sometimes help.
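As a sketch, the relevant -device line would change roughly as follows (the host address and other flags are illustrative, and root.1 assumes the configuration defines a PCIe root port by that name):
# Before
-device vfio-pci,host=01:00.0,bus=pcie.0,multifunction=on,x-vga=on
# After
-device vfio-pci,host=01:00.0,bus=root.1,addr=00.1,multifunction=on,x-vga=on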
For nVidia users, recent driver packages are apparently broken unless the kvm=off flag is passed to QEMU's -cpu parameter. nVidia's driver checks for the KVM hypervisor's signature and disables itself when it detects one. It is not clear whether this was an intentional change, but this is the reality of it.
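A sketch of the corresponding QEMU argument (kvm=off requires a reasonably recent QEMU build; the rest of the -cpu value depends on your configuration):
-cpu host,kvm=off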
Be sure to read through the KVM thread on the Arch Linux forums that's been linked throughout this howto, as it contains tons of valuable (albeit scattered) information. Another tip would be to get in touch with the fedora-virt mailing list and describe your issue.