Setting up a Windows VM with GPU passthrough

I’ve finally managed to set up a Windows VM with direct access to my graphics card and my network interface card (NIC)!

I set myself this goal way back when I was picking components for my PC (see the CPU section), when I got the idea while reading up on VT-d and IOMMU. Back then, these guides were making waves.

This YouTube video by blu3bird84 nicely illustrates the vision of having a Linux machine that can quickly spin up a Windows VM with near-native graphics performance:

The story so far

I actually tried this in the past already, but back then – after compiling custom kernels, qemu and so on – I hit a driver packaging conflict between the Nvidia and Intel drivers. It is now possible to install the Nvidia drivers alongside the Intel drivers (the former with and the latter without OpenGL support), so I came back to this project.

The setup

I mostly followed the instructions on the Arch Linux PCI passthrough via OVMF wiki page and cross-referenced them with the VFIO tips and tricks blog (this seems to be the place where people migrated after the massive thread linked to earlier got closed).

I wanted to go with a pure UEFI setup, thus avoiding the need for the often-referenced VGA arbiter patch. In practice, I needed to do the following:

  • Install the Intel graphics driver, but without mesa and mesa-libgl.
  • Set up the hypervisor as explained on the wiki page and in the first few parts of the blog’s “VFIO GPU How To series”.
  • Ensure that the GPU’s vBIOS supports the UEFI Graphics Output Protocol (GOP). As I had an MSI card whose vBIOS did not support the UEFI GOP out of the box, I had to request a UEFI GOP firmware on their forums.
  • Disable the UEFI Compatibility Support Module (CSM), or at least the bits covering the graphics card.
  • Set up the VM as explained in the VFIO GPU How To series, part 4.
  • Use the Windows 10 tech preview as guest OS. Windows 7 and Windows Server 2008r2 fail to boot because they require the UEFI graphics CSM to be enabled 🙁
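The hypervisor setup in the second step boils down to a handful of config snippets. A minimal sketch for an Arch-style system – the PCI IDs below are placeholders, look up your own with `lspci -nn`:

```shell
# /etc/default/grub — enable the IOMMU on an Intel CPU,
# then regenerate grub.cfg with grub-mkconfig
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

# /etc/modprobe.d/vfio.conf — have vfio-pci claim the GPU and its HDMI
# audio function before the Nvidia driver can; the IDs below are
# placeholders, find yours with: lspci -nn | grep -i nvidia
options vfio-pci ids=10de:1187,10de:0e0a

# /etc/mkinitcpio.conf — load the vfio modules early in the initramfs
MODULES="vfio vfio_iommu_type1 vfio_pci vfio_virqfd"
```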

Note that I am no longer required to compile custom kernels or qemu!

I also set up PCI passthrough for an extra Intel PCI Express 1G NIC I acquired in preparation for this. It pretty much worked out of the box: I configured the vfio driver to take control of the NIC at boot time, similar to how these instructions do it for the graphics card, and then used the virt-manager GUI to add the PCI host device representing the NIC to the VM.
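The virt-manager step for the NIC corresponds to a `<hostdev>` entry in the VM’s libvirt XML. A sketch, assuming (hypothetically) the NIC sits at PCI address 03:00.0 – check yours with `lspci`:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- host PCI address of the NIC; 03:00.0 is an example -->
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```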

[Update 24 Jan 2016] I only managed to get PCI passthrough to work using the i440fx chipset; using the Q35 chipset resulted in the devices not starting up in Windows (code 10 in Device Manager, if I remember correctly).
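For reference, the chipset choice lives in the `<os>` section of the libvirt domain XML. The working i440fx variant looks roughly like this – the machine version and OVMF path are examples and depend on your distribution and QEMU release:

```xml
<os>
  <type arch='x86_64' machine='pc-i440fx-2.4'>hvm</type>
  <!-- OVMF firmware for the pure-UEFI boot; path is distro-dependent -->
  <loader readonly='yes' type='pflash'>/usr/share/ovmf/x64/OVMF_CODE.fd</loader>
</os>
```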

The result

Here’s my PC showing qemu/libvirtd running a Windows VM driving the left screen via my Nvidia GTX 760, while the right screen is driven by the onboard Intel GPU (i915 driver), running the Linux host.

Windows VM with PCI passthrough setup for the GPU


Preliminary benchmarks

[Update: section added 21/09/2015]

3DMark 11

native: P8034

VM: P7655 (first run), P7673 (re-run; see update below)

[Update 10 October 2015: re-ran benchmark with CPU model set to “host-passthrough”, and manually set topology to 1/4/1 – now 3DMark no longer complains about a dodgy CPU]

This puts the VM’s 3DMark 11 performance at roughly 95% of native. Note that this isn’t a perfect comparison: the OS is different, the VM got 2 GB less RAM, and I left my browser running in Linux. I also had to manually set the CPU topology in qemu to match what ‘sudo virsh capabilities’ reported.
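The CPU settings from the update above map to this fragment of the libvirt domain XML – 1 socket / 4 cores / 1 thread is what `virsh capabilities` reported for my host; adjust to yours:

```xml
<cpu mode='host-passthrough'>
  <topology sockets='1' cores='4' threads='1'/>
</cpu>
```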

Remaining challenges & next steps

I’ve shown that the basic technologies work on my machine. I’m well on my way to having a gaming-capable Windows VM. However, I haven’t quite reached an agreeable end state yet – here’s what I still want to achieve before using this permanently on my desktop PC:

  • I’d like to use my second SSD as the storage backend for the VM, ideally in such a way that I can boot the system installed on that SSD both natively and inside a VM (like you can on a Mac using VMware Fusion and Boot Camp). Currently, the Windows 10 VM lives in a raw file on my primary SSD.
  • I have to switch the UEFI CSM off to boot Linux, and on to boot my Windows Server 2008r2 partition (set up so that it’s basically Windows 7). I think this issue would go away if I uninstalled the Intel driver, but that would be going in the wrong direction! :). To resolve it, I’m likely to upgrade to Windows 10, which has better UEFI support.
  • Once the above two items are done, I’ll have enough space to install some benchmark apps and evaluate just how good or bad this setup is. Then I can measure and optimise further, as required. [Update 21/09/2015: added the preliminary benchmarks section]
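For completeness, a raw file backend like the one mentioned in the first item can be created as a simple sparse file; a sketch, where the file name and size are examples (`qemu-img create -f raw` does the same job):

```shell
# Create a sparse raw disk image for the VM; it only consumes real disk
# space as the guest writes to it. Name and size are examples.
truncate -s 60G win10.img

# Apparent size is 60G, actual usage is near zero:
ls -lh win10.img
du -h win10.img
```

Switching to the SSD later should then just mean pointing the VM’s disk at the block device (ideally via a stable /dev/disk/by-id path) instead of this file.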

Until next time!

Comments (2)

  1. John Call

    Great article, thanks for sharing! I'd be especially interested in any followup successes you have in the area of booting natively from the SSD, in addition to booting as a VM. I'm pursuing this myself... Thanks, John