Setting up a Windows VM to use a graphics card via PCI-passthrough can yield great performance benefits for the VM. However, one loses the flexibility of easily moving the VM’s video output from one monitor to another; in the past, one could just drag the VM’s VirtualBox window from one screen to another and maybe hit the fullscreen button. Now, the monitor that’s being used by a VM is dictated by the cabling between the graphics card and the monitor(s). How can one now use the same monitor at times for Linux, and at other times for the Windows VM? I will discuss two methods for achieving this, and then finish off with some forward-looking closing notes.
Using several connections to one monitor, enabling/disabling the monitor
One approach is to connect your preferred monitor to both graphics cards:
Now, when using Linux, only graphics card 1 will be used, and only the Windows VM will use graphics card 0. When the Windows VM is started, Monitor 0 can be disabled manually using a command-line utility such as xrandr, and we rely on the monitor’s automatic input detection to flip over to the Windows VM.
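As a sketch, the disabling step could look like the following. The output name DP-1 is an assumption — list your actual output names with `xrandr --query`:

```shell
# Build the xrandr commands that toggle a monitor; "DP-1" is a placeholder
# output name - check `xrandr --query` for the real one on your system.
output="DP-1"
off_cmd="xrandr --output $output --off"    # run this before starting the VM
on_cmd="xrandr --output $output --auto"    # run this to bring the screen back
echo "$off_cmd"
```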
Pros:

- Simple to implement
- Easier multi-screen X11 configuration
- Disabling a monitor should be possible while X11 is running

Cons:

- Need to press buttons on monitors without automatic input detection (my otherwise great 1440p Dell monitor doesn’t have that feature!)
- If you also want to occasionally use your preferred monitor as a secondary display for a laptop, you quickly run out of useful ports that support the full resolution. You probably don’t want to use the VGA ports.
- You lose the ability to use your (most likely better) graphics card in Linux.
  - This could mean you can’t use all your monitors, depending on the total resolution, number of video outputs, etc.
  - Playing Linux games might no longer be possible, or they might suffer from performance issues
Enabling and disabling graphics card in software
Alternatively, we can hook up one graphics card to each monitor:
We can then use both graphics cards in Linux, and when we want to use the Windows VM, disable one card (the one that will be used via PCI-passthrough). However, we need to be a bit careful and ensure that nothing in Linux is using the card that is to be passed through.
Needless to say, this involves some exploring and pushing the envelope, as turning graphics cards on and off isn’t exactly a stock feature on desktop PCs!
Aside 1: Yes, yes, laptops have some support for disabling GPUs thanks to GPU switching technologies – nvidia Optimus/ATI Hybrid Graphics. I don’t expect many laptop CPUs to support VT-d/IOMMU, though. If you somehow manage to make laptop GPU switching tech work together with a graphics card being PCI-passthrough’d to a VM, hats off to you – and please leave a comment!
Aside 2: This setup works just fine when running Windows natively (not in a VM) – using two graphics cards does not seem to be an issue there.
Using the VFIO-PCI driver to enable and disable a graphics card
The actual enabling and disabling of the graphics card can be implemented using a script, which will basically tell whichever driver is currently using the graphics card to stop managing it, and then bind the graphics card to the VFIO-PCI driver. To undo, we can tell the VFIO-PCI driver to stop managing the graphics card, and trigger a PCI bus rescan to let the original driver manage the card again. I use the following script to implement this:
```shell
#!/bin/bash
#lspci -k ...
if [[ $1 == "stop" ]]; then
    echo "stopping nvidia"
    # stop nvidia/nouveau driver
    echo 1 > /sys/bus/pci/devices/0000:01:00.1/remove
    echo 1 > /sys/bus/pci/devices/0000:01:00.0/remove
    # see https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-bus-pci
    # it might be possible to use .../bind with bus IDs instead
    echo 10de 1187 | tee /sys/bus/pci/drivers/vfio-pci/new_id
    echo 10de 0e0a | tee /sys/bus/pci/drivers/vfio-pci/new_id
    echo "1" > /sys/bus/pci/rescan
fi

# start vm
if [[ $1 == "start" ]]; then
    echo "starting nvidia/nouveau"
    echo 10de 1187 | tee /sys/bus/pci/drivers/vfio-pci/remove_id
    echo 10de 0e0a | tee /sys/bus/pci/drivers/vfio-pci/remove_id
    # let nvidia take over again
    echo 1 > /sys/bus/pci/devices/0000:01:00.1/remove
    echo 1 > /sys/bus/pci/devices/0000:01:00.0/remove
    echo "1" > /sys/bus/pci/rescan
fi
```
You can use lspci -nn to help you find the magic values:
```
01:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 760] [10de:1187] (rev a1)
01:00.1 Audio device: NVIDIA Corporation GK104 HDMI Audio Controller [10de:0e0a] (rev a1)
```
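If you’d rather not copy the IDs by hand, a small sed filter can pull the vendor/device pair out of `lspci -nn` output in the space-separated form that new_id/remove_id expect — a sketch:

```shell
# Print the [vendor:device] pair from an `lspci -nn` line as "vendor device",
# the space-separated format the vfio-pci new_id/remove_id files expect.
pci_ids() {
  sed -n 's/.*\[\([0-9a-f]\{4\}\):\([0-9a-f]\{4\}\)\].*/\1 \2/p'
}

# Example with the VGA line from above; in practice: lspci -nn -s 01:00.0 | pci_ids
line='01:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 760] [10de:1187] (rev a1)'
echo "$line" | pci_ids
```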
Note that, unlike many passthrough setups, I do not bind the graphics card to the vfio-pci driver at boot using /etc/modprobe.d/vfio.conf — the whole point here is that Linux can use the card until the VM needs it.
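To sanity-check that the script actually moved the card, you can read the device’s driver symlink in sysfs. A sketch (the second parameter is an addition of mine, there only so the function can be exercised without real hardware):

```shell
# Report the kernel driver currently bound to a PCI device, or "none".
# $1 = device address (e.g. 0000:01:00.0)
# $2 = sysfs root, defaulting to the real path (overridable for testing)
bound_driver() {
  local dev="$1" root="${2:-/sys/bus/pci/devices}"
  if [ -e "$root/$dev/driver" ]; then
    basename "$(readlink -f "$root/$dev/driver")"
  else
    echo none
  fi
}

# After ./gpu.sh stop, this should print "vfio-pci":
# bound_driver 0000:01:00.0
```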
Controlling graphics cards’ responsibilities in Linux
Next, we need to ensure that no part of Linux uses the graphics card when we want to switch it off. Broadly speaking, this means:
- No kernel messages
- No graphical login
- No X11 (desktop environments etc)
.. on the graphics card.
I boot into a terminal prompt and then use startx to launch whichever desktop environment I feel like using. If you use kdm/gdm or any other graphical login UI, you may need to take extra steps to ensure that it (and/or X11) stops using the graphics card you want to prepare for passthrough.
Up until about November 2015, this worked pretty much out of the box with the proprietary nvidia driver; the monitor connected to my on-board Intel i915-driven GPU would display the Linux login prompt, I could run “./gpu.sh stop”, “startx” and then use virt-manager to launch my VM onto the second screen.
However, some driver change then resulted in the nvidia driver always using my nvidia graphics card as the primary graphics card, regardless of which graphics card I selected as primary in my UEFI firmware. As a result, the Linux kernel logs would end up being emitted by the nvidia driver, and trying to tell the kernel to stop using that graphics card so that a VM could take it over wasn’t going to work. To work around this, I switched to the nouveau driver, and added the “fbcon=map:1” parameter to my kernel command line to move the console to the on-board graphics card. [Note: I should probably find a Kernel Mode Setting (KMS) DRM (Direct Rendering Manager) equivalent]
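For reference, on a GRUB-based system the parameter goes into /etc/default/grub (the GRUB paths and commands are assumptions; adjust for your distribution and bootloader):

```shell
# /etc/default/grub -- move the framebuffer console to the second GPU
GRUB_CMDLINE_LINUX_DEFAULT="quiet fbcon=map:1"

# then regenerate the GRUB config, e.g.:
# grub-mkconfig -o /boot/grub/grub.cfg
```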
I’m curious to see whether the recent efforts nvidia has been putting into adding KMS support will make any difference.
You will probably want two X11 config files, one for a multi-monitor setup and one for a single-monitor setup. Once you’ve logged in, just put the one representing your desired setup into /etc/X11/xorg.conf.d before running startx.
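A small helper can do the copying. The directory and file names below are assumptions, and the extra parameters exist only to make the sketch testable:

```shell
# Copy the X11 config snippet for the chosen layout into place.
# $1 = layout name ("single" or "multi")
# $2/$3 = source/target dirs, defaulting to assumed locations
# (assumed naming scheme: ~/xorg-configs/<layout>-monitor.conf)
use_layout() {
  local src="${2:-$HOME/xorg-configs}" dst="${3:-/etc/X11/xorg.conf.d}"
  cp "$src/$1-monitor.conf" "$dst/10-monitors.conf"
}

# usage before running startx:  use_layout single
```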
Intel i915 note: The way I read the error messages, the lack of VGA arbitration support means that I cannot let X11 use my on-board GPU and my nvidia graphics card at the same time (“VGA arbiter: cannot open kernel arbiter, no multi-card support”) :(. This leaves me with two options:
- Try the VGA arbitration patch
- Just use one monitor for the X11 GUI, and leave the other to the text-based login. Interestingly enough, this means I can use CTRL+ALT+F2 etc. to switch to other terminals while leaving the GUI running, so my second monitor is not entirely useless.
I don’t want to use the VGA arbitration patch, so for now I’ll just use one monitor for Linux. Here’s a starting point for your X11 single graphics card config:
```
Section "Device"
    Identifier "Card0"
    Driver     "nouveau"
    BusID      "PCI:1:0:0"
#    Option     "Ignore" "true"
EndSection

Section "Device"
    Identifier "Intel Graphics"
    Driver     "intel"
    BusID      "PCI:0:2:0"
EndSection

Section "Screen"
    Identifier "Screen0"
#    Device     "Intel Graphics"
    Device     "Card0"
    Monitor    "Monitor0"
    SubSection "Display"
        Viewport 0 0
        Depth    1
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth    4
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth    8
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth    15
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth    16
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth    24
    EndSubSection
EndSection
```
And here’s my attempt at a multi-graphics card config using Xinerama (see Multi head Arch Linux wiki article) – not fully tested due to VGA arbitration issue.
```
Section "ServerLayout"
    Identifier "Main"
    Screen 0 "Big"
    Screen 1 "Small" RightOf "Big"
    Option "Xinerama" "1" # enable xinerama
EndSection

Section "Monitor"
    Identifier "Dell"
EndSection

Section "Monitor"
    Identifier "Samsung"
EndSection

Section "Device"
    Identifier "Card0"
    Driver     "nouveau"
    BusID      "PCI:1:0:0"
#    Option     "Ignore" "true"
EndSection

Section "Device"
    Identifier "Intel Graphics"
    Driver     "intel"
    BusID      "PCI:0:2:0"
EndSection

Section "Screen"
    Identifier "Small"
    Device     "Intel Graphics"
    Monitor    "Samsung"
    SubSection "Display"
        Viewport 0 0
        Depth    1
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth    4
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth    8
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth    15
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth    16
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth    24
    EndSubSection
EndSection

Section "Screen"
    Identifier "Big"
    Device     "Card0"
    Monitor    "Dell"
    DefaultDepth 24
    SubSection "Display"
        Viewport 0 0
        Depth    24
    EndSubSection
EndSection
```
Putting it all together
I can now use/script the following workflow with this setup:
- Log into Linux (text based prompt)
- Select monitor setup by copying the appropriate X11 config into /etc/X11/xorg.conf.d.
- Run startx to launch Linux GUI
- To use my VM, log out of the X11 desktop environment to go back to the terminal prompt
- Select the other monitor setup by copying the appropriate X11 config into /etc/X11/xorg.conf.d.
- Stop the GPU by running ./gpu.sh stop
- Launch the VM
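The VM-launching half of this workflow can itself be scripted; here is a sketch. The config file name and the libvirt domain name "win10" are placeholders, and DRY_RUN=1 (the default here) only prints each step instead of executing it:

```shell
#!/bin/bash
# Sketch of a start-vm wrapper for the workflow above. Defaults to a dry
# run that just prints each step; set DRY_RUN=0 to execute for real.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

run cp "$HOME/xorg-configs/single-monitor.conf" /etc/X11/xorg.conf.d/10-monitors.conf
run ./gpu.sh stop
run virsh start win10   # "win10" is a placeholder domain name
```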
I think it’s actually quite amazing that this works!
While it does address the limitations of the multi-cable, auto-input-detect setup, this approach has its own pros and cons:

Pros:

- Can use both graphics cards in Linux!
- Only needs one connection per monitor (no need to unplug and move cables to use it as an external display for a laptop)

Cons:

- Need to log out of the desktop environment before starting the VM.
- Intel i915: VGA arbitration issue
Conclusions & closing thoughts
In this blog post, I’ve shown how we can avoid reboots and still use one graphics card at times for Linux, and at other times for a Windows VM via PCI-passthrough.
I’m very much looking forward to improvements to the proprietary nvidia driver. If they manage to address the issue I hit, I can play games on Linux (ones that don’t currently work well with nouveau), use CUDA, and, when necessary, use my Windows VM. This thread suggests that nvidia is working on KMS DRM support and fixing the “age-old Linux libGL.so collision problems”, both of which I’m looking forward to.
It would also be nice if xrandr allowed completely removing a graphics card from X11’s use. Perhaps this is already possible and I just haven’t figured out how to do it? Then I could just add some xrandr commands to my gpu.sh script and remove the need to log out of the graphical environment before turning on my VM.
Please do leave a comment if you have any thoughts, and/or if you found this blog post helpful!