My quest to set up a perfect VM to replace my native Windows installation continues…
Previously, I selected appropriate components back when I built my PC. More recently, I managed to set up a VM with GPU and NIC passthrough. Now it was time for the disk: I’d like to use a dedicated SSD for my Windows installation. The interesting question is: can I install Windows on this dedicated disk in such a way that I can run it natively as well as in a VM? That would allow me to squeeze out the remaining percentage points of performance if I absolutely have to.
If you choose to follow the outlined procedures, you do so at your own risk. Keep your data safe. Ensure you don’t invalidate licenses.
People have definitely used physical disks for VMs. People have even done this with Windows 7 partitions (!) before, using linear RAID to stitch together a file and the physical disk partitions. That article contains disclaimers worth reading. Parallels on OS X offers this kind of functionality out of the box, and I’ve actually used it in the past. But using a full physical disk should surely make this significantly easier…
I’ll be using the Windows Server 2016 Technical Preview 4 for this; Windows 10 should be very similar. Windows 7 can’t boot without the Unified Extensible Firmware Interface (UEFI) Compatibility Support Module (CSM) set up for the GPU, so I would use a later version than Windows 7. I disabled the UEFI CSM altogether in my UEFI settings when I set up PCI passthrough for my graphics card.
You need to ensure that the VirtIO storage drivers (viostor) are installed in such a way that you can boot Windows from a VirtIO disk. I ended up installing Windows in the VM environment, since this way I was guaranteed to have them.
I used the VM I set up as part of my Windows with GPU passthrough post, except I swapped out the disk to reference my physical HDD (ensure nothing from that disk is mounted!):
sudo virsh edit <name of VM>
Replace the disk section with something like this:
<disk type='block' device='disk'>
  <source dev='/dev/sda'/>
  <target dev='vda' bus='virtio'/>
</disk>
The address and other parameters will be auto-generated.
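Before booting the VM off the raw disk, it’s worth verifying that nothing from it is mounted on the host. A quick check, assuming the disk is /dev/sda as in the config above:

```shell
# Show every partition on the disk together with its mount point;
# the MOUNTPOINT column should be empty for all of them.
lsblk -o NAME,SIZE,MOUNTPOINT /dev/sda
```

If any partition shows a mount point (or is in use as swap), unmount it before starting the VM, or both systems may corrupt the filesystem.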
I also discovered that I could simplify the setup procedure a little by using just one IDE drive: when it’s time to install the drivers, simply disconnect the Windows installation DVD and replace it with the VirtIO driver disk.
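The media swap can also be done from the host while the VM is running, using virsh change-media. A sketch, where the target name (hda) and the ISO path are assumptions that depend on your config:

```shell
# Eject the Windows installation DVD from the IDE drive (target 'hda' here).
sudo virsh change-media <name of VM> hda --eject

# Insert the VirtIO driver ISO into the same drive
# (path to the ISO is an example).
sudo virsh change-media <name of VM> hda /var/lib/libvirt/images/virtio-win.iso --insert
```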
Now is a good time to install Windows. Once you’re done with the reboots, and have shut it down from the GUI by hand, check that you can boot it natively from the HDD.
Aside: Installing Windows natively and manually installing the drivers later resulted in the VM failing to boot with an INACCESSIBLE_BOOT_DEVICE Blue Screen. The reason for this is that the VirtIO drivers are not available early enough in the boot process. This problem is similar to moving a Windows installation from one RAID controller to another, but the suggestion of plugging both in at the same time to do the driver install doesn’t work (unless somebody can figure out how to emulate a VirtIO device in a native (non-VM) Windows environment?). If anyone knows how to add boot disk drivers after the Windows installation, I’d be very interested in reading a comment about that.
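One approach that might work here is offline driver injection: DISM can add boot-critical drivers to an offline Windows image. A sketch, run from a Windows PE or recovery prompt, where the drive letters and driver path are assumptions:

```bat
REM Inject the VirtIO storage driver into the offline Windows installation
REM mounted at C:\ (driver files assumed to be at E:\viostor\w10\amd64).
dism /Image:C:\ /Add-Driver /Driver:E:\viostor\w10\amd64 /Recurse
```

I haven’t tried this route myself, so treat it as a starting point rather than a recipe.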
Update: I have also encountered the INACCESSIBLE_BOOT_DEVICE Blue Screen when switching from a QEMU i440fx config to a new VM config using the q35 machine type. Updating the PCI config (bus, slot, function) of the disk to match the old i440fx config seemed to resolve the issue. I wonder if Windows identifies the boot device using those? But then the PCI location is unlikely to match my native setup, so I’m not sure why switching between native and VM works, while switching between VM configs doesn’t.
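For reference, the PCI location lives in the auto-generated address element of the disk; pinning it in the new config to match the old one looks something like this (the bus/slot/function values shown are placeholders, to be copied from your old i440fx config):

```xml
<disk type='block' device='disk'>
  <source dev='/dev/sda'/>
  <target dev='vda' bus='virtio'/>
  <!-- Copied from the old config; these exact values are examples. -->
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
```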
Here are some CrystalDiskMark benchmarks on an old 7200RPM spinny disk, comparing read/write speeds between the installation running natively and the same installation running in a VM.
In these screenshots, it looks like the VM benchmarks actually outperform the native installation in every metric, which is somewhat suspicious. Perhaps my drive is getting old, or I need to install some drivers to improve the native performance. I should also test this with an SSD when I get around to doing this for my “production” setup. Either way – the current data suggests that the performance hit from using a VM should not be too bad.
Setting up the tech for passing a physical disk through to a libvirt VM is really not too difficult – just edit the XML file, reference your existing disk, and start the VM. Care must be taken with the Windows installation to ensure that the virtual disk drivers are installed in a way that makes them usable early in the boot sequence. While the preliminary benchmarks are somewhat suspicious, at least they don’t show a massive performance hit.