I know that VMware ESXi/vSphere allows setting a flag to hide the virtualization environment from the guest. I need to install NVIDIA drivers in a VM, but the drivers refuse to run when they detect virtualization. This does not violate any end-user agreements, so I wonder whether this is now possible with Hyper-V?
-
What is the actual problem you are trying to solve? There is a whole product line of special GPUs with hypervisor integration, like NVIDIA GRID. – John Mahowald Oct 29 '17 at 00:27
-
I am trying to pass through GPUs to VM clients, particularly non-enterprise NVIDIA cards such as the GTX series. It works perfectly fine with KVM on Linux, but Hyper-V stubbornly refuses to pass through consumer NVIDIA cards. – Matt Oct 30 '17 at 03:15
-
@John Mahowald, the problem is that I want to install an NVIDIA video card driver for a consumer card (GTX 980 Ti or 1080 Ti) inside a VM instance. It works perfectly fine in a non-virtualized environment, but NVIDIA blocks the install inside a VM via a software check. Some virtualization platforms allow setting a CPU flag to hide the virtualization environment from driver installers; I am looking for a way to do the same within Hyper-V. It seems this is not possible as of now? – Matt Dec 08 '17 at 04:14
-
@MatthiasWolf, did you ever succeed? If yes, please share some information. – theateist Apr 02 '21 at 03:25
-
@theateist, no, unfortunately not; I gave up on running deep neural network training within a VM instance. – Matt Apr 03 '21 at 00:53
1 Answer
One option is to configure the hypervisor to share the GPU. For Hyper-V: Set up and configure RemoteFX vGPU for Remote Desktop Services. Check in Hyper-V Manager that the appropriate GPUs have "Use this GPU with RemoteFX" selected. This requires a suitable graphics driver on the host. It also has RDS licensing implications that I'm not qualified to comment on.
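For reference, the same configuration can be done from an elevated PowerShell prompt on the host with the Hyper-V module's RemoteFX cmdlets. A minimal sketch; the adapter name filter and the VM name "GpuVM" are placeholders:

    # List the host GPUs Hyper-V considers usable for RemoteFX
    Get-VMRemoteFXPhysicalVideoAdapter

    # Mark an adapter for RemoteFX use (the PowerShell equivalent of ticking
    # "Use this GPU with RemoteFX" in Hyper-V Manager)
    Get-VMRemoteFXPhysicalVideoAdapter -Name "*GTX*" | Enable-VMRemoteFXPhysicalVideoAdapter

    # Attach a RemoteFX 3D video adapter to the guest (with the VM turned off)
    Add-VMRemoteFx3dVideoAdapter -VMName "GpuVM"

Note that the guest then sees a synthetic RemoteFX adapter rather than the physical NVIDIA device, so the consumer NVIDIA driver is never installed inside the VM.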
A different method is to dedicate the device to the guest. Again for Hyper-V: Plan for Deploying Devices using Discrete Device Assignment. This one has its own requirements, including:
Discrete Device Assignment requires server class hardware that is capable of granting the operating system control over configuring the PCIe fabric (Native PCI Express Control). In addition, the PCIe Root Complex has to support "Access Control Services" or ACS which enables Hyper-V to force all PCIe traffic through the I/O MMU.
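For what it's worth, Discrete Device Assignment is driven entirely from PowerShell on the host. A minimal sketch, assuming the GPU's Plug and Play instance ID is already in $instanceId and the VM is named "GpuVM" (both placeholders); the MMIO sizes are the example values from Microsoft's DDA documentation and may need adjusting for a given card:

    # Resolve the device's PCIe location path from its instance ID
    $locationPath = (Get-PnpDeviceProperty -InstanceId $instanceId -KeyName DEVPKEY_Device_LocationPaths).Data[0]

    # Disable the device on the host and dismount it from the host partition
    Disable-PnpDevice -InstanceId $instanceId -Confirm:$false
    Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force

    # GPUs usually need additional MMIO space reserved for the guest
    Set-VM -VMName "GpuVM" -GuestControlledCacheTypes $true -LowMemoryMappedIoSpace 3GB -HighMemoryMappedIoSpace 33280MB

    # Assign the dismounted device to the VM (the VM must be off)
    Add-VMAssignableDevice -LocationPath $locationPath -VMName "GpuVM"

Even with the card assigned this way, the guest still sees the hypervisor-present bit in CPUID, which is exactly what the consumer NVIDIA driver objects to.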
If the driver is telling you that it is refusing to run in a guest despite having direct assignment of the GPU, take it up with the author (NVIDIA).

-
That does not even touch on the question. For what it's worth, I did pass through my GPU; the driver checks the CPU flag. I stated that I am aware NVIDIA does that. I asked whether I can set the CPU flag with Hyper-V as I can with ESXi and other hypervisors. – Matt Dec 09 '17 at 08:34
-
Also... This answer doesn't qualify as an answer by any means... – FreeSoftwareServers Feb 26 '20 at 04:58
-
@FreeSoftwareServers, that is why I did not mark it as an answer. And no, I abandoned the attempt; I now train deep neural networks on a local machine and/or on Google/Amazon GPU instances. – Matt Apr 03 '21 at 00:55