I wrote a CUDA program and I am testing it on Ubuntu running in a virtual machine. The reason is that I have Windows 7, I don't want to install Ubuntu as a secondary operating system, and I need a Linux operating system for testing. My question is: will the virtual machine limit the GPU resources? In other words, will my CUDA code run faster under my primary operating system than in a virtual machine?
-
I think you'd be lucky if it worked at all under a VM. You can do CUDA development on Windows 7. If for some reason that is an issue, and you're developing rather than running in a production environment, why not use the emulation feature? – dangerstat Feb 14 '10 at 11:44
-
I need accurate speed results for my CUDA algorithm. It is already working under the VM, but I am not sure whether I am using the full GPU resources (because of the VM). – scatman Feb 14 '10 at 14:42
-
In the VM it should be running in emulation mode; run deviceQuery (from the SDK) or call cudaGetDeviceProperties to check. – Tom Feb 16 '10 at 06:52
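The check Tom describes can be sketched as a small host-side program (this is an illustrative sketch, assuming the CUDA toolkit is installed; compile with `nvcc check.cu -o check`):

```cuda
// Enumerate CUDA devices and print their properties, so you can see
// whether you are talking to real hardware or an emulated device.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        printf("No CUDA-capable device found (%s)\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // In the old device-emulation mode (removed in CUDA 3.1) the
        // device reports compute capability 9999.9999 rather than a
        // real value such as 1.3 or 2.0.
        printf("Device %d: %s, compute capability %d.%d\n",
               i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```

If you see a real device name and a plausible compute capability, you are on the actual GPU; a 9999.9999 capability means emulation and the timings are meaningless.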
-
Why not just boot from an Ubuntu Live CD for Linux testing purposes ? – Paul R Feb 14 '10 at 12:20
-
If I boot Ubuntu from a live CD, I need to install the CUDA toolkit after every restart! – scatman Feb 14 '10 at 14:38
-
Buy a 4 GB USB stick and install live Ubuntu onto it = $15 – Martin Beckett Feb 15 '10 at 17:59
-
@Martin Exactly - it's not rocket surgery... – Paul R Feb 15 '10 at 19:52
5 Answers
I faced a similar task once. What I ended up doing was installing Ubuntu on an 8 GB thumb drive with persistent mode enabled.
That gave me 4 GB to install CUDA and everything else I needed.
Having a bootable USB stick around can be very useful. I recommend reading this.
Also, this link has some very interesting material if you're looking for other distros.

Unfortunately the virtual machine simulates a graphics device and as such you won't have access to the real GPU. This is because of the way the virtualisation handles multiple VMs accessing the same device - it provides a layer in between to share the real device.
It is possible to get true access to the hardware, but only if you have the right combination of software and hardware, see the SLI Multi-OS site for details.
So you're probably out of luck with the virtualisation route - if you really can't run your app in Windows then you're limited to the following:
- Unrealistic: Install Linux instead
- Unrealistic: Install Linux alongside (not an option)
- Boot into a live CD; you could prepare a disk image with CUDA and mount the image each time
- Set up (or beg/borrow) a separate box with Linux and access it remotely
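The live-CD option above can be made bearable with a persistent image. A rough sketch (paths and sizes are illustrative, not a tested recipe):

```shell
# One-time setup: create a sparse 4 GB image and format it as ext4,
# then install the CUDA toolkit into it from a booted live session.
truncate -s 4G cuda-toolkit.img
mkfs.ext4 -F -q cuda-toolkit.img

# On each live-CD boot (needs root), loop-mount the image and put the
# toolkit on the path instead of reinstalling it:
#   sudo mkdir -p /mnt/cuda
#   sudo mount -o loop cuda-toolkit.img /mnt/cuda
#   export PATH=/mnt/cuda/bin:$PATH
#   export LD_LIBRARY_PATH=/mnt/cuda/lib64:$LD_LIBRARY_PATH
echo "image ready"
```

Keep the image on a USB stick or a Windows partition the live session can read, so it survives reboots.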

I just heard a talk at NVIDIA's GPU Technology Conference by a researcher named Xiaohui Cui (Oak Ridge National Laboratory). Among other things, he described accessing GPUs from virtual machines using something called gVirtuS. He did not create gVirtuS, but described it as an open-source "virtual CUDA" driver. See the following link: http://osl.uniparthenope.it/projects/gvirtus/
I have not tried gVirtuS, but it sounds like it might do what you want.

-
As of Nov 30, 2012, in both the current stable release (v3.2) and the beta release of the new rCUDA 4, the Windows and Linux distributions are not interoperable. In addition, the 32- and 64-bit versions are not yet interoperable. This means that a Linux VM running on a Windows host cannot use rCUDA. – vo1stv Jan 17 '13 at 19:13
-
rCUDA doesn't seem to be open source (the website has a software request form) and doesn't appear to support all Linux distributions, such as Arch Linux. – simonzack Nov 20 '15 at 09:49
As of CUDA 3.1, its virtualization capabilities are limited, so the only practical approach is to run CUDA programs directly on the target hardware and software stack.
