Is it possible to run operations, say c = a + b, without copying the variables to the GPU? That is, can a kernel take a reference to host memory directly instead of doing a memcpy to the device? For a GPU with low memory, it would be ideal to use host memory, or maybe even the hard disk. Is there any way to do so?
As of CUDA 6.0, pinned memory is no longer the only option. This should not be marked as a duplicate question, due to the release of unified memory in CUDA 6. – Christian Sarofeen Jan 08 '15 at 18:12
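For reference, here is a minimal sketch (an illustration, not code from the question) of the unified-memory path this comment refers to: `cudaMallocManaged` returns a single pointer usable from both host and device. As the next comment points out, the data is still migrated to the device when the kernel accesses it.

```cuda
// Unified memory sketch: one allocation, no explicit cudaMemcpy,
// but pages are still migrated to the device on kernel access.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    float *a, *b, *c;
    cudaMallocManaged((void **)&a, n * sizeof(float));
    cudaMallocManaged((void **)&b, n * sizeof(float));
    cudaMallocManaged((void **)&c, n * sizeof(float));

    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    add<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();        // wait before touching c on the host

    printf("c[0] = %f\n", c[0]);    // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```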
Unified memory still copies the data to the GPU. The question explicitly states "without copying the variables to GPU." For a GPU with low memory, unified memory cannot help. Pinned memory can. – Robert Crovella Jan 09 '15 at 14:32
But again, I ran some tests and pinned memory did not let me use any more variables. – Roshan Jan 10 '15 at 04:22
I think your tests are likely flawed in some way, then. You might want to post a question if you are having trouble using pinned memory, and describe those tests. This question is about whether it is possible. Yes, it is possible. And unified memory *will not* allow you to use more memory than what is on-board on your GPU (unlike pinned memory, your tests notwithstanding). You may also be interested in the [simple zero-copy](http://docs.nvidia.com/cuda/cuda-samples/index.html#simplezerocopy) CUDA sample code. – Robert Crovella Jan 10 '15 at 19:22
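For reference, a minimal sketch along the lines of that simpleZeroCopy sample (an illustration, not the asker's code): the arrays live in mapped pinned host memory and the kernel reads and writes them in place over PCIe, with no `cudaMemcpy` and no device-side allocation for the data.

```cuda
// Zero-copy sketch: kernel accesses pinned, mapped host memory directly.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    cudaSetDeviceFlags(cudaDeviceMapHost);   // must be set before the context is created

    float *h_a, *h_b, *h_c;                  // pinned, mapped host allocations
    cudaHostAlloc((void **)&h_a, n * sizeof(float), cudaHostAllocMapped);
    cudaHostAlloc((void **)&h_b, n * sizeof(float), cudaHostAllocMapped);
    cudaHostAlloc((void **)&h_c, n * sizeof(float), cudaHostAllocMapped);

    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    float *d_a, *d_b, *d_c;                  // device-visible aliases of the same host memory
    cudaHostGetDevicePointer((void **)&d_a, h_a, 0);
    cudaHostGetDevicePointer((void **)&d_b, h_b, 0);
    cudaHostGetDevicePointer((void **)&d_c, h_c, 0);

    add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
    cudaDeviceSynchronize();

    printf("h_c[0] = %f\n", h_c[0]);         // expect 3.0, written straight into host memory
    cudaFreeHost(h_a); cudaFreeHost(h_b); cudaFreeHost(h_c);
    return 0;
}
```

Note that accesses go over the PCIe bus, so this trades device memory capacity for bandwidth; it suits data that is read or written roughly once per kernel launch.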
http://stackoverflow.com/questions/26969534/cuda-pinned-memory-zero-copy-problems The code I tested was a simple modified version of this. I used structures to see how much the number of supported structures changed, and the result was quite the same. I have 8 GB of RAM and a 1 GB GPU, so I expected different results. – Roshan Jan 12 '15 at 02:04