Is there any way to find out how much device (GPU) memory the kernel code itself occupies during execution? If I have 512 MB of device memory, how can I know how much is available for allocation? Could the Visual Profiler show such info?
- The kernel _code_, AFAIK, never resides in device data memory. – leftaroundabout Apr 09 '12 at 15:58
- When a C program is executed, that happens in RAM, so I think it's safe to assume that the part of the code that executes on the GPU resides in device memory. I don't mean to be rude, but I would like a more detailed answer. If not in data memory, then where? – amanda Apr 09 '12 at 16:04
- Actually I don't know, but unlike CPUs, GPUs are closer to Harvard architectures than von Neumann ones. – leftaroundabout Apr 09 '12 at 16:19
1 Answer
Program code uses up very little memory. The rest of the CUDA context (local memory, constant memory, printf buffers, heap and stack) uses a lot more. The CUDA runtime API includes the cudaMemGetInfo call, which returns the amount of free memory available to your code. Note that because of fragmentation and page size constraints, you won't be able to allocate every last free byte. The best strategy is to start with the reported free size and iteratively attempt successively smaller allocations until one succeeds.
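A minimal sketch of both ideas, assuming the CUDA runtime API; the 1 MB step size for the shrinking-allocation probe is an arbitrary choice for illustration:

```c++
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Query free and total device memory (in bytes) for the current context.
    size_t free_bytes = 0, total_bytes = 0;
    cudaMemGetInfo(&free_bytes, &total_bytes);
    printf("Free: %zu MB of %zu MB total\n",
           free_bytes >> 20, total_bytes >> 20);

    // Probe for the largest single allocation that actually succeeds:
    // start at the reported free size and shrink until cudaMalloc succeeds.
    const size_t step = size_t(1) << 20;  // 1 MB step, chosen arbitrarily
    size_t request = free_bytes;
    void* ptr = nullptr;
    while (request >= step && cudaMalloc(&ptr, request) != cudaSuccess) {
        cudaGetLastError();  // clear the cudaErrorMemoryAllocation status
        ptr = nullptr;
        request -= step;
    }
    if (ptr != nullptr) {
        printf("Largest successful allocation: %zu MB\n", request >> 20);
        cudaFree(ptr);
    }
    return 0;
}
```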
You can find a fuller explanation of device memory consumption in my answer to an earlier question along similar lines.