
With the standard OpenCV GpuMat constructor, GPU memory is allocated in the "dedicated" (as opposed to "shared") area, which fills up very quickly.
How can I allocate a GpuMat in "shared" memory instead, if that is possible at all?

I am using OpenCV 4.1 with CUDA 10.1, built by myself on this specific machine following these instructions: https://docs.opencv.org/master/d3/d52/tutorial_windows_install.html
(the only significant difference is that I changed the CMake variable WITH_CUDA from OFF to ON to enable CUDA support).

I need to keep a few stacks of 200 HD images in GPU memory simultaneously.
My video card has 2 GB of dedicated memory and 8 GB of shared memory.

If you go to the definition of the GpuMat class, you can see that the last constructor argument is an Allocator:

    GpuMat(int rows, int cols, int type, GpuMat::Allocator* allocator = GpuMat::defaultAllocator());

The default value of this argument is the return value of the defaultAllocator() function. I suppose that supplying a customized Allocator could solve the problem, but I could not find any reasonable information on how to write one.
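
For illustration, this is roughly what I have in mind (an untested sketch; ManagedAllocator is just a name I made up, and it swaps the default pitched device allocation for cudaMallocManaged, which, as the comments below point out, still cannot go beyond the card's physical memory on Windows/WDDM):

    #include <opencv2/core.hpp>
    #include <opencv2/core/cuda.hpp>
    #include <cuda_runtime.h>

    // Sketch of a custom allocator that requests CUDA managed (unified) memory
    // instead of the pitched device memory used by the default allocator.
    // Note: under Windows/WDDM managed memory is still limited to the physical
    // memory of the card, so this only shows the mechanics of plugging in an
    // allocator, not a confirmed way to use the "shared" area.
    class ManagedAllocator : public cv::cuda::GpuMat::Allocator
    {
    public:
        bool allocate(cv::cuda::GpuMat* mat, int rows, int cols, size_t elemSize) override
        {
            // Rows are packed tightly (no pitch), which GpuMat supports.
            if (cudaMallocManaged(&mat->data, elemSize * cols * rows) != cudaSuccess)
                return false;                      // GpuMat::create() then falls back to the default allocator
            mat->step = elemSize * cols;           // bytes per row
            mat->refcount = static_cast<int*>(cv::fastMalloc(sizeof(int)));
            return true;
        }

        void free(cv::cuda::GpuMat* mat) override
        {
            cudaFree(mat->datastart);
            cv::fastFree(mat->refcount);
        }
    };

    int main()
    {
        ManagedAllocator managedAlloc;

        // Use it for a single matrix...
        cv::cuda::GpuMat img(1080, 1920, CV_8UC3, &managedAlloc);

        // ...or make it the default for every GpuMat created afterwards.
        cv::cuda::GpuMat::setDefaultAllocator(&managedAlloc);
        return 0;
    }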

  • The 8 GB of shared memory you are referring to from the Windows control panel is not usable by/for CUDA, or anything that uses CUDA, such as OpenCV. You are restricted to the 2 GB of physical memory on your GPU (really, less than that on Windows/WDDM) for CUDA or anything that uses CUDA. – Robert Crovella May 02 '19 at 22:24
  • @Robert Crovella, is there any reference for that? I don't really understand the reason. For example, OpenGL can freely use those 8 GB of shared memory. Why can't OpenCV and CUDA? – Oleksii Doronin May 03 '19 at 00:44
  • Regarding a reference: run `cudaMemGetInfo()`. That is the definition of available memory for CUDA, unless you are in a demand-paged managed memory environment, and on Windows WDDM you are not. I wouldn't be able to answer the question of why that is. – Robert Crovella May 03 '19 at 03:00
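
For reference, the `cudaMemGetInfo()` call mentioned in the comment above can be used like this (a minimal standalone sketch) to see what CUDA itself reports as free and total device memory:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        size_t freeBytes = 0, totalBytes = 0;
        // Reports the free and total device memory currently visible to CUDA.
        if (cudaMemGetInfo(&freeBytes, &totalBytes) == cudaSuccess)
            std::printf("free: %zu MB, total: %zu MB\n",
                        freeBytes >> 20, totalBytes >> 20);
        return 0;
    }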

0 Answers