
I am using a combination of Kivy and PyCUDA to do some interactive scientific visualization that leverages NVIDIA GPUs.

I am currently looking at using the GL interop functionality so that, after my CUDA code has modified an array, I can draw that array immediately, without the slow round trip of copying the data from the GPU device to the host CPU and then sending it back to the GPU to be displayed as an OpenGL texture.
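For reference, the slow round trip I am trying to eliminate looks roughly like this (a minimal sketch, with the image size and the actual kernel left as placeholders):

    import numpy as np
    import pycuda.autoinit            # stand-in for however the context is actually set up
    import pycuda.gpuarray as gpuarray
    from kivy.graphics.texture import Texture

    width, height = 512, 512          # placeholder size
    gpu_rgba = gpuarray.zeros((height, width, 4), dtype=np.uint8)
    # ... CUDA kernel writes RGBA8 pixels into gpu_rgba here ...

    host_rgba = gpu_rgba.get()        # device -> host: the slow copy

    # host -> device again, this time as an OpenGL texture owned by Kivy
    # (this runs inside the Kivy app, so a GL context already exists)
    tex = Texture.create(size=(width, height), colorfmt='rgba')
    tex.blit_buffer(host_rgba.tobytes(), colorfmt='rgba', bufferfmt='ubyte')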

In my attempt to do this, I have been reading through the pycuda GL interop examples, such as the Sobel filter and the simpler teapot example. In both examples, pixel buffer objects (PBOs) are used.
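As I understand them, the PBO pattern in those examples boils down to something like this (a paraphrased sketch, not their exact code; width and height are placeholders):

    from OpenGL.GL import (GL_PIXEL_UNPACK_BUFFER, GL_DYNAMIC_DRAW,
                           glGenBuffers, glBindBuffer, glBufferData)
    import pycuda.gl

    # Allocate an empty PBO big enough for an RGBA8 image.
    pbo = glGenBuffers(1)
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo)
    glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * 4, None, GL_DYNAMIC_DRAW)
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0)

    # A buffer object maps to a plain device pointer, so this call is valid here.
    reg = pycuda.gl.RegisteredBuffer(int(pbo))
    mapping = reg.map()
    dev_ptr, size = mapping.device_ptr_and_size()
    # ... launch the CUDA kernel on dev_ptr ...
    mapping.unmap()
    # ... then the PBO is used as the source for glTexSubImage2D to update the texture.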

As far as I understand, Kivy does not currently support or use PBOs, so for simplicity's sake I would prefer to avoid them and just have my CUDA functions operate directly on the OpenGL texture data. Is this possible? Is this a bad idea?

A snippet of my current attempt to register a texture so that it is accessible to CUDA looks like this:

    import pycuda.gl
    from OpenGL.GL import GL_TEXTURE_2D
    # texture_id is the OpenGL id of the Kivy texture I want CUDA to write into
    self.cuda_access = pycuda.gl.RegisteredImage(texture_id, GL_TEXTURE_2D)
    self.mapping_obj = self.cuda_access.map()
    self.data, self.sz = self.mapping_obj.device_ptr_and_size()

...but on that last line, I get the following error:

    pycuda._driver.LogicError: cuGraphicsResourceGetMappedPointer failed: resource not mapped as pointer

...which I am not sure how to pursue.
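My best guess is that a texture registered this way maps to a CUDA array rather than a linear device pointer (which would explain the "not mapped as pointer" message), and that, if I am reading the pycuda source correctly, I should be using the mapping's array(index, level) method plus a 2D copy from a linear buffer that my kernel writes into. Something like this untested sketch, where dev_buffer, width and height are placeholders:

    import pycuda.driver as cuda

    mapping = self.cuda_access.map()
    gl_array = mapping.array(0, 0)         # CUDA array behind mip level 0
    copy = cuda.Memcpy2D()
    copy.set_src_device(dev_buffer)        # linear device memory filled by my kernel
    copy.set_dst_array(gl_array)
    copy.width_in_bytes = copy.src_pitch = width * 4   # RGBA8 rows
    copy.height = height
    copy(aligned=True)
    mapping.unmap()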

Many thanks in advance.
