
I implemented an OpenGL ES application running on a Mali-400 GPU. I grab a 1280x960 RGB buffer from the camera and render it on the GPU using glTexImage2D.

However, the glTexImage2D call takes around 25 milliseconds for a 1280x960 frame. It performs an extra memcpy of pCameraBuffer.

1) Is there any way to improve the performance of glTexImage2D? 2) Will an FBO help? How can I use Frame Buffer Objects to render? I found a few FBO examples, but they pass NULL to glTexImage2D as the last argument (data), so how can I render pCameraBuffer with an FBO?

Below is the code that runs for each camera frame.

glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCENE_WIDTH, SCENE_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, pCameraBuffer);

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDeleteTextures(1, &textureID);
  • [`glTexImage2D`](https://www.khronos.org/registry/OpenGL-Refpages/es3.0/html/glTexImage2D.xhtml) creates a new texture image. [`glTexSubImage2D`](https://www.khronos.org/registry/OpenGL-Refpages/es3.0/html/glTexSubImage2D.xhtml) updates the data of an existing texture image and is much faster. Create the texture image once with `glTexImage2D`, but use `glTexSubImage2D` to change its content (see the sketch after these comments). – Rabbid76 Jul 02 '19 at 05:13
  • What operating system are you running on? Many platforms allow direct import and use of camera buffers, so you avoid the need to allocate new memory and the copy to populate it, but the mechanism here is OS specific. – solidpixel Jul 02 '19 at 16:53
  • @solidpixel I am running on Linux, on a Xilinx Zynq MPSoC with a Mali-400 GPU. – Sami Jul 03 '19 at 03:16
  • @Rabbid76 I tried that, but the performance is the same. – Sami Jul 03 '19 at 03:17
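
A minimal sketch of the create-once / update-per-frame pattern suggested in the first comment above. SCENE_WIDTH, SCENE_HEIGHT and pCameraBuffer are taken from the question; splitting the code into a one-time init part and a per-frame part is an assumption about how the surrounding application is structured:

// One-time initialisation: allocate the texture storage once, with no data yet.
GLuint textureID;
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCENE_WIDTH, SCENE_HEIGHT, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);

// Per frame: only overwrite the contents of the existing texture, then draw.
glBindTexture(GL_TEXTURE_2D, textureID);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, SCENE_WIDTH, SCENE_HEIGHT,
                GL_RGB, GL_UNSIGNED_BYTE, pCameraBuffer);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

This avoids re-allocating and re-validating the texture on every frame, but the CPU-side copy of pCameraBuffer into the texture still happens, which is why the answer below discusses importing the camera buffer directly.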

1 Answer


The usual approach to this type of thing is to try to import the camera buffer directly into the graphics driver, avoiding the need for any memory allocation or copy at all. Whether this is supported depends a lot on the platform integration and on the capabilities of the drivers in the system.

For Linux systems, which is what you indicate you are using, the route is the EGL_EXT_image_dma_buf_import extension. You need a camera driver that creates a surface backed by dma_buf-managed memory, and a side channel to get the dma_buf file handle into the application running the graphics operations. You can then turn this into an EGLImage using the extension above.
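
A hedged sketch of that import path, assuming a GLES 2.0 context, a camera driver that hands you a dma_buf file descriptor per frame, and a tightly packed 24-bit RGB frame; dmabufFd, strideBytes and DRM_FORMAT_RGB888 are illustrative assumptions, and the drm_fourcc.h include path varies between systems:

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <drm/drm_fourcc.h>   /* or <libdrm/drm_fourcc.h>, depending on the system */

/* Extension entry points, resolved once at start-up. */
static PFNEGLCREATEIMAGEKHRPROC peglCreateImageKHR;
static PFNGLEGLIMAGETARGETTEXTURE2DOESPROC pglEGLImageTargetTexture2DOES;

static void init_extensions(void)
{
    peglCreateImageKHR = (PFNEGLCREATEIMAGEKHRPROC)
        eglGetProcAddress("eglCreateImageKHR");
    pglEGLImageTargetTexture2DOES = (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
        eglGetProcAddress("glEGLImageTargetTexture2DOES");
}

/* Wrap a camera dma_buf in an EGLImage and bind it to a GLES texture. */
static GLuint import_camera_dmabuf(EGLDisplay dpy, int dmabufFd,
                                   int width, int height, int strideBytes)
{
    const EGLint attrs[] = {
        EGL_WIDTH,                     width,
        EGL_HEIGHT,                    height,
        EGL_LINUX_DRM_FOURCC_EXT,      DRM_FORMAT_RGB888,  /* must match the real camera format */
        EGL_DMA_BUF_PLANE0_FD_EXT,     dmabufFd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
        EGL_DMA_BUF_PLANE0_PITCH_EXT,  strideBytes,
        EGL_NONE
    };

    EGLImageKHR image = peglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                                           EGL_LINUX_DMA_BUF_EXT, NULL, attrs);

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* GL_OES_EGL_image: the texture samples the dma_buf memory directly,
       so there is no glTexImage2D upload and no per-frame copy. */
    pglEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)image);
    return tex;
}

Because the texture aliases the camera memory, the driver must support importing that fourcc, and synchronisation between the camera writing a frame and the GPU sampling it is left to the platform-specific side channel mentioned above.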

solidpixel
  • I am using a Basler daA1280-54uc camera. This camera comes with the Pylon software and provides an API to retrieve the RGB frame into a buffer. Can I route that buffer to the EGL_EXT_image_dma_buf_import extension? Note: this camera doesn't create any /dev/video0 node, but it is possible to grab the frame using the Pylon software provided by the camera. – Sami Jul 05 '19 at 04:24
  • Unlikely - sounds like a proprietary layer above the driver layer. – solidpixel Jul 05 '19 at 19:02