
My application (Qt/OpenGL) needs to upload, at 25 fps, a bunch of videos from IP cameras, and then process them by applying:

  1. For each video: a demosaic filter, a sharpening filter, a LUT, and distortion correction.
  2. Then I need to render in OpenGL (texture projection, etc.), picking one or more of the frames processed earlier.
  3. Then I need to show the result in some widgets (QGLWidget) and read back the pixels to write them into a movie file.

I am trying to understand the pros and cons of PBOs and FBOs, and I picture the following architecture, which I would like to validate with your help:

  • I create one thread per video to capture frames into a buffer (an array of images); there is one buffer per video.
  • I create an upload-filter-render thread which: a) uploads the frames to the GPU, b) applies the filters on the GPU, c) composites and renders to a texture.
  • I let the GUI thread render the texture created in the previous step into my widget.
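The capture-thread to processing-thread handoff above could be sketched as a bounded, mutex-protected frame queue, one per camera. This is my own illustration (the names `Frame` and `FrameQueue` are hypothetical, not Qt API); it drops the oldest frame when the consumer lags so capture never stalls at 25 fps:

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>
#include <vector>

// Hypothetical per-camera frame container; in practice this would hold
// the raw sensor data plus a timestamp.
struct Frame { std::vector<unsigned char> pixels; };

// One FrameQueue per camera: the capture thread pushes, the
// upload/filter/render thread pops.
class FrameQueue {
public:
    explicit FrameQueue(std::size_t capacity) : capacity_(capacity) {}

    // Capture thread: if the queue is full, discard the oldest frame
    // so the capture loop never blocks.
    void push(Frame f) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (frames_.size() == capacity_)
            frames_.pop_front();
        frames_.push_back(std::move(f));
        cv_.notify_one();
    }

    // Consumer thread: block until a frame is available.
    Frame pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !frames_.empty(); });
        Frame f = std::move(frames_.front());
        frames_.pop_front();
        return f;
    }

private:
    std::size_t capacity_;
    std::deque<Frame> frames_;
    std::mutex mutex_;
    std::condition_variable cv_;
};
```

Whether dropping or blocking is the right overflow policy depends on whether the movie file must contain every captured frame.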

For the upload-frames-to-GPU step, I guess the best way is to use PBOs (maybe two PBOs per video) to upload the frames asynchronously.
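The classic two-PBO "ping-pong" upload looks roughly like the sketch below: while the GPU copies frame N from one PBO into the texture, the CPU fills the other PBO with frame N+1. This is a sketch under assumptions (GLEW for the buffer-object entry points, RGBA8 frames, and the caller-supplied `pixels` pointer), not a drop-in implementation:

```cpp
#include <cstring>
#include <GL/glew.h>  // assumption: GLEW (or Qt's GL function resolver) loads the entry points

struct UploadPbos {
    GLuint pbo[2];
    int index = 0;
};

void initUploadPbos(UploadPbos& u, GLsizeiptr frameSize) {
    glGenBuffers(2, u.pbo);
    for (int i = 0; i < 2; ++i) {
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, u.pbo[i]);
        glBufferData(GL_PIXEL_UNPACK_BUFFER, frameSize, nullptr, GL_STREAM_DRAW);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}

// Call once per captured frame on the upload thread.
void uploadFrame(UploadPbos& u, GLuint tex, const void* pixels,
                 GLsizeiptr frameSize, int width, int height) {
    int next = (u.index + 1) % 2;

    // 1) Kick off the DMA copy from the PBO filled last frame.
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, u.pbo[u.index]);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, nullptr);  // nullptr = offset 0 into the PBO

    // 2) Meanwhile, fill the other PBO with the new frame.
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, u.pbo[next]);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, frameSize, nullptr,
                 GL_STREAM_DRAW);  // orphan the old storage to avoid a sync
    if (void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY)) {
        std::memcpy(dst, pixels, frameSize);
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    u.index = next;
}
```

Note this introduces one frame of latency per video, which is usually acceptable at 25 fps.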

For the apply-filters-on-GPU step, I want to use an FBO, which seems the best way to do render-to-texture. I will first bind the texture uploaded via the PBO, then render the filtered image to another texture. I am not sure whether to use only one FBO and change the input texture binding and the target texture attachment according to which video is being processed, or to use as many FBOs as there are videos.
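Either variant works; a single FBO whose color attachment is swapped per pass is the simpler starting point, and you can switch to one FBO per video later if profiling shows the reattachment costs anything. A minimal sketch of one filter pass, with the shader and fullscreen-quad setup elided (function name `filterPass` is mine):

```cpp
#include <GL/glew.h>  // assumption: GLEW (or Qt's GL function resolver) loads the entry points

// Render srcTex through the current filter shader into dstTex,
// reusing one FBO and swapping only the color attachment.
void filterPass(GLuint fbo, GLuint srcTex, GLuint dstTex, int width, int height) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, dstTex, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return;  // e.g. dstTex has an unsupported format or zero size

    glViewport(0, 0, width, height);
    glBindTexture(GL_TEXTURE_2D, srcTex);  // input: the frame uploaded via the PBO
    // ... bind the demosaic / sharpen / LUT / distortion shader,
    //     draw a fullscreen quad ...

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```

Chaining the four filters means ping-ponging between two intermediate textures with repeated `filterPass` calls, or fusing them into one shader.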

Finally, to show the result in a widget, I use the final texture rendered by the FBO. For writing to a movie file, I use a PBO to copy the pixels back asynchronously from the GPU to the CPU.
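The readback mirrors the upload: two `GL_PIXEL_PACK_BUFFER`s, where `glReadPixels` into one PBO returns immediately and you map the other PBO (filled on the previous frame) to hand the pixels to the encoder. A sketch under the same assumptions as above (`writeToMovie` is a hypothetical encoder call):

```cpp
#include <GL/glew.h>  // assumption: GLEW (or Qt's GL function resolver) loads the entry points

struct ReadbackPbos {
    GLuint pbo[2];
    int index = 0;
};

void initReadbackPbos(ReadbackPbos& r, GLsizeiptr frameSize) {
    glGenBuffers(2, r.pbo);
    for (int i = 0; i < 2; ++i) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, r.pbo[i]);
        glBufferData(GL_PIXEL_PACK_BUFFER, frameSize, nullptr, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

// Call once per rendered frame, with the FBO holding the final image bound.
void readbackFrame(ReadbackPbos& r, GLsizeiptr frameSize, int width, int height) {
    int next = (r.index + 1) % 2;

    // 1) Asynchronously read this frame into one PBO (returns immediately).
    glBindBuffer(GL_PIXEL_PACK_BUFFER, r.pbo[r.index]);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    // 2) Map the PBO filled on the previous frame and encode it.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, r.pbo[next]);
    if (const void* src = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY)) {
        // writeToMovie(src, frameSize);  // hypothetical encoder hook
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    r.index = next;
}
```

As with the upload path, the written movie lags the display by one frame.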

Does it seem correct?

genpfault