
I want to modify an existing OpenGL application to render to a PBO and then read the PBO to generate an encoded video of what would otherwise be rendered to the screen. Since performance is key, I cannot stall the pipeline with a blocking glReadPixels from the back buffer, as I currently do. I am wondering if there is a simple or straightforward way to redirect everything rendered to the framebuffer so that it goes to the PBO instead. In other words, I don't care if it is not shown on the screen. As a matter of fact, I would prefer that nothing be shown on the screen.

cloudraven
  • If you want to do this offscreen, you should consider drawing into an FBO and then transferring the contents of an FBO attachment using a PBO. Of course this is even less *straight-forward*, but it should be the preferred technique for offscreen rendering in modern OpenGL. – Andon M. Coleman Sep 05 '13 at 19:16
  • Why? Wouldn't it be faster to skip the FBO? (I actually don't know. It would be great if you could elaborate on the reasoning behind using an FBO and then a PBO.) – cloudraven Sep 05 '13 at 19:27
  • If you use an FBO, you can do this independent of the pixel format and resolution of your window. FBOs don't have to be tied to a window, so they are great for offscreen rendering. If anything it should be quicker to draw into an FBO and then copy the contents; there are no front/back buffer swaps to worry about, you can use any resolution you want, you can use MSAA without having to re-create your window, etc. – Andon M. Coleman Sep 05 '13 at 19:49
  • Ok, I got the FBO part now. So then wouldn't it be better to read directly from the FBO to memory? Do I need the PBO at all, or would it be enough to just read from the FBO? I guess I can do preprocessing in a PBO (colorspace conversion perhaps), but then again maybe just using the FBO is good enough. – cloudraven Sep 05 '13 at 19:55
  • That all depends on what you're trying to accomplish here. I had assumed that you were using the pixels in some algorithm or for some purpose that could not be implemented on the GPU. Colorspace conversion is something that would be trivial to implement in a pixel shader; you could probably do that without even using an FBO. Encoding of video can even be done on the GPU with the right libraries: H.264 is the most widely GPU-implemented video codec, and this functionality is available for free on every major platform (Windows, Linux/BSD, OS X). – Andon M. Coleman Sep 05 '13 at 20:01
  • In any case, you want to use PBOs when transferring pixel data CPU<-->GPU because they allow for asynchronous transfer of pixel data. Instead of blocking and waiting for `glReadPixels (...)` to completely finish you can do other things while DMA data transfer goes on in the background. If you need more info, I can turn all this into an answer or we can move these comments to chat. – Andon M. Coleman Sep 05 '13 at 20:13
  • Yeah, I am using x264, but that does encoding on the CPU; I may as well use Quick Sync, Windows Media Foundation, or another GPU library. I was thinking of doing color conversion on the GPU with a shader as you said, and then handing the YUV version to x264/WMF to encode. The way I have it now, I am reading directly from the back buffer with glReadPixels, but that part is a significant bottleneck. The idea then is to render to the FBO, do color conversion with a shader, and then copy it to system memory, hopefully without stalling the pipeline the way I am doing now with glReadPixels. – cloudraven Sep 05 '13 at 21:07
  • This was all very helpful. If you want you can turn all that into an answer. I will accept it. – cloudraven Apr 06 '14 at 06:59
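  • The pattern discussed above can be sketched roughly as follows. This is a non-authoritative sketch, not a complete program: it assumes an existing GL context with FBO and PBO support, and the `render_scene()` and `encode_frame()` functions are hypothetical stand-ins for the application's own drawing and x264/WMF hand-off code. Error checking is omitted for brevity. Two pack PBOs are ping-ponged so the readback of frame N can overlap the rendering of frame N+1:

```c
GLuint fbo, color_tex, pbo[2];
int frame = 0;
const int W = 1280, H = 720;   /* capture resolution, independent of the window */

void init_capture(void)
{
    /* Color attachment for offscreen rendering */
    glGenTextures(1, &color_tex);
    glBindTexture(GL_TEXTURE_2D, color_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, W, H, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, color_tex, 0);

    /* Two pack PBOs for double-buffered asynchronous readback */
    glGenBuffers(2, pbo);
    for (int i = 0; i < 2; ++i) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
        glBufferData(GL_PIXEL_PACK_BUFFER, W * H * 4, NULL, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

void capture_frame(void)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    render_scene();   /* hypothetical: the application's existing drawing code */

    /* Start an asynchronous read of this frame into the current PBO.
     * With a pack PBO bound, the last argument is a byte offset into
     * the buffer, not a client pointer, so the call returns immediately. */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[frame % 2]);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE, 0);

    /* Map the PREVIOUS frame's PBO; its DMA transfer has had a full
     * frame to complete, so this mapping should not stall. */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[(frame + 1) % 2]);
    void *pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (pixels) {
        encode_frame(pixels);   /* hypothetical: hand off to x264/WMF */
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    ++frame;
}
```

    This introduces one frame of latency into the capture, which is usually acceptable for video encoding. The shader-based YUV conversion mentioned above would slot in as a fullscreen pass rendered into the FBO before the glReadPixels call.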

0 Answers