
I have an AVFrame from FFmpeg in YUV format. I would like to render it using a Qt class derived from QOpenGLWidget and QOpenGLFunctions.

I'm a beginner with Qt and OpenGL.

Can someone help out with this?

Thanks Aswin

2 Answers


Well, actually, if you need to implement a really fast-rendering player, you'll have to deal with buffer optimizations, off-screen rendering, buffer streaming, or some combination of those. But since you are new to Qt, there are simple yet working solutions:

  1. Try the AV_PIX_FMT_RGBA pixel format; rendering a plain RGBA texture onto a drawing-surface rectangle is simple enough.
  2. When I wanted to try the same myself, I found this awesome guy, who implemented a full working example here.
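For option 1, the conversion to RGBA can be done with libswscale before the texture upload. A minimal sketch (error handling and context caching omitted; `frame` is assumed to be the decoded AVFrame):

```cpp
extern "C" {
#include <libavutil/frame.h>
#include <libswscale/swscale.h>
}

// Convert a decoded YUV frame to RGBA so it can be uploaded as a
// single GL_RGBA texture. In real code, cache the SwsContext and
// the destination frame between calls instead of reallocating.
AVFrame* toRgba(const AVFrame* frame)
{
    SwsContext* ctx = sws_getContext(
        frame->width, frame->height, static_cast<AVPixelFormat>(frame->format),
        frame->width, frame->height, AV_PIX_FMT_RGBA,
        SWS_BILINEAR, nullptr, nullptr, nullptr);

    AVFrame* rgba = av_frame_alloc();
    rgba->format = AV_PIX_FMT_RGBA;
    rgba->width  = frame->width;
    rgba->height = frame->height;
    av_frame_get_buffer(rgba, 0);

    sws_scale(ctx, frame->data, frame->linesize, 0, frame->height,
              rgba->data, rgba->linesize);
    sws_freeContext(ctx);
    return rgba;  // caller frees with av_frame_free()
}
```

The result can then be uploaded with a single glTexImage2D(..., GL_RGBA, ...) call, at the cost of a per-frame CPU conversion.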

The QFFmpegGLWidget class in the link above is sufficient to get the idea. Conversion to RGB is done in a fragment shader; the good old trick of uploading the three planes as separate GL_LUMINANCE textures works there.
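That trick boils down to a fragment shader along these lines (BT.601 coefficients; the uniform and varying names here are illustrative, not taken from the linked example):

```glsl
// Each of the three GL_LUMINANCE textures holds one plane of the
// YUV420p frame; sampling .r gives the plane value in [0, 1].
uniform sampler2D texY;
uniform sampler2D texU;
uniform sampler2D texV;
varying vec2 texCoord;

void main()
{
    float y = texture2D(texY, texCoord).r;
    float u = texture2D(texU, texCoord).r - 0.5;  // Cb, re-centered
    float v = texture2D(texV, texCoord).r - 0.5;  // Cr, re-centered
    gl_FragColor = vec4(y + 1.402 * v,
                        y - 0.344 * u - 0.714 * v,
                        y + 1.772 * u,
                        1.0);
}
```

Because the U and V planes are half the size of the Y plane in YUV420p, they are uploaded as smaller textures and the GPU's texture filtering handles the upsampling for free.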

I struggled with my own, almost identical solution, yet got the picture from an RTSP camera with wrong, messed-up colors. So make sure you are getting frames in YUV420p. In case another pixel format suits you better, check these Chinese resources to dig deeper (Google Translate makes them readable):

  1. YUV shader conversions
  2. YUV 2 RGB math

Good luck!

MasterAler
  • Did you mean that using QFFmpegGLWidget is not a really fast-rendering player? I thought it was the most optimized way to do it. Is there a better way? – PPP Feb 15 '19 at 04:49
  • @LucasZanella On the contrary, I meant that for most common cases it's the best way, sufficient by far. AFAIK, if someone's going to mess with additional, even _heavy_ optimizations, he could try using hardware acceleration (via a fresh FFmpeg, of course, pre-built manually, for example) and it could produce frames in some format other than YUV (e.g. NV21, not much different though, or other). Also, some sophisticated "buffer streaming" techniques are possible to improve OpenGL performance even more -- I've not tried to implement them myself, but they are mentioned even in OpenGL manuals. – MasterAler Feb 15 '19 at 23:04
  • @MasterAler Hey, thanks for this answer. It seems that QFFmpegVideoDecoder.cpp:218 calls sws_scale to first convert the AVFrame to YUV420P and then uses a shader for further conversion. Is there a way to skip sws_scale and do everything directly in the shader? – Bo Li Nov 06 '20 at 03:50
  • @BoLi Yepp, definitely. Plz check my humble sketch: https://github.com/MasterAler/SampleYUVRenderer This renders YUV (a not-so-obvious shader is required), the most common output of the decoder, but if you need an RGB renderer, I can add it as well (it's quite a simple shader, though). – MasterAler Nov 06 '20 at 22:34

For these cases, I wrote a demo using a custom QQuickItem. YUV data is uploaded to the GPU and converted with an OpenGL shader. This is the repo with the code: https://github.com/carlonluca/qtyuv.

Luca Carlon