
I've got a YUV420 pixel buffer in a UInt8 array. I need to create a texture out of it in order to render it with OpenGL. On Android there is an easy way to decode my array to an RGB array for the texture. The code is the following:

    BitmapFactory.Options bO = new BitmapFactory.Options();
    bO.inJustDecodeBounds = false;
    bO.inPreferredConfig = Bitmap.Config.RGB_565;
    try {
        myBitmap = BitmapFactory.decodeByteArray(yuvbuffer, 0, yuvbuffer.length, bO);
    } catch (Throwable e) {
        // ...
    }

I need to decode the YUV buffer on iOS (Xcode 8.3.3, Swift 3.1) in order to pass it as data to the following method:

void glTexImage2D(  GLenum target,
                    GLint level,
                    GLint internalFormat,
                    GLsizei width,
                    GLsizei height,
                    GLint border,
                    GLenum format,
                    GLenum type,
                    const GLvoid * data);

How can I achieve this decoding?
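
To make the goal concrete, here is the kind of helper I am trying to write, sketched for a planar I420 buffer with full-range BT.601 values. The function name and parameters are placeholders, and I have not verified the coefficients against my data:

    import OpenGLES

    // Hypothetical helper: decodes a planar I420 buffer ([UInt8]) of a
    // `width` x `height` frame to RGBA on the CPU and uploads it with
    // glTexImage2D. Assumes full-range BT.601 YCbCr.
    func rgbaTexture(from yuv: [UInt8], width: Int, height: Int) -> GLuint {
        let ySize = width * height
        let cSize = (width / 2) * (height / 2)
        var rgba = [UInt8](repeating: 0, count: width * height * 4)

        for row in 0..<height {
            for col in 0..<width {
                let yIndex = row * width + col
                let cIndex = (row / 2) * (width / 2) + (col / 2)
                let y = Float(yuv[yIndex])
                let u = Float(yuv[ySize + cIndex]) - 128.0         // Cb plane
                let v = Float(yuv[ySize + cSize + cIndex]) - 128.0 // Cr plane

                // BT.601 full-range YCbCr -> RGB
                let r = y + 1.402 * v
                let g = y - 0.344 * u - 0.714 * v
                let b = y + 1.772 * u

                rgba[yIndex * 4 + 0] = UInt8(max(0, min(255, r)))
                rgba[yIndex * 4 + 1] = UInt8(max(0, min(255, g)))
                rgba[yIndex * 4 + 2] = UInt8(max(0, min(255, b)))
                rgba[yIndex * 4 + 3] = 255
            }
        }

        var texture: GLuint = 0
        glGenTextures(1, &texture)
        glBindTexture(GLenum(GL_TEXTURE_2D), texture)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
        glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA,
                     GLsizei(width), GLsizei(height), 0,
                     GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), rgba)
        return texture
    }

A per-pixel CPU conversion like this is obviously slow for video, which is why the alternative below interests me.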

ALTERNATIVE:

I've described the way I am decoding the YUV buffer on Android. Maybe there is another way to create a texture from YUV pixels without decoding it like this. I've already tried the following method using a fragment shader (Link), but it is not working for me: I'm getting a black screen or a green screen, but the image is never rendered. There are also some methods that use two separate buffers, one for Y and one for UV, but I don't know how to split my YUV buffer into Y and UV.

Do you have any examples/samples of YUV rendering that are not outdated and actually work?

  • I believe you could create a `UIImage` from the YUV buffer, which may then be used to extract an RGB buffer from it (there are posts about this conversion already). You should also be able to do it all in the shader, but from the result you are reporting there must be some issue with your original data or its layout. Unfortunately Swift is extremely stupid to work with when it comes to data buffers. I would use Objective-C for this, even if it means adding just a single class that uses it. In any case, you could add some code on how you used glTexImage2D to produce your results. – Matic Oblak Nov 02 '17 at 13:39
  • @MaticOblak do you have a link/method for how to extract the RGB buffer from the UIImage into a [UInt8] array? All the methods I've seen do not work in Swift 3 (see the sketch after these comments). – David Nov 07 '17 at 12:43
  • No, sorry. I would avoid doing this in Swift by all means. – Matic Oblak Nov 07 '17 at 13:01
  • And what would that look like in Objective-C? – David Nov 07 '17 at 13:32
  • You should be able to find a few procedures here on SO. The output should be a `uint8_t *`, or at least it may be interpreted as such. By using a bridging header, I assume you will receive an unsafe pointer in Swift, which can then be used to generate a UInt8 array using some sort of memory layout. So basically you can go in and out of Objective-C and continue your flow. – Matic Oblak Nov 07 '17 at 15:02
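
A minimal Swift 3 sketch of the `UIImage`-to-RGBA extraction discussed in these comments, using a `CGContext` instead of dropping to Objective-C. The helper name is made up for this example, and the alpha handling is an assumption:

    import UIKit

    // Hypothetical helper: draws a UIImage into a bitmap context and
    // returns its RGBA bytes as a [UInt8] array.
    func rgbaBytes(of image: UIImage) -> [UInt8]? {
        guard let cgImage = image.cgImage else { return nil }
        let width = cgImage.width
        let height = cgImage.height
        var pixels = [UInt8](repeating: 0, count: width * height * 4)

        let drawn = pixels.withUnsafeMutableBytes { (buffer) -> Bool in
            guard let context = CGContext(data: buffer.baseAddress,
                                          width: width,
                                          height: height,
                                          bitsPerComponent: 8,
                                          bytesPerRow: width * 4,
                                          space: CGColorSpaceCreateDeviceRGB(),
                                          bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
                else { return false }
            // Drawing the image fills `pixels` with RGBA data.
            context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
            return true
        }
        return drawn ? pixels : nil
    }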

1 Answer


If you only need to display that image/video, then you don't really need to convert it to an RGB texture. You can bind all 3 planes (Y/Cb/Cr) as separate textures and perform the YUV-to-RGB conversion in the fragment shader, with just three dot products.
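
As an illustration (not from the answer itself), a full-range BT.601 version of such a shader could look like the sketch below, embedded in a Swift string to match the question's setup. The sampler and varying names are placeholders:

    // Assumes the Y, Cb and Cr planes were uploaded as separate
    // GL_LUMINANCE textures bound to the samplers below.
    let yuvToRGBFragmentShader =
        "precision mediump float;\n" +
        "varying vec2 v_texCoord;\n" +
        "uniform sampler2D s_y;\n" +
        "uniform sampler2D s_cb;\n" +
        "uniform sampler2D s_cr;\n" +
        "void main() {\n" +
        "    float y  = texture2D(s_y,  v_texCoord).r;\n" +
        "    float cb = texture2D(s_cb, v_texCoord).r - 0.5;\n" +
        "    float cr = texture2D(s_cr, v_texCoord).r - 0.5;\n" +
        "    vec3 yuv = vec3(y, cb, cr);\n" +
        "    // The three dot products:\n" +
        "    float r = dot(yuv, vec3(1.0,  0.0,    1.402));\n" +
        "    float g = dot(yuv, vec3(1.0, -0.344, -0.714));\n" +
        "    float b = dot(yuv, vec3(1.0,  1.772,  0.0));\n" +
        "    gl_FragColor = vec4(r, g, b, 1.0);\n" +
        "}\n"

The vertex shader only needs to pass `v_texCoord` through; the three textures are bound to texture units 0/1/2 and the sampler uniforms set accordingly.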

  • How can I do this if I've only got one buffer (how can I split it into Y/Cb/Cr)? I need to render these packets; there are many of them, not only one buffer. – David Nov 07 '17 at 08:27
  • Typically you get YUV420 in a planar format, i.e. the chroma values aren't interleaved with the luma values but come as 3 adjacent arrays. You just find the offsets from the beginning of the buffer to each plane. The luminance plane always goes first, and Cb/Cr come as the next arrays, sometimes in a different order (Cb,Cr or Cr,Cb), depending on the exact format (I420 vs YV12). – Vlad Nov 07 '17 at 09:36
  • You mean that there are 3 arrays (Y/Cb/Cr) inside the one array (my YUV buffer)? How do I find the offset for each, then? Yes, it is YUV420! – David Nov 07 '17 at 09:57
  • You need to confirm whether you have planar I420 or YV12. On its own, YUV 4:2:0 defines only the chroma subsampling, not the storage format. Once you know that you have a planar format, you can easily calculate the offsets. The Y plane is always at the beginning. U (or V) comes next, at an offset of `pic_width*pic_height`. The next plane is at offset `pic_width*pic_height + (pic_width/2)*(pic_height/2)`. Chroma planes have their dimensions halved on each axis (see the sketch below). – Vlad Nov 07 '17 at 10:18
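
A minimal Swift sketch of that offset arithmetic and per-plane upload, assuming planar I420 and already-generated texture names. All identifiers are placeholders; for YV12 the last two planes swap:

    import OpenGLES

    // Uploads the three I420 planes as separate GL_LUMINANCE textures.
    func uploadPlanes(yuv: [UInt8], width: Int, height: Int, textures: [GLuint]) {
        let ySize = width * height
        let cWidth = width / 2
        let cHeight = height / 2
        let cSize = cWidth * cHeight

        // (offset into the buffer, plane width, plane height) for Y, Cb, Cr
        let planes = [(0,             width,  height),
                      (ySize,         cWidth, cHeight),
                      (ySize + cSize, cWidth, cHeight)]

        // Chroma rows are width/2 bytes, which may not be 4-byte aligned.
        glPixelStorei(GLenum(GL_UNPACK_ALIGNMENT), 1)

        for (i, plane) in planes.enumerated() {
            glActiveTexture(GLenum(GL_TEXTURE0 + Int32(i)))
            glBindTexture(GLenum(GL_TEXTURE_2D), textures[i])
            yuv.withUnsafeBytes { buffer in
                glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_LUMINANCE,
                             GLsizei(plane.1), GLsizei(plane.2), 0,
                             GLenum(GL_LUMINANCE), GLenum(GL_UNSIGNED_BYTE),
                             buffer.baseAddress! + plane.0)
            }
        }
    }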