
I am trying to capture the image data in the onFrameAvailable callback on a Google Tango running the Leibniz release. The header file says the buffer contains HAL_PIXEL_FORMAT_YV12 pixel data, the release notes say it contains YUV420SP, and the documentation says the pixels are in RGBA8888 format. I am a little confused; additionally, I don't really get image data, just a lot of magenta and green. Right now I am trying to convert from YUV to RGB similar to this one. I guess there is something wrong with the stride, too. Here is the code of the onFrameAvailable method:

int size = (int)(buffer->width * buffer->height);
for (int i = 0; i < buffer->height; ++i)
{
    for (int j = 0; j < buffer->width; ++j)
    {
        float y = buffer->data[i * buffer->stride + j];
        float v = buffer->data[(i / 2) * (buffer->stride / 2) + (j / 2) + size];
        float u = buffer->data[(i / 2) * (buffer->stride / 2) + (j / 2) + size + (size / 4)];

        const float Umax = 0.436f;
        const float Vmax = 0.615f;

        y = y / 255.0f;
        u = (u / 255.0f - 0.5f);
        v = (v / 255.0f - 0.5f);

        TangoData::GetInstance().color_buffer[3 * (i * width + j)]     = y;
        TangoData::GetInstance().color_buffer[3 * (i * width + j) + 1] = u;
        TangoData::GetInstance().color_buffer[3 * (i * width + j) + 2] = v;
    }
}

I am doing the yuv to rgb conversion in the fragment shader.

Has anyone ever obtained an RGB image on the Google Tango Leibniz release? Or has anyone had similar problems when converting from YUV to RGB?

guppy

3 Answers


YUV420SP (aka NV21) is correct for the time being. An explanation is here. In this format you have a width x height array where each element is a Y byte, followed by a width/2 x height/2 array where each element is a V byte and a U byte. Your code is implementing YV12, which has separate planar arrays for V and U instead of interleaving them in one array.

You mention that you are doing YUV to RGB conversion in a fragment shader. If all you want to do with the camera images is draw then you can use TangoService_connectTextureId() and TangoService_updateTexture() instead of TangoService_connectOnFrameAvailable(). This approach delivers the camera image to you already in an OpenGL texture that gives your fragment shader RGB values without bothering with the pixel format details. You will need to bind to GL_TEXTURE_EXTERNAL_OES (instead of GL_TEXTURE_2D), and your fragment shader would look something like this:

#extension GL_OES_EGL_image_external : require

precision mediump float;

varying vec4 v_t;
uniform samplerExternalOES colorTexture;

void main() {
   gl_FragColor = texture2D(colorTexture, v_t.xy);
}

If you really do want to pass YUV data to a fragment shader for some reason, you can do so without preprocessing it into floats. In fact, you don't need to unpack it at all - for NV21 just define a 1-byte texture for Y and a 2-byte texture for VU, and load the data as-is. Your fragment shader will use the same texture coordinates for both.
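A sketch of what such a shader could look like, assuming the Y plane was uploaded as a GL_LUMINANCE texture and the interleaved VU plane as a GL_LUMINANCE_ALPHA texture at half resolution (sampler names are illustrative; coefficients are the full-range BT.601 approximations):

```glsl
precision mediump float;

varying vec4 v_t;
uniform sampler2D yTexture;   // GL_LUMINANCE, width x height
uniform sampler2D vuTexture;  // GL_LUMINANCE_ALPHA, width/2 x height/2

void main() {
    float y = texture2D(yTexture, v_t.xy).r;
    // For NV21, V lands in the luminance channel and U in alpha.
    vec2 vu = texture2D(vuTexture, v_t.xy).ra;
    float v = vu.x - 0.5;
    float u = vu.y - 0.5;
    gl_FragColor = vec4(y + 1.402 * v,
                        y - 0.714 * v - 0.344 * u,
                        y + 1.772 * u,
                        1.0);
}
```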

rhashimoto
  • Thank you for the answer. It really helps to be certain which format the data is! I need to get the camera data and it is possible that I need to process it in the fragment shader or maybe even before that. I now tried to just process the grey data but that output didn't work right too. I just need to read the first half of the buffer height to get the data right? Or is the width and height actually Y data and the buffer is actually bigger? I created a color buffer, thus I don't work with a texture and therefore the shader doesn't use texture coordinates. (I need to do that for future tasks). – guppy May 14 '15 at 17:48
  • `width` and `height` will be the image size in pixels. Y is sampled at every pixel so `data` starts with `width` x `height` bytes of Y (possibly with padding depending on `stride`). – rhashimoto May 14 '15 at 18:06
  • Then the loop should be right :) Good to know! I encountered another problem since the Leibniz release. After a few frames the camera seems to crash which leads to problems with the rendering (just black pixels). This is the error message (the first of them says RAW instead of YUV): 05-14 19:05:18.878 190-5445/? E/camera-metadata﹕ /home/ubuntu/jobs/redwood_internal/RedwoodInternal/Redwood/common/player-engine/src/camera-metadata.cc:56 YUV failed to match frame 1545.014677 – guppy May 14 '15 at 19:15

By the way, in case anyone else experienced problems capturing image data on the Leibniz release: one of the developers told me that there is a bug concerning the camera and that it should be fixed with the Nash release.

The bug caused my buffer to be null, but when I used the Nash update I got data again. However, right now the problem is that the data I am getting doesn't make sense. I guess/hope the cause is that the tablet hasn't received the OTA update yet (there can be a gap between the actual release date and the OTA software update).

guppy

Just try the following code:

// C#
public bool YV12ToPhoto(byte[] data, int width, int height, out Texture2D photo)
{
    photo = new Texture2D(width, height);

    int uv_buffer_offset = width * height;

    for (int i = 0; i < height; i++)
    {
        for (int j = 0; j < width; j++)
        {
            int x_index = j;
            if (j % 2 != 0)
            {
                x_index = j - 1;
            }

            // Get the YUV color for this pixel.
            int yValue = data[(i * width) + j];
            int uValue = data[uv_buffer_offset + ((i / 2) * width) + x_index + 1];
            int vValue = data[uv_buffer_offset + ((i / 2) * width) + x_index];

            // Convert the YUV value to RGB.
            float r = yValue + (1.370705f * (vValue - 128));
            float g = yValue - (0.689001f * (vValue - 128)) - (0.337633f * (uValue - 128));
            float b = yValue + (1.732446f * (uValue - 128));

            Color co = new Color();
            co.b = b < 0 ? 0 : (b > 255 ? 1 : b / 255.0f);
            co.g = g < 0 ? 0 : (g > 255 ? 1 : g / 255.0f);
            co.r = r < 0 ? 0 : (r > 255 ? 1 : r / 255.0f);
            co.a = 1.0f;

            photo.SetPixel(width - j - 1, height - i - 1, co);
        }
    }

    return true;
}

This worked for me.

Hao Li
  • Welcome to StackOverflow! Please could you add a description to tell the op how your solution works? – edcs May 10 '17 at 14:52