
I'm working on displaying YUV pictures with OpenGL ES 3.0 on Android. I convert YUV pixels to RGB in a fragment shader. First, I need to pass the YUV pixel data to OpenGL as a texture. When the YUV data is 8-bit depth, the code below works:

    GLenum glError;
    GLuint tex_y;
    glGenTextures(1, &tex_y);
    glBindTexture(GL_TEXTURE_2D, tex_y);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // pix_y is y component data.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, y_width, y_height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, pix_y);
    glGenerateMipmap(GL_TEXTURE_2D);
    glError = glGetError();
    if (glError != GL_NO_ERROR) {
        LOGE(TAG, "y, glError: 0x%x", glError);
    }
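
For context, this is roughly the fragment shader I use for the 8-bit path (a minimal sketch; the sampler and varying names and the full-range BT.601 coefficients are my own assumptions, not something fixed by the format):

    // Minimal sketch of the fragment shader for the 8-bit path.
    // The sampler/varying names and the full-range BT.601 coefficients
    // are my own choices; use the matrix that matches your content.
    static const char *kFragSrc = R"(#version 300 es
    precision mediump float;
    in vec2 v_texCoord;
    out vec4 fragColor;
    uniform sampler2D tex_y; // Y plane
    uniform sampler2D tex_u; // U plane
    uniform sampler2D tex_v; // V plane
    void main() {
        float y = texture(tex_y, v_texCoord).r;
        float u = texture(tex_u, v_texCoord).r - 0.5;
        float v = texture(tex_v, v_texCoord).r - 0.5;
        fragColor = vec4(y + 1.402 * v,
                         y - 0.344 * u - 0.714 * v,
                         y + 1.772 * u,
                         1.0);
    }
    )";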

However, there are YUV formats with higher bit depth, like YUV420P10LE. I don't want to lose the benefit of the extra depth, so I convert YUV data with more than 8 bits per sample to 16 bits by shifting (for example, for yuv420p10le: y_new = y_old << 6). Now I want to create a 16-bit depth texture, but glTexImage2D always fails with GL_INVALID_OPERATION. Below is the code to create the 16-bit texture:

    // The rest is the same as for the 8-bit texture.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, y_width, y_height, 0, GL_RED_INTEGER, GL_UNSIGNED_SHORT, pix_y);
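
For reference, here is how I do the shift mentioned above (a minimal sketch; it assumes the decoder gives me a tightly packed plane with one 10-bit sample per uint16_t, as yuv420p10le does):

    #include <cstdint>
    #include <cstddef>

    // Widen 10-bit samples to the full 16-bit range: y_new = y_old << 6.
    void widen10to16(uint16_t *plane, size_t count) {
        for (size_t i = 0; i < count; ++i) {
            plane[i] = static_cast<uint16_t>(plane[i] << 6);
        }
    }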

I've tried many of the format combinations listed in https://registry.khronos.org/OpenGL-Refpages/es3.0/html/glTexImage2D.xhtml, but none of them succeeded.

By the way, I also tested on macOS with OpenGL 3.3 and it succeeded; there I just pass the data as a single channel of an RGB texture:


    // Code on macOS OpenGL 3.3. dataFormat depends on the Y bit depth:
    // GL_UNSIGNED_BYTE or GL_UNSIGNED_SHORT. With this configuration I can
    // read the data from the RED channel of the texture, normalized to
    // [0.0f, 1.0f]. However, this configuration doesn't work on OpenGL ES 3.0.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, y_width, y_height, 0, GL_RED, dataFormat, y);

  • For a specific format you have to use one of the Sized Internal Formats (see [`glTexImage2D`](https://registry.khronos.org/OpenGL-Refpages/es3.0/html/glTexImage2D.xhtml)). Unfortunately there is no `GL_R16` format. However, you can use `GL_R32F`. – Rabbid76 Dec 15 '22 at 16:02
  • @Rabbid76 Yes, I tried internalFormat=GL_R32F, format=GL_RED, type=GL_FLOAT yesterday and it works (in fact GL_R16F, GL_RED, GL_HALF_FLOAT also works, but C++ doesn't have a 16-bit float type). Now I use float as the texture data type; it's also convenient for supporting HDR. – zuguorui Dec 16 '22 at 06:40
  • `GL_RED`, `GL_HALF_FLOAT` or `GL_RED`, `GL_FLOAT` only specifies the type of the source data, but does not affect the internal format of the texture. The internal format is specified with `GL_R32F` or `GL_R16F`. The image data is converted when transferred from the CPU to the GPU. – Rabbid76 Dec 16 '22 at 07:39
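
Following the comments, a minimal sketch of the upload that ended up working (converting the samples to float on the CPU; the pix_y_f buffer and the division by 65535.0f are my own choices, made so the shader samples land in [0.0, 1.0]):

    #include <GLES3/gl3.h>
    #include <vector>

    // Working combination from the comments:
    // internalFormat = GL_R32F, format = GL_RED, type = GL_FLOAT.
    // Assumes pix_y now points at the shifted uint16_t samples.
    // Float textures are uploaded as-is (no normalization on upload),
    // so I divide by 65535.0f myself to get values in [0.0, 1.0].
    std::vector<float> pix_y_f(static_cast<size_t>(y_width) * y_height);
    for (size_t i = 0; i < pix_y_f.size(); ++i) {
        pix_y_f[i] = pix_y[i] / 65535.0f;
    }
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, y_width, y_height, 0,
                 GL_RED, GL_FLOAT, pix_y_f.data());

Note that linear filtering of GL_R32F textures requires the OES_texture_float_linear extension on ES 3.0, while GL_R16F is filterable out of the box; the ES 3.0 format table also accepts GL_FLOAT source data for a GL_R16F internal format, so half-float CPU data isn't strictly required.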

0 Answers