
I'm starting with OpenGL, and I want to implement a tone mapping algorithm.

I know that my first step is to get the max/min luminance value of the HDR image.

I have the image in a texture attached to an FBO, and I'm not sure how to start.

I think the best way is to pass texture coordinates to a fragment shader and then go through all the pixels, somehow generating progressively smaller textures.

But I don't know how to do the downsampling manually until I reach a 1x1 texture. Do I need a lot of FBOs? Where do I create each new texture?

I've searched for a lot of information, but almost nothing is clear to me yet.

I would appreciate some help getting oriented and started.

EDIT 1. Here are my shaders, and how I pass texture coordinates to the vertex shader:

To pass texture coordinates and vertex positions, I draw a quad using a VBO:

void drawQuad(Shaders* shad){
  // Interleaved data: vertex position (3 floats) + texture coords (2 floats)
  std::vector<GLfloat> quadVerts = {
    -1,  1, 0, 0, 0,
    -1, -1, 0, 0, 1,
     1,  1, 0, 1, 0,
     1, -1, 0, 1, 1};

  GLuint quadVbo;
  glGenBuffers(1, &quadVbo);
  glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
  glBufferData(GL_ARRAY_BUFFER, quadVerts.size() * sizeof(GLfloat), quadVerts.data(), GL_STATIC_DRAW);

  // Shader attributes. Note: only generic attributes are needed here; the
  // fixed-function glEnableClientState/glVertexPointer calls (which also had
  // the wrong stride for this interleaved layout) have been removed.
  GLuint vVertex = shad->getLocation("vVertex");
  GLuint vUV = shad->getLocation("vUV");

  glEnableVertexAttribArray(vVertex);
  glVertexAttribPointer(vVertex, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), 0);
  glEnableVertexAttribArray(vUV);
  glVertexAttribPointer(vUV, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (void*)(3 * sizeof(GLfloat)));

  glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);  // Draw

  glDisableVertexAttribArray(vVertex);
  glDisableVertexAttribArray(vUV);
  glBindBuffer(GL_ARRAY_BUFFER, 0);
  glDeleteBuffers(1, &quadVbo);  // Don't leak one buffer per call
}

Vertex shader:

#version 420

in vec2 vUV;
in vec4 vVertex;
smooth out vec2 vTexCoord;

uniform mat4 MVP;
void main()
{
  vTexCoord = vec2(vUV.x * 1024, vUV.y * 512); // Scale [0,1] UVs to texel coords of the 1024x512 source
  gl_Position = MVP * vVertex;
}

And fragment shader:

#version 420

smooth in vec2 vTexCoord;
layout(binding=0) uniform sampler2D texHDR; // Tex image unit binding
layout(location=0) out vec4 color; //Frag data output location
void main(void)
{
    // Fetch the 2x2 block of source texels covered by this target pixel
    vec4 col[4];
    for(int i = 0; i <= 1; ++i){
        for(int j = 0; j <= 1; ++j){
            col[2*i + j] = texelFetch(texHDR, ivec2(2*vTexCoord.x + i, 2*vTexCoord.y + j), 0);
        }
    }
    color = (col[0] + col[1] + col[2] + col[3]) / 4.0; // Box-filter average
}

In this test code, I have a texture of size 1024x512. My idea is to render to a texture attached to `GL_COLOR_ATTACHMENT0` of an FBO (`layout(location=0)`) using these shaders, with the source image texture bound to texture unit `GL_TEXTURE0` (`layout(binding=0)`). My goal is to get the image from texHDR into my FBO texture at half the size.

MikeFadeCrew

1 Answer


For downsampling, all you need to do in the fragment shader is perform multiple texture lookups and combine them into the output fragment. For example, you could do 2x2 lookups, so each pass reduces the resolution in x and y by a factor of 2.

Let's say you want to reduce a 1024x1024 image. Then you would render a quad into a 512x512 image. Set it up so your vertex shader simply generates values for x and y between 0 and 511. The fragment shader then calls `texelFetch(tex, ivec2(2*x+i, 2*y+j), 0)`, where i and j loop from 0 to 1. Cache those four values, then output the min and max into your texture.

Andreas Haferburg
  • Thanks for your answer! I'm trying it, and I realise that when I pass texture coordinates to the vertex shader from the VBO, the values are `[0,1]`, so for my texture (which is 1024x512) I modified that line to `texelFetch(tex, ivec2((2*x+i)/1024,(2*y+j)/512))`, but the display shows an empty texture. – MikeFadeCrew Dec 03 '14 at 19:14
  • I don't understand what you mean. `texelFetch` takes texel coordinates in the range [0, width], [0, height], so your code doesn't really make sense if you reduce by a factor of 2. If you want to post code, you could edit your question. – Andreas Haferburg Dec 05 '14 at 12:07
  • I edited that, and adapted the code you suggested (`texelFetch(tex, ivec2(2*x+i, 2*y+j))`) by multiplying the coordinates I pass to the vertex shader (in [0..1]) by the texture size. Would that be correct? – MikeFadeCrew Dec 06 '14 at 10:17
  •
    Yea, in principle it's looking good. But you only ever talk about one texture size. If your input image is 1024x512, then your output image is 512x256, right? For each reduction step you need two textures, the source and the target, and the target is half the size of the source. You would want to render to the target texture with your shader, so the fragment shader gets called once for each target pixel, and each call processes four pixels from the source. – Andreas Haferburg Dec 06 '14 at 11:07
  •
    So that's one iteration. For the whole algorithm you want to ping pong between two textures. So step 1: Source = tex0 at 1024x512, target = tex1 at 512x256. Step 2: Source = tex1 at 512x256, target = tex0 at 256x128. Step 3: Source = tex0 at 256x128, target = tex1 at 128x64, etc. – Andreas Haferburg Dec 06 '14 at 11:12
  • Yes, I was just thinking about that. My question was how to handle all these textures until I get a 1x1 texture. So you're saying my FBO should have two textures (one on `GL_COLOR_ATTACHMENT0` and the other on `GL_COLOR_ATTACHMENT1`, for example), swapping which is target and which is source each iteration? And in the first iteration the source would be the original texture (in my case, the 1024x512 one)? – MikeFadeCrew Dec 06 '14 at 11:41
  • But with that process, how do I set the values of my fragment shader attributes? Would `layout(binding=0)` become `layout(binding=0/1)`, and `layout(location=0) out` become `layout(location=0/1) out`? Maybe I would have to change these values outside the shader, deleting the `(location=X)` and `(binding=0)`. – MikeFadeCrew Dec 06 '14 at 12:18
  • @MikeFadeCrew You need to rebind the textures before you call the shader, the shader stays the same. You just need to set the size of the textures as uniforms. – Andreas Haferburg Dec 08 '14 at 12:50
  • I think I get it, thanks. But now I want to get the max/min and average luminance values of the image. I think I have to create variables (min/max/average) in the fragment shader and update their values each time, but I can't figure out how to do it. – MikeFadeCrew Dec 09 '14 at 12:51
  • Pretty much the same as in C++. For that you would want to output a special RGB image, e.g. with .r = min, .g = max, .b = sum. But if you need more help with a specific problem you should start another question, this comment chain is getting a little long. :) – Andreas Haferburg Dec 09 '14 at 14:56