
I'm trying to create a tone-mapping operator that changes in real time.

I have the LDR image (after applying the operator to the HDR image) mapped as a spherical texture onto a sphere that constantly rotates about the Y axis, with the camera inside the sphere; here is an example:

I'm doing manual downsampling to calculate the log-average luminance of the image, and it works fine.
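
Each reduction pass looks roughly like this (a simplified sketch, not my exact code; the uniform and varying names are placeholders). The first pass writes log(delta + luminance) per pixel, and the later passes keep averaging 2x2 blocks between the ping-pong textures until the result is 1x1:

```glsl
#version 330 core

// One reduction step of the log-average-luminance downsampling:
// average a 2x2 block of the previous (larger) ping-pong texture.
uniform sampler2D uPrevLevel;  // previous, larger level
uniform vec2 uTexelSize;       // 1.0 / resolution of uPrevLevel

in vec2 vUV;
out vec4 fragColor;

void main()
{
    float sum = 0.0;
    sum += texture(uPrevLevel, vUV + uTexelSize * vec2(-0.5, -0.5)).r;
    sum += texture(uPrevLevel, vUV + uTexelSize * vec2( 0.5, -0.5)).r;
    sum += texture(uPrevLevel, vUV + uTexelSize * vec2(-0.5,  0.5)).r;
    sum += texture(uPrevLevel, vUV + uTexelSize * vec2( 0.5,  0.5)).r;
    fragColor = vec4(sum * 0.25);
}
```

Taking exp() of the final 1x1 value gives the log-average luminance used by the operator.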

Now I want the operator to adapt in real time, so I'd like to calculate the log-average luminance only for the portion of the image visible in each frame, and use that to recompute the operator's effect on the fly.

To do that, I need to know which texture coordinates are on screen at each frame, so I can include them and discard the rest during downsampling.

Any help?

  • Are you doing that in the fragment shader? If so, everything outside the bounds of the window gets discarded anyway. – datenwolf Mar 14 '15 at 11:55
  • No, the fragment shader does the downsampling over the whole image (all the coords); I need to do it only for the coords on screen, which are different at each frame, but I don't know how to determine them. – MikeFadeCrew Mar 14 '15 at 12:25
  • Could you please post some source code? I highly doubt that the fragment shader has two nested for loops that iterate over the whole picture. Of course you may be doing the processing step into an FBO and then showing that. But as long as the processing happens in the fragment shader that renders to the on-screen framebuffer, the fragment shader gets executed for just those pixels which are visible, even if the texture coordinates passed to the fragment shader span outside the visible area. – datenwolf Mar 14 '15 at 13:31
  • The problem is that you're dealing with curvilinear coordinates here, and depending on the particular mapping used there is likely no closed-form solution for the inverse mapping; which means you'd have to numerically approximate an inverse mapping to find the input image boundaries corresponding to the output. So if you can implement the culling without jumping through that hoop – and I'm certain that your naive implementation actually already culls the invisible fragments – then you should go for that. – datenwolf Mar 14 '15 at 13:36
  • I do the downsampling using off-screen rendering into an FBO; then I use the same FBO (with another texture attachment) to fill the LDR texture using the tone-mapping operator. Then I render that onto the sphere on screen. – MikeFadeCrew Mar 14 '15 at 14:01
  • Okay, so you have to go that inverse mapping route. My suggestion: Try finding the inverse mapping of the viewport boundaries from the sphere to the original FBO. If this is a true spherical projection it actually has a closed solution. Use this inverse to first draw a stencil to your processing FBO to mask out those pixels that you don't need (leave a few pixels around the edges for some filtering margin in the final sampler); after masking the FBO draw to it using your processing shader. – datenwolf Mar 14 '15 at 14:45
  • Since you're already downsampling, you can do this for _any_ arbitrary geometry if you write an RGBA min-max mipmap texture containing the u,v coordinates themselves. R and G would contain the largest U and V values and B and A would contain the smallest U and V coordinates. When building each mipmap LOD, sample the 4 original neighbor texels and store the min and max; repeat until you hit the 1x1 LOD. Then it's a single fetch of the lowest-resolution LOD to determine the min/max; practically the same thing you're already doing for the tonemap (a reduction pass like this is sketched after the thread). – Andon M. Coleman Mar 14 '15 at 23:26
  • I get your point, Andon, but my doubt is how to know what those coordinates (the largest and smallest) are at each moment, because the sphere is always rotating and I see a different part of the image at each frame. – MikeFadeCrew Mar 16 '15 at 20:00
  • @MikeFadeCrew: Maybe you don't get my point? This sort of thing would be done by rendering that sphere to a Framebuffer Object and outputting the texture coordinates at that instant as a color to a texture attachment. Then you could process that texture in a second pass and generate the mipmap LODs for it (both passes are sketched after the thread). This would be an inefficient way of doing it if you have a well-defined surface like a sphere, but if you wanted to know the range of coordinates on screen for some really unusual shape and/or projection it is a natural approach to the problem. I use something similar for volumetric shadows. – Andon M. Coleman Mar 16 '15 at 23:17
  • I see the process: after rendering the sphere with the LDR image on screen, I have to render it again into an FBO, storing the u,v coords in another texture. After that, I do the downsampling only for a portion. But I don't see how to fill in those coords. In each shader invocation I have the current u,v coords, so I can store them in _r_ and _g_, but how do I know which are the largest or smallest? I don't see that; if I just do that, the auxiliary texture would only have the u,v coordinates in each texel. For example, at texel (5,6) of that auxiliary texture I understand I would have '5' as the _r_ component and '6' as the _g_ component. – MikeFadeCrew Mar 21 '15 at 13:20
  • @andonM.Coleman I'm still stuck. Could you explain the process in more detail? I can't see how to write the shader that keeps only the on-screen coordinates, or how to know, during the downsampling, the size of the texture to fill and reduce to 1x1. Right now I'm downsampling using a 'ping-pong' technique between the source texture and an empty one. – MikeFadeCrew Apr 03 '15 at 09:33
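
A minimal sketch of the first pass Andon describes, rendering the sphere off-screen and writing its interpolated texture coordinates out as a color (the #version, varying name, and clear values are assumptions, not something stated in the thread):

```glsl
#version 330 core

// Render the sphere into an FBO and write its interpolated texture
// coordinates to the color attachment. The UVs are duplicated into RG and
// BA so the min/max reduction below can work on both halves. Clear the FBO
// beforehand so uncovered pixels don't disturb the reduction, e.g. to
// (0, 0, 1, 1): zeros for the max channels, ones for the min channels.
in vec2 vUV;   // the sphere's texture coordinates from the vertex shader
out vec4 fragColor;

void main()
{
    fragColor = vec4(vUV, vUV);
}
```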
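
A corresponding sketch of the min/max mipmap reduction: each pass reads a 2x2 block of the previous level and keeps the extremes, until a single 1x1 texel bounds the visible texture coordinates (again, all names here are illustrative):

```glsl
#version 330 core

// One min/max reduction step over the UV texture written by the pass above.
// R,G hold the largest U,V seen so far; B,A hold the smallest.
uniform sampler2D uPrevLOD;   // previous, larger min/max UV level
uniform vec2 uTexelSize;      // 1.0 / resolution of uPrevLOD

in vec2 vUV;
out vec4 fragColor;

void main()
{
    vec4 t0 = texture(uPrevLOD, vUV + uTexelSize * vec2(-0.5, -0.5));
    vec4 t1 = texture(uPrevLOD, vUV + uTexelSize * vec2( 0.5, -0.5));
    vec4 t2 = texture(uPrevLOD, vUV + uTexelSize * vec2(-0.5,  0.5));
    vec4 t3 = texture(uPrevLOD, vUV + uTexelSize * vec2( 0.5,  0.5));

    vec2 maxUV = max(max(t0.rg, t1.rg), max(t2.rg, t3.rg));
    vec2 minUV = min(min(t0.ba, t1.ba), min(t2.ba, t3.ba));
    fragColor = vec4(maxUV, minUV);
}
```

The resulting min/max rectangle can then be used to restrict the log-luminance downsampling to the visible portion of the texture.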

0 Answers