
Problem Explanation

I am currently implementing point lights for a deferred renderer and am having trouble determining where the heavy pixelization/triangulation is coming from; it is only noticeable near the borders of lights.

The problem appears to be caused by a loss of precision somewhere, but I have been unable to track down the precise source. Normals are an obvious possibility, but I have a classmate who is using DirectX and handling his normals in a similar manner with no issues.

From about 2 meters away in our game's units (64 units/meter):

deferred_full_scene

A few centimeters away. Note that the "pixelization" does not change size in the world as I approach it. However, it will appear to swim if I change the camera's orientation:

deferred_closeup

A comparison with a closeup from my forward renderer, which demonstrates the spherical banding one would expect with an RGBA8 render target (only 256 possible values, 0-255, per color channel). Note that in my deferred picture the back walls exhibit the normal spherical banding:

closeup_forward

The light volume is shown here as the green wireframe:

light_volume_and_scene

As can be seen the effect isn't visible unless you get close to the surface (around one meter in our game's units).


Position reconstruction

First, I should mention that I am using a spherical mesh to render only the portion of the screen that the light overlaps. I render only the back-faces, and only where their depth is greater than or equal to the value in the depth buffer, as suggested here.
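
Roughly, the GL state that implies looks like this (a minimal sketch; the additive blending at the end is just the usual choice for light accumulation and isn't something described above):

glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);          // draw only the back-faces of the light volume mesh
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_GEQUAL);        // pass where the back-face is at or behind the scene depth
glDepthMask(GL_FALSE);         // light volumes must not write depth
glEnable(GL_BLEND);            // assumption: additive accumulation of light contributions
glBlendFunc(GL_ONE, GL_ONE);
// ... draw the sphere mesh once per point light ...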

To reconstruct the camera-space position of a fragment, I take the vector from the camera to the camera-space position of the fragment on the light volume, normalize it, and scale it by the linear depth from my gbuffer. This is a sort of hybrid of the methods discussed here (using linear depth) and here (spherical light volumes).

position_reconstruction
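
In shader terms the reconstruction looks roughly like this (a sketch with illustrative names; it assumes the R32F target stores the camera-space distance to the surface, i.e. the length of its view-space position, as the e_dist_32f name in the gbuffer setup below suggests):

in vec3 v_view_pos;              // view-space position of this fragment on the light volume
uniform sampler2D u_dist_tex;    // the R32F gbuffer target
uniform vec2 u_screen_size;      // viewport size in pixels

vec3 reconstruct_view_pos()
{
  vec2 uv  = gl_FragCoord.xy / u_screen_size;
  float d  = texture(u_dist_tex, uv).r;   // linear depth/distance from the gbuffer
  vec3 ray = normalize(v_view_pos);       // direction from the eye through this fragment
  return ray * d;                         // view-space position of the surface being shaded
}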


Geometry Buffer

My gBuffer setup is:

enum render_targets { e_dist_32f = 0, e_diffuse_rgb8, e_norm_xyz8_specpow_a8, e_light_rgb8_specintes_a8, num_rt };
//...
GLint internal_formats[num_rt] = {  GL_R32F, GL_RGBA8, GL_RGBA8, GL_RGBA8 };
GLint formats[num_rt]          = {   GL_RED,  GL_RGBA,  GL_RGBA,  GL_RGBA };
GLint types[num_rt]            = { GL_FLOAT, GL_FLOAT, GL_FLOAT, GL_FLOAT };
for(uint i = 0; i < num_rt; ++i)
{
  glBindTexture(GL_TEXTURE_2D, _render_targets[i]);
  glTexImage2D(GL_TEXTURE_2D, 0, internal_formats[i], _width, _height, 0, formats[i], types[i], nullptr);
}
// Separate non-linear depth buffer used for depth testing
glBindTexture(GL_TEXTURE_2D, _depth_tex_id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, _width, _height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
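
For completeness, the targets are attached to the FBO along these lines (the attachment code isn't really central to the question, so treat this as a sketch; _fbo_id is an illustrative name):

glBindFramebuffer(GL_FRAMEBUFFER, _fbo_id);
GLenum draw_buffers[num_rt];
for(uint i = 0; i < num_rt; ++i)
{
  glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, _render_targets[i], 0);
  draw_buffers[i] = GL_COLOR_ATTACHMENT0 + i;
}
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, _depth_tex_id, 0);
glDrawBuffers(num_rt, draw_buffers);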


1 Answer


Normal Precision

The problem was that my normals just didn't have enough precision. At 8 bits per component there are only 256 discrete values, which for a normal component mapped from [-1,1] into [0,1] means a step of 2/255 ≈ 0.008 between representable values. Examining the normals in my gbuffer overlaid on top of the lighting showed a 1-to-1 correspondence between normal value and lit "pixel" value.

I am unsure why my classmate does not get the same issue (he is going to investigate further).

normal_precision

After some more research I found that a term for this is quantization. Another example of it can be seen here with a specular highlight on page 19.


Solution

After changing my normal render target to RG16F the problem is resolved.

Using the method suggested here to store and retrieve normals, I get the following results:

no_quantization

I now need to store my normals more compactly (I only have room for 2 components). This is a good survey of techniques if anyone finds themselves in the same situation.
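
For illustration, one such two-component scheme (a Lambert azimuthal equal-area projection of the view-space normal, similar to the encodings that survey covers - not necessarily the exact code I used) looks roughly like this:

vec2 encode_normal(vec3 n)   // n is a unit view-space normal
{
  // degenerate when n.z == -1 (a normal pointing directly away from the camera)
  float f = sqrt(8.0 * n.z + 8.0);
  return n.xy / f + 0.5;     // result is in [0,1]
}

vec3 decode_normal(vec2 enc)
{
  vec2 fenc = enc * 4.0 - 2.0;
  float f   = dot(fenc, fenc);
  float g   = sqrt(1.0 - f / 4.0);
  return vec3(fenc * g, 1.0 - f / 2.0);
}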


[EDIT 1]

As both Andon and GuyRT pointed out in the comments, 16 bits is overkill for what I need. I've switched to RGB10_A2 as they suggested, and it gives very satisfactory results, even on rounded surfaces. The extra 2 bits per component help a lot (1024 discrete values instead of 256).
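
In terms of the gbuffer setup in the question, the only allocation change is the internal format of the normal target (a sketch; with only 2 alpha bits, whatever was in the alpha channel, such as the specular power in my original layout, presumably has to live elsewhere):

internal_formats[e_norm_xyz8_specpow_a8] = GL_RGB10_A2;
// format/type can stay GL_RGBA / GL_FLOAT since no pixel data is uploaded (nullptr)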

Here's what it looks like now.

rgb10_a2_result

It should also be noted (for anyone referencing this post in the future) that the image I posted for RG16F has some undesirable banding caused by the method I was using to compress/decompress the normal (the encoding introduced some error).


[EDIT 2]

After discussing the issue some more with a classmate (who is using RGB8 with no ill effects), I think it is worth mentioning that I might just have the perfect combination of elements to make this appear. The game I'm building this renderer for is a horror game that places you in pitch-black environments with a sonar-like ability, so unlike a normally lit scene there aren't a number of lights hitting surfaces from different angles (my classmate's environments are all very well lit - they're making an outdoor racing game). That, combined with the fact that the artifact only appears on very round objects viewed relatively close up, might be why I provoked it. This is all just a (slightly educated) guess on my part.

  • That should not be producing these results. This looks more like an issue to do with texture filtering. I have successfully implemented deferred shading many times using an 8-bit unsigned normalized image format for the normal buffer. There are only a few algorithms where you benefit from higher precision, and even then I have found that `GL_RGB10_A2` works very well (same storage requirements as RGB8, but 2-bits of extra precision for X,Y,Z and an extra 2-bits you might be able to encode something in). 16-bit floating-point is overkill for the normals, it is a major waste of memory bandwidth. – Andon M. Coleman Mar 10 '14 at 19:03
  • I have seen a fair amount of conflicting information on this, with some people saying that 8 bits per component is definitely not enough, meanwhile others have had no issue with it. When you've implemented it, have you done any kind of post-process blurring or anti-aliasing that could have perhaps resolved the issue (I currently have neither)? Regarding texture filtering, for all textures in my gbuffer I use nearest sampling in all cases. It should be noted that this effect is only noticeable on rounded surfaces where the normal is changing steadily in small amounts (ex. surface of a sphere). – Peter Clark Mar 10 '14 at 19:14
  • It will definitely go away if you linearly interpolate your normal buffer (that is the only form of anti-aliasing I used). I would highly suggest that you do this, and unlike sampling from a mipmapped normal map you should not need to re-normalize the normals after sampling with a linear filter. Is your GBuffer resolution different from the default framebuffer, by the way? Linear filtering works very well for GBuffers of differing resolution; not as well as bilateral filtering, but that is a very advanced thing to implement. – Andon M. Coleman Mar 10 '14 at 19:19
  • All the textures in my gbuffer are the same resolution as my viewport currently. I am interpolating the normal on the mesh itself when I write it into the gbuffer, and this is where I get the clumps. I don't think interpolating the normal buffer would help in this case because each clump of values is only one away from its neighbors. For example, 3 clumps next to each other might have the values [**19**, 61, 247], [**20**, 61, 247], [**21**, 61, 247]. Linearly interpolating 19->20 or 20->21 wouldn't yield any more accurate results (at least it doesn't make sense to me that it would). – Peter Clark Mar 10 '14 at 19:30
  • I noticed exactly the same artifact when I used 8-bit (signed) normals. Again, it was only noticeable close to rounded surfaces. I can't remember the format I changed to, but I think the extra couple of bits `GL_RGB10_A2` (as suggested by Andon) gives you should be enough. – GuyRT Mar 11 '14 at 14:21
  • @AndonM.Coleman you should post your top comment as an answer. – Connor Hollis Mar 11 '14 at 19:57
  • I've edited my answer to reflect that 10 bits gives a very satisfactory result. Just to clarify, what do you mean by signed? If you're referring to the normal: when I was using 8 bits I would shift it from [-1,1] to [0,1]. If you're referring to the internal storage: from my understanding each component in RGBA8 is a ubyte. I ask because Andon says 8-bit unsigned worked for him, but you said 8-bit signed didn't work for you, so I'm wondering if I'm missing something here. – Peter Clark Mar 11 '14 at 19:59
  • @PeterClark: There is a new type of fixed-point (normalized integer) image format available in GL3 called `Signed Normalized (SNORM)`. `GL_RGBA8` is what is known as ***unsigned normalized***. GL does not have a special type designation for this (it is implied), but D3D10+ does (it has formats called SNORM and formats called UNORM). In any event, a signed normal format like GuyRT is describing would be `GL_RGBA8_SNORM`, and it avoids having to scale and bias the values into the range [0,1] for storage and then back to [-1,1] for use. – Andon M. Coleman Mar 11 '14 at 20:07