
Unfortunately, many tutorials describe the TBN matrix as a de facto must for any kind of normal mapping without going into much detail on why that's the case, which left me confused about one particular scenario.

Let's assume I need to apply bump/normal mapping to a simple quad on screen, which could later be transformed, with its normals adjusted by the corresponding normal matrix.

If the quad's surface normal in "rest position", before any transformation, points exactly in the positive Z direction (OpenGL), isn't it sufficient to just transform the vector read from the normal texture map with the model matrix?

vec3 bumpnormal = texture2D(texture, Coord.xy).rgb;
bumpnormal = bumpnormal * 2.0 - 1.0;    // unpack from [0, 1] to [-1, 1]
bumpnormal = mat3(model) * bumpnormal;  // assuming no scaling occurred

I do understand how things would change if we were computing the bumpnormal on a cube without taking into account that different faces with the same texture coordinates actually have different orientations, which leads me to the next question.

Assuming that an entire model uses only a single normal-map texture, without any repetition of texture coordinates across different parts of the model, is it possible to skip the six floats of tangent/bitangent data stored per vertex and the computation of the TBN matrix altogether, and still get the same results by simply transforming the bumpnormal with the model matrix? If that's the case, why isn't it the preferred solution?

Row Rebel

2 Answers


If the quad's surface normal in "rest position", before any transformation, points exactly in the positive Z direction (OpenGL), isn't it sufficient to just transform the vector read from the normal texture map with the model matrix?

No.

Let's say the value you get from the normal map is (1, 0, 0). So that means the normal in the map points right.

So... where is that exactly? Or more to the point, what space are we in when we say "right"?

Now, you might immediately think that right is just +X in model space. But the thing is, it isn't. Why?

Because of your texture coordinates.

If your model-space matrix performs a 90 degree rotation, clockwise, around the model-space Z axis, and you transform your normal by that matrix, then the normal you get should go from (1, 0, 0) to (0, -1, 0). That is what is expected.
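
To make the arithmetic concrete, here is that rotation written out in GLSL, purely as an illustration (this matrix is not part of anyone's code here):

// 90-degree clockwise rotation around model-space Z.
// GLSL matrices are column-major: each group of three values
// below is where one basis vector ends up.
mat3 rotZ = mat3( 0.0, -1.0, 0.0,     // X maps to (0, -1, 0)
                  1.0,  0.0, 0.0,     // Y maps to (1,  0, 0)
                  0.0,  0.0, 1.0 );   // Z is unchanged
vec3 n = rotZ * vec3(1.0, 0.0, 0.0);  // n == (0.0, -1.0, 0.0)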

But if you have a square facing +Z and you rotate it by 90 degrees around the Z axis, shouldn't that produce the same result as rotating the texture coordinates? After all, it's the texture coordinates that define what U and V mean relative to model space.

If the top-right texture coordinate of your square is (1, 1), and the bottom-left is (0, 0), then "right" in texture space means "right" in model space. But if you rotate the mapping, so that (0, 0) is at the top-left, (1, 0) is at the bottom-left, and (1, 1) is at the bottom-right, then "right" in texture space (the direction of increasing U) has become "down" (-Y) in model space.

If you ignore the texture coordinates (the mapping from model-space positions to locations on the texture), then your (1, 0, 0) normal will still be pointing "right" in model space. But your texture mapping says that it should be pointing down, (0, -1, 0), in model space, just like it would have if you had rotated model space itself.

With a tangent-space normal map, normals stored in the texture are relative to how the texture is mapped onto a surface. Defining a mapping from model space into the tangent space (the space of the texture's mapping) is what the TBN matrix is for.
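
In shader terms: if you interpolate a per-vertex tangent, bitangent and normal, the matrix built from them takes the sampled normal out of tangent space (its transpose goes the other way, into tangent space, which is the direction described above; either works depending on where you do the lighting). A minimal fragment-shader sketch, with made-up variable and uniform names:

varying vec2 vTexCoord;
varying vec3 vTangent;    // model-space direction of increasing U
varying vec3 vBitangent;  // model-space direction of increasing V
varying vec3 vNormal;     // model-space surface normal

uniform sampler2D normalMap;
uniform mat4 model;

void main()
{
    // sample and unpack the tangent-space normal from [0, 1] to [-1, 1]
    vec3 n = texture2D(normalMap, vTexCoord).rgb * 2.0 - 1.0;

    // the columns are the tangent-space axes expressed in model space,
    // so this takes the normal from tangent space into model space
    mat3 TBN = mat3(normalize(vTangent),
                    normalize(vBitangent),
                    normalize(vNormal));
    vec3 worldNormal = normalize(mat3(model) * (TBN * n));  // assuming no non-uniform scaling

    gl_FragColor = vec4(worldNormal * 0.5 + 0.5, 1.0);      // visualize the result
}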

This gets more complicated as the mapping between the object and the texture gets more complex. You could fake it for the case of a quad, but for a general mesh it needs to be computed algorithmically. The mapping is not constant, after all; it involves stretching and skewing as different triangles use different texture coordinates.
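
The usual per-triangle construction, normally run offline (or in your model loader) and then averaged and orthogonalized per vertex, solves the triangle's edge vectors against its UV deltas. A sketch in GLSL syntax for consistency with the rest of this answer; the function and variable names are just illustrative:

// Solve  e1 = duv1.x * T + duv1.y * B
//        e2 = duv2.x * T + duv2.y * B
// for the model-space tangent T (along +U) and bitangent B (along +V).
void triangleTangentBasis(vec3 p0, vec3 p1, vec3 p2,
                          vec2 uv0, vec2 uv1, vec2 uv2,
                          out vec3 T, out vec3 B)
{
    vec3 e1   = p1 - p0;
    vec3 e2   = p2 - p0;
    vec2 duv1 = uv1 - uv0;
    vec2 duv2 = uv2 - uv0;

    float r = 1.0 / (duv1.x * duv2.y - duv1.y * duv2.x);  // UV-area determinant

    T = (e1 * duv2.y - e2 * duv1.y) * r;
    B = (e2 * duv1.x - e1 * duv2.x) * r;
}

Note that T and B come entirely from the texture coordinates: flip or rotate the UVs of the quad above and the resulting frame rotates with them, which is exactly the "right becomes down" effect.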

Now, there are object-space normal maps, which store normals that are already in model space. These avoid the need for a tangent-space basis matrix, but they intimately tie a normal map to the object it is used with. You can't even do basic texture-coordinate animation, let alone reuse a normal map across two separate objects. And they're pretty much unworkable if you're doing bone-weight skinning, since the triangles change shape and orientation relative to the pose the normals were baked for.

Nicol Bolas

http://www.thetenthplanet.de/archives/1180

vec3 perturb_normal( vec3 N, vec3 V, vec2 texcoord )
{
    // assume N, the interpolated vertex normal and 
    // V, the view vector (vertex to eye)
    vec3 map = texture2D( mapBump, texcoord ).xyz;
#ifdef WITH_NORMALMAP_UNSIGNED
    map = map * 255./127. - 128./127.;
#endif
#ifdef WITH_NORMALMAP_2CHANNEL
    map.z = sqrt( 1. - dot( map.xy, map.xy ) );
#endif
#ifdef WITH_NORMALMAP_GREEN_UP
    map.y = -map.y;
#endif
    mat3 TBN = cotangent_frame( N, -V, texcoord );
    return normalize( TBN * map );
}

Basically, I think you are describing this method, which I agree is superior in most respects. It makes later calculations much cleaner instead of devolving into a mess of space transformations.

Instead of calculating everything in tangent space, you just find the correct world-space normal. That's what I am using in my projects, and I am very happy I found this method.
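
For reference, the `cotangent_frame` helper called above comes from the same article; it rebuilds the tangent frame per pixel from screen-space derivatives, roughly like this (paraphrased here, so check the link for the authoritative version):

mat3 cotangent_frame( vec3 N, vec3 p, vec2 uv )
{
    // edge vectors of the pixel's triangle, in view space and in UV space
    vec3 dp1  = dFdx( p );
    vec3 dp2  = dFdy( p );
    vec2 duv1 = dFdx( uv );
    vec2 duv2 = dFdy( uv );

    // solve the linear system (the same system as the offline tangent
    // computation, just evaluated per pixel)
    vec3 dp2perp = cross( dp2, N );
    vec3 dp1perp = cross( N, dp1 );
    vec3 T = dp2perp * duv1.x + dp1perp * duv2.x;
    vec3 B = dp2perp * duv1.y + dp1perp * duv2.y;

    // construct a scale-invariant frame
    float invmax = inversesqrt( max( dot( T, T ), dot( B, B ) ) );
    return mat3( T * invmax, B * invmax, N );
}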

Yudrist
    "*It makes later calculations much more clean instead of devolving into a mess of space transformation.*" You do realize that the only difference between what you've done here and regular tangent-space normal computations is how you compute the matrix, right? If you want to transform normals from tangent space to model space, you can do that with 3 vectors interpolated from per-vertex data. They would simply represent the inverse of the usual TBN matrix, which you can do offline as a post-process. – Nicol Bolas Jul 18 '16 at 19:10
  • The reason to do it the other way is performance. Your way does a matrix multiply into model space, followed by another transform into view space, where you finally do your lighting. If you transform the view into tangent space, you only have one matrix multiply to do. – Nicol Bolas Jul 18 '16 at 19:12
  • Thanks, captain obvious. The performance is better on modern hardware anyway. – Yudrist Jul 18 '16 at 19:29
  • *"The performance is better on modern hardware"* than what? This method is building the tangent space matrix by calculating tangents and bitangents from screenspace derivatives *per pixel*, I highly doubt that this is faster than having an additional vec4 in your vertex structure and building the tangent frame in the vertex shader. Also this is clearly not what the OP asked for... – LJᛃ Jul 18 '16 at 23:09
  • @LJᛃ: Well, it is a per-vertex cost. Two extra normals cost 4 bytes each per-vertex. Plus, more interpolants means higher interpolation costs. This is especially important for tile-based hardware which has to store per-vertex data. And the cost of doing a derivative is pretty low. It's just doing a difference between two values. The bigger cost is doing the extra matrix math to do lighting in view space instead of tangent space. – Nicol Bolas Jul 19 '16 at 00:23
  • @NicolBolas one just needs to provide the tangent and store the bitangent's handedness in its fourth component, so just 4 bytes per vertex are needed, plus a simple cross product and a multiply in the vertex shader, `B=NxT.xyz*T.w` (see the sketch after this thread); since vertex shaders are rarely the bottleneck in modern applications, this seems like a fair trade-off. Yeah, derivatives are cheap (almost free on some hardware), but my emphasis was on the *per-pixel* aspect here; the `cotangent_frame` function has quite a few instructions besides the derivatives. – LJᛃ Jul 19 '16 at 02:52
  • Don't get me wrong, this is certainly a viable approach, but I highly doubt it's faster than deriving the matrix in the vertex shader and interpolating it. – LJᛃ Jul 19 '16 at 02:55
    @LJᛃ "*a simple cross product*" And if S and T are orthogonal, then that would be true. But since that's highly unlikely to be true on most meshes, then a [cross product is merely an *approximation* of the real answer](http://stackoverflow.com/a/15434343/734069). At least the derivative method is mathematically sound. – Nicol Bolas Jul 19 '16 at 02:59
  • @NicolBolas that's true, it shouldn't be that bad of an approximation for properly UV-mapped meshes, but still, point taken. – LJᛃ Jul 19 '16 at 12:51
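
For completeness, a sketch of the per-vertex variant discussed in this thread: a vec4 tangent attribute whose w component stores the bitangent's handedness, with the bitangent reconstructed in the vertex shader. The attribute and uniform names are made up:

attribute vec3 aPosition;
attribute vec3 aNormal;
attribute vec4 aTangent;   // xyz = tangent, w = handedness (+1 or -1)

uniform mat4 model;
uniform mat4 viewProjection;

varying vec3 vTangent;     // world-space frame, interpolated for the fragment shader
varying vec3 vBitangent;
varying vec3 vNormal;

void main()
{
    vec3 N = normalize(mat3(model) * aNormal);  // assuming no non-uniform scaling
    vec3 T = normalize(mat3(model) * aTangent.xyz);
    vec3 B = cross(N, T) * aTangent.w;          // reconstruct the bitangent

    vNormal    = N;
    vTangent   = T;
    vBitangent = B;

    gl_Position = viewProjection * model * vec4(aPosition, 1.0);
}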