Unfortunately, many tutorials describe the TBN matrix as a de facto requirement for any kind of normal mapping without going into much detail on why that is, which left me confused about one particular scenario.
Let's assume I need to apply bump/normal mapping to a simple quad on screen, which could later be transformed by its normal matrix.
If the quad's surface normal in "rest position", before any transformation, points exactly in the positive-Z direction (OpenGL convention), isn't it sufficient to transform the vector read from the normal map with the model matrix?
vec3 bumpnormal = texture2D(texture, Coord.xy).rgb * 2.0 - 1.0; // unpack from [0, 1] to [-1, 1]
bumpnormal = normalize(mat3(model) * bumpnormal); // assuming no non-uniform scaling occurred
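To illustrate why I think this should work: for a quad facing +Z with standard UVs, the tangent, bitangent and normal line up with the X, Y and Z axes, so the TBN matrix is the identity. A small Python sketch of that reasoning (the vectors and the sample normal are made-up example values):

```python
# For a quad facing +Z with tangent along +U and bitangent along +V,
# the TBN basis vectors are the standard axes, so TBN is the identity.
tangent   = (1.0, 0.0, 0.0)
bitangent = (0.0, 1.0, 0.0)
normal    = (0.0, 0.0, 1.0)

# Column-major 3x3 TBN matrix built from the three basis vectors.
TBN = [tangent, bitangent, normal]

def mat_mul_vec(m, v):
    """Multiply a column-major 3x3 matrix by a 3-vector."""
    return tuple(sum(m[c][r] * v[c] for c in range(3)) for r in range(3))

# A sample tangent-space normal read from a normal map (already unpacked).
bumpnormal = (0.2, 0.1, 0.95)

# The identity TBN leaves the vector unchanged, so in this case
# model * TBN * bumpnormal == model * bumpnormal.
assert mat_mul_vec(TBN, bumpnormal) == bumpnormal
```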
I do understand how things would change if we were computing the bump normal on a cube, without taking into account that different faces with the same texture coordinates actually have different orientations, which leads me to my next question.
Assuming an entire model uses only a single normal-map texture, with no texture coordinates repeated in different parts of the model, is it possible to skip storing the six floats of the tangent/bitangent vectors per vertex and to drop the computation of the TBN matrix altogether, while getting the same results by simply transforming the bump normal with the model matrix? If so, why isn't that the preferred solution?
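For reference, this is my understanding of what the per-vertex tangent data I'd be saving actually encodes: the usual per-triangle tangent/bitangent derivation from position and UV deltas. A minimal Python sketch with made-up triangle data:

```python
# Per-triangle tangent/bitangent from position and UV deltas
# (standard derivation, hypothetical example triangle).
p0, p1, p2 = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
uv0, uv1, uv2 = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)

e1 = tuple(b - a for a, b in zip(p0, p1))   # position edge 1
e2 = tuple(b - a for a, b in zip(p0, p2))   # position edge 2
du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]

f = 1.0 / (du1 * dv2 - du2 * dv1)           # inverse UV-area determinant
tangent   = tuple(f * (dv2 * a - dv1 * b) for a, b in zip(e1, e2))
bitangent = tuple(f * (du1 * b - du2 * a) for a, b in zip(e1, e2))

# For this axis-aligned triangle the tangent follows +U and the
# bitangent follows +V, matching the simple quad case above.
assert tangent == (1.0, 0.0, 0.0)
assert bitangent == (0.0, 1.0, 0.0)
```

My question is essentially whether, under the single-texture no-repetition assumption, this per-vertex data is redundant.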