
As far as I know, when I use the column-major convention with OpenGL mathematically, I also have to use a column-major layout in memory. So I expect the columns of the LookAt matrix to represent the basis vectors (axes) of a coordinate system.

My matrix implementation is column-major in memory. But when I set the basis vectors of the camera coordinate system as the columns of the matrix, it does not work correctly:

Result.Elements[0 + 0*4] = Right.x;
Result.Elements[1 + 0*4] = Right.y;
Result.Elements[2 + 0*4] = Right.z;
    
Result.Elements[0 + 1*4] = Up.x;
Result.Elements[1 + 1*4] = Up.y;
Result.Elements[2 + 1*4] = Up.z;
    
Result.Elements[0 + 2*4] = Forward.x;
Result.Elements[1 + 2*4] = Forward.y;
Result.Elements[2 + 2*4] = Forward.z;
Result.Elements[3 + 3*4] = 1.0f;

Result.Elements[0 + 3*4] = -DotProduct(From, Right);
Result.Elements[1 + 3*4] = -DotProduct(From, Up);
Result.Elements[2 + 3*4] = -DotProduct(From, Forward);
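
Assuming the Elements[row + column*4] convention (which matches the column-major layout further down), this first block produces the following matrix, so the basis vectors end up in the columns (R, U, F abbreviate Right, Up, Forward, and · is the dot product):

Rx  Ux  Fx  -(From·R)
Ry  Uy  Fy  -(From·U)
Rz  Uz  Fz  -(From·F)
0   0   0    1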

But when I set the basis vectors as the rows of the matrix, it works exactly as expected (I still set the translation part of the matrix as a column):

Result.Elements[0 + 0*4] = Right.x;
Result.Elements[0 + 1*4] = Right.y;
Result.Elements[0 + 2*4] = Right.z;
    
Result.Elements[1 + 0*4] = Up.x;
Result.Elements[1 + 1*4] = Up.y;
Result.Elements[1 + 2*4] = Up.z;
    
Result.Elements[2 + 0*4] = Forward.x;
Result.Elements[2 + 1*4] = Forward.y;
Result.Elements[2 + 2*4] = Forward.z;
Result.Elements[3 + 3*4] = 1.0f;
    
Result.Elements[0 + 3*4] = -DotProduct(From, Right);
Result.Elements[1 + 3*4] = -DotProduct(From, Up);
Result.Elements[2 + 3*4] = -DotProduct(From, Forward);
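
With the same Elements[row + column*4] convention, this second block produces a matrix whose rows are the basis vectors:

Rx  Ry  Rz  -(From·R)
Ux  Uy  Uz  -(From·U)
Fx  Fy  Fz  -(From·F)
0   0   0    1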

I can't understand why this is the case.

I know that:

For programming purposes, OpenGL matrices are 16-value arrays with base vectors laid out contiguously in memory. The translation components occupy the 13th, 14th, and 15th elements of the 16-element matrix, where indices are numbered from 1 to 16.

A1 A5 A9 A13
A2 A6 A10 A14
A3 A7 A11 A15
A4 A8 A12 A16

So if I use column-major order with OpenGL, I have to use a column-major layout in memory (so that A13, A14, A15 are the translation components).
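
To make the index mapping concrete, here is a small hypothetical helper (not part of my code, just an illustration) showing where the translation components land with the Elements[row + column*4] convention:

// Hypothetical helper, only to illustrate the index mapping.
// Column-major storage: Elements[row + column*4].
void SetTranslation(mat4 &M, vec3 T)
{
    M.Elements[0 + 3*4] = T.x; // Elements[12], i.e. A13
    M.Elements[1 + 3*4] = T.y; // Elements[13], i.e. A14
    M.Elements[2 + 3*4] = T.z; // Elements[14], i.e. A15
}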

Here are the implementations of my matrix-matrix multiplication and cross product, in case they help:

mat4 operator*(mat4 A, mat4 B)
{
    mat4 Result;

    // Elements are stored column-major: Elements[row + column*4].
    // Result(row j, column i) = sum over e of A(row j, column e) * B(row e, column i).
    for (int i = 0; i < 4; i++)
    {
        for (int j = 0; j < 4; j++)
        {
            float Sum = 0.0f;
            for (int e = 0; e < 4; e++)
            {
                Sum += A.Elements[j + e*4] * B.Elements[e + i*4];
            }
            Result.Elements[j + i * 4] = Sum;
        }
    }

    return(Result);
}
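
As a sanity check for the multiplication above, multiplying by the identity should return the other operand unchanged if the Elements[row + column*4] indexing is consistent. A minimal sketch (Identity is a hypothetical helper, and I assume mat4 can be zero-initialized with {}):

// Hypothetical helper, only for checking the indexing convention.
mat4 Identity()
{
    mat4 Result = {};                // assumes mat4 zero-initializes with {}
    Result.Elements[0 + 0*4] = 1.0f;
    Result.Elements[1 + 1*4] = 1.0f;
    Result.Elements[2 + 2*4] = 1.0f;
    Result.Elements[3 + 3*4] = 1.0f;
    return(Result);
}

// For any mat4 M: M * Identity() and Identity() * M should both equal M.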

vec3 CrossProduct(vec3 A, vec3 B)
{
    // Standard cross product: Result = A x B.
    vec3 Result;
    Result.x = A.y*B.z - A.z*B.y;
    Result.y = A.z*B.x - A.x*B.z;
    Result.z = A.x*B.y - A.y*B.x;

    return(Result);
}
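
For context, this is roughly how such a camera basis is commonly derived with the cross product (a sketch only: Normalize, Target and WorldUp are hypothetical names, not part of my code, and the sign of Forward depends on the handedness convention):

vec3 Forward = Normalize(From - Target);                  // right-handed: camera looks along -Forward
vec3 Right   = Normalize(CrossProduct(WorldUp, Forward));
vec3 Up      = CrossProduct(Forward, Right);
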
  • I have a feeling the answer here may be applicable - https://stackoverflow.com/questions/17717600/confusion-between-c-and-opengl-matrix-order-row-major-vs-column-major – Mark Ingram Feb 05 '19 at 17:57
  • Why do you think the matrix is not working as needed? Did you make this conclusion based on the rendering results? Do you remember to inverse the camera matrix before using it? Do not forget that for orthogonal matrices the result of transposition and inversion is identical, it sometimes creates confusion. – Dmytro Dadyka Feb 05 '19 at 18:43
  • @MarkIngram, it helped a little bit, but it doesn't answer the question fully. I've edited the question to clarify. – justvg Feb 05 '19 at 20:17
  • @DmytroDadyka yes, I made that conclusion based on the rendering results. What do you mean by "Do you remember to inverse the camera matrix before using it?"? Why do I need to do that? – justvg Feb 05 '19 at 20:17
