
I'm working on a small graphics engine using OpenGL and I'm having some issues with my translation matrix. I'm using OpenGL 3.3, GLSL and C++. The situation is this: I have defined a small cube which I want to render on screen. The cube uses its own coordinate system, so I created a model matrix to be able to transform the cube. To make it a bit easier for myself, I started out with just a translation matrix as the cube's model matrix, and after a bit of coding I managed to make everything work: the cube appears on the screen. Nothing too special, but there is one thing about my translation matrix that I find a bit odd.

Now as far as I know, a translation matrix is defined as follows:

1, 0, 0, x
0, 1, 0, y
0, 0, 1, z
0, 0, 0, 1

However, this does not work for me. When I define my translation matrix this way, nothing appears on the screen. It only works when I define my translation matrix like this:

1, 0, 0, 0
0, 1, 0, 0
0, 0, 1, 0
x, y, z, 1

Now I've been over my code several times to find out why this is the case, but I can't seem to figure it out. Or am I just wrong, and does a translation matrix need to be defined like the transposed one above?

My matrices are defined as a one-dimensional array going from left to right, top to bottom.
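
To make that concrete: with that layout, the element at row r, column c ends up at index 4*r + c, so the flat array lines up with the matrix like this:

data[0]   data[1]   data[2]   data[3]
data[4]   data[5]   data[6]   data[7]
data[8]   data[9]   data[10]  data[11]
data[12]  data[13]  data[14]  data[15]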

Here is some of my code that might help:

//this is called just before cube is being rendered
void DisplayObject::updateMatrices()
{
    modelMatrix = identityMatrix();
    modelMatrix = modelMatrix * translateMatrix( xPos, yPos, zPos );

    /* update modelview-projection matrix */
    mvpMatrix = modelMatrix * (*projMatrix);
}

//this creates my translation matrix which causes the cube to disappear
const Matrix4 translateMatrix( float x, float y, float z )
{
    Matrix4 tranMatrix = identityMatrix();

    tranMatrix.data[3]  = x;
    tranMatrix.data[7]  = y;
    tranMatrix.data[11] = z;

    return Matrix4(tranMatrix);
}

This is my simple test vertex shader:

#version 150 core

in vec3 vPos;

uniform mat4 mvpMatrix;

void main()
{
    gl_Position = mvpMatrix * vec4(vPos, 1.0);
}

I've also run tests to check whether my matrix multiplication works, and it does: I * randomMatrix is still just randomMatrix.

I hope you guys can help. Thanks

EDIT:

This is how I send the matrix data to OpenGL:

void DisplayObject::render()
{
    updateMatrices();

    glBindVertexArray(vaoID);
    glUseProgram(progID);
    glUniformMatrix4fv( glGetUniformLocation(progID, "mvpMatrix"), 1, GL_FALSE, &mvpMatrix.data[0] );
    glDrawElements(GL_TRIANGLES, bufferSize[index], GL_UNSIGNED_INT, 0);
}

mvpMatrix.data is a std::vector.

– Krienie

2 Answers


For OpenGL

1, 0, 0, 0
0, 1, 0, 0
0, 0, 1, 0
x, y, z, 1

is the correct translation matrix, as it has to sit in memory. Why? OpenGL uses column-major matrix ordering, which is the transpose of the matrix you initially presented, which is in row-major ordering. Row-major ordering is used in most math textbooks and also in DirectX, so it is a common point of confusion for those new to OpenGL.

See: http://www.mindcontrol.org/~hplus/graphics/matrix-layout.html
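
For illustration, a version of the question's translateMatrix written for the column-major layout might look like this (a sketch reusing the Matrix4 and identityMatrix names from the question):

//translation stored in column-major order: the translation column
//occupies the last four elements of the flat 16-element array
const Matrix4 translateMatrixColumnMajor( float x, float y, float z )
{
    Matrix4 tranMatrix = identityMatrix();

    tranMatrix.data[12] = x;  // column 3, row 0
    tranMatrix.data[13] = y;  // column 3, row 1
    tranMatrix.data[14] = z;  // column 3, row 2

    return tranMatrix;
}

Alternatively, you can keep filling the matrix in row-major order and pass GL_TRUE as the transpose argument of glUniformMatrix4fv, which tells OpenGL to transpose the data as it is uploaded.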

  • Well that explains a lot then. Thanks a lot :) After a few minutes of searching for OpenGL and column-major ordering I also found this link, which tells the same thing: http://www.opengl.org/archives/resources/faq/technical/transformations.htm – Krienie Nov 08 '12 at 17:31
  • That is the one! I was looking for that page in a bit of a rush and didn't find it, but still wanted to back up my claim! – Stephan van den Heuvel Nov 08 '12 at 17:46
  • Your diagram is misleading and incorrect. The translation matrix would look the same regardless of language, and yours is not that matrix. Instead, just read matrices top-to-bottom, not left to right, in OpenGL, when array notation is involved. – Nov 09 '12 at 18:24
  • I am sorry if it is misleading. My diagram is supposed to represent the translation matrix as it must sit *in memory*, read as if it were an array declaration. I can make that more clear if that would help :) The concepts of *row-major* and *column-major* matrices are not made up. It actually depends on whether you are using vectors that are 4x1 or 1x4, as this changes the way the matrix needs to be written. – Stephan van den Heuvel Nov 09 '12 at 19:06
  • Memory isn't laid out two-dimensionally. You may have learned the concept of your representation matching hardware, but I have no such concept of arrays being written out left-to-right, then top-to-bottom. I don't see why you should overcomplicate the situation by perverting matrices, which have over a century's history of convention over computer science, in order to get them to match up with a faulty abstraction. – Nov 12 '12 at 19:02
  • You do know that matrices can be written both ways though, right? It depends on whether you are using column vectors versus row vectors. Column vectors are more prevalent now, but both representations have been used extensively and are mathematically valid. I am not 'perverting' anything, as both notations have been used for many years; the notation for memory being left to right, top to bottom is an artifact of how most languages declare arrays, like int c[] = {1, 2, 3, 4, 5, 6, 7, 8}; – Stephan van den Heuvel Nov 12 '12 at 19:41
  • @Jessy see 9.005 http://www.opengl.org/archives/resources/faq/technical/transformations.htm – kcbanner Nov 12 '12 at 19:43
  • This is an exceptionally confusing answer. Stating that OpenGL uses "column-major" ordering is factually wrong. There is no such thing. What has to be noted is 1) memory layout - basis vectors are laid out in memory SEQUENTIALLY, 2) the order of multiplication in OpenGL is matrix multiplied by a column vector (instead of row vector by matrix). Taking that into account, the notation choice in OpenGL documentation makes perfect sense. – JBeurer Nov 24 '16 at 03:01
  • The illustration is absolutely wrong. What is illustrated is not the actual translation matrix; what is displayed here is rather its layout in memory. – JBeurer Nov 24 '16 at 03:08
  • And when I say "order of multiplication" in OpenGL is matrix multiplied by a column vector, I'm referring to fixed-pipeline OpenGL. If you use your own vertex shader, you can have it either way and the only thing you have to pay attention to is the memory layout. – JBeurer Nov 24 '16 at 03:16
  • @JBeurer How are the basis vectors laid out sequentially when [3][0], [3][1] and [3][2], which are sequential in memory, are the translation values of the basis vectors? Also, how is multiplying (in OpenGL's case) ([0][0], [1][0], [2][0], [3][0]) by the vector more efficient? Those aren't sequential, so I thought it would be slower. – Zebrafish Jan 07 '18 at 02:53

You cannot simply swap the matrices in a matrix multiplication: in general, A*B is different from B*A. If you swap the order, you also have to transpose both matrices to get the (transposed) result back:

t(A * B) = t(B) * t(A)

try

void DisplayObject::updateMatrices()
{
    modelMatrix = identityMatrix();
    modelMatrix = translateMatrix( xPos, yPos, zPos ) * modelMatrix;

    /* update modelview-projection matrix */
    mvpMatrix = modelMatrix * (*projMatrix);
}
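
To see why the multiplication order matters in general, here is a small standalone sketch (using plain flat arrays and a hypothetical mul4 helper rather than the question's Matrix4 type): composing a translation with a scale gives a different translation column depending on the order.

#include <cstdio>

// out = a * b for 4x4 matrices stored row-major in flat arrays
static void mul4(const float a[16], const float b[16], float out[16])
{
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
        {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += a[4 * r + k] * b[4 * k + c];
            out[4 * r + c] = sum;
        }
}

int main()
{
    // row-major translation by (1, 2, 3) and uniform scale by 2
    const float T[16] = { 1,0,0,1,  0,1,0,2,  0,0,1,3,  0,0,0,1 };
    const float S[16] = { 2,0,0,0,  0,2,0,0,  0,0,2,0,  0,0,0,1 };

    float TS[16], ST[16];
    mul4(T, S, TS);
    mul4(S, T, ST);

    // the translation column differs: (1, 2, 3) for T*S vs (2, 4, 6) for S*T
    std::printf("T*S translation: %g %g %g\n", TS[3], TS[7], TS[11]);
    std::printf("S*T translation: %g %g %g\n", ST[3], ST[7], ST[11]);
}
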
– Gianluca Ghettini
  • You are right about the swapping, but that shouldn't matter if I multiply the translation matrix with the identity matrix, which I do. And to be sure: I've tested it and it doesn't work. Still no joy :( – Krienie Nov 08 '12 at 16:51