
I'm attempting to create a game engine using LWJGL3 (OpenGL 4.1, NVIDIA-10.8.14) in Java, but I seem to have run into a *small* problem...

When I attempt to translate the square, instead of actually moving as expected, it ends up stretching.

(GIF: the square stretches upward instead of moving)

LWJGL doesn't have a math class, so I had to create my own Matrix and TransformationMatrix classes. I've combined them into one snippet to reduce the size of this post.

My assumption is that the error lies somewhere in the TransformationMatrix's translate() function:

public TransformationMatrix translate(Vector3f vector){
    super.m30 += ((super.m00 * vector.getX()) + (super.m10 * vector.getY()) + (super.m20 * vector.getZ()));
    super.m31 += ((super.m01 * vector.getX()) + (super.m11 * vector.getY()) + (super.m21 * vector.getZ()));
    super.m32 += ((super.m02 * vector.getX()) + (super.m12 * vector.getY()) + (super.m22 * vector.getZ()));
    super.m33 += ((super.m03 * vector.getX()) + (super.m13 * vector.getY()) + (super.m23 * vector.getZ()));

    return this;
}

I've played around with it, but I haven't been able to get any positive results from it.

public class Matrix4f {

    public float 
        m00, m01, m02, m03,
        m10, m11, m12, m13,
        m20, m21, m22, m23,
        m30, m31, m32, m33;

    public FloatBuffer store(FloatBuffer buffer){
        buffer.put(this.m00);
        buffer.put(this.m10);
        buffer.put(this.m20);
        buffer.put(this.m30);
        buffer.put(this.m01);
        buffer.put(this.m11);
        buffer.put(this.m21);
        buffer.put(this.m31);
        buffer.put(this.m02);
        buffer.put(this.m12);
        buffer.put(this.m22);
        buffer.put(this.m32);
        buffer.put(this.m03);
        buffer.put(this.m13);
        buffer.put(this.m23);
        buffer.put(this.m33);

        return buffer;
    }


    ////////////////////////////
    //                        //
    //  TRANSFORMATION MATRIX //
    //                        //
    ////////////////////////////   


    public class TransformationMatrix extends Matrix4f{

    public TransformationMatrix(Vector3f translation){
        this.setIdentity();
        this.translate(translation);
    }

    public TransformationMatrix setIdentity(){
        super.m00 = 1.0f;
        super.m01 = 0.0f;
        super.m02 = 0.0f;
        super.m03 = 0.0f;
        super.m10 = 0.0f;
        super.m11 = 1.0f;
        super.m12 = 0.0f;
        super.m13 = 0.0f;
        super.m20 = 0.0f;
        super.m21 = 0.0f;
        super.m22 = 1.0f;
        super.m23 = 0.0f;
        super.m30 = 0.0f;
        super.m31 = 0.0f;
        super.m32 = 0.0f;
        super.m33 = 1.0f;

        return this;
    }

    public TransformationMatrix translate(Vector3f vector){
            super.m30 += ((super.m00 * vector.getX()) + (super.m10 * vector.getY()) + (super.m20 * vector.getZ()));
            super.m31 += ((super.m01 * vector.getX()) + (super.m11 * vector.getY()) + (super.m21 * vector.getZ()));
            super.m32 += ((super.m02 * vector.getX()) + (super.m12 * vector.getY()) + (super.m22 * vector.getZ()));
            super.m33 += ((super.m03 * vector.getX()) + (super.m13 * vector.getY()) + (super.m23 * vector.getZ()));

            return this;
    }
    }
}

My vertex shader contains

#version 400 core

in vec3 position;
in vec2 texture;
out vec2 texture_coords;
uniform mat4 transformationMatrix;

void main(void){
    gl_Position = transformationMatrix * vec4(position, 1.0);
    texture_coords = texture;
}

And fragment shader contains

#version 400 core

//Variables...

void main(void){
    out_color = texture(textureSampler, texture_coords);
} 

The square is rendered using the following points and indices

//stored in the vertex shader's "position"
float[] positions = {
    -0.5f,0.5f,0.0f,    
    -0.5f,-0.5f,0.0f,   
    0.5f,-0.5f,0.0f,    
    0.5f,0.5f,0.0f,     
};

//bound using
//
//int id = GL15.glGenBuffers();
//GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, id);
//GL15.glBufferData(GL15.GL_ELEMENT_ARRAY_BUFFER, toIntBuffer(indices), GL15.GL_STATIC_DRAW);
int[] indices = {
    0,1,3,  
    3,1,2,
};

//stored in the vertex shader's "texture"
float[] textureCoords = {
    0,0,
    0,1,
    1,1,
    1,0,
};

and then translated using TransformationMatrix.translate() (in the above GIF, it's being translated by <0.0f, 0.01f, 0.0f>). The square is rendered using

public void render(){
    GL30.glBindVertexArray(modelID);
    GL20.glEnableVertexAttribArray(0);
    GL20.glEnableVertexAttribArray(1);


    //position begins at <0, 0, 0>, and is incremented by <0, 0.01f, 0>
    //every frame in the above gif
    TransformationMatrix matrix = new TransformationMatrix(
            position
    );

    //load the transformation matrix to the vertex shader
    FloatBuffer buffer = matrix.store(BufferUtils.createFloatBuffer(16));
    buffer.flip();
    //location being the location of the "transformationMatrix" in the
    //vertex shader
    GL20.glUniformMatrix4fv(location, false, buffer);

    GL13.glActiveTexture(GL13.GL_TEXTURE0);
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
    //vertexCount is indices.length (6)
    GL11.glDrawElements(GL11.GL_TRIANGLES, vertexCount, GL11.GL_UNSIGNED_INT, 0); 

    GL20.glDisableVertexAttribArray(1);
    GL20.glDisableVertexAttribArray(0);
    GL30.glBindVertexArray(0);
}

So far, I've tried playing around with TransformationMatrix.translate(), and I've double, triple, and quadruple checked that I have the correct code in these classes.

One thing I noticed is that changing the translate() method to add to m03, m13, and m23 instead of m30, m31, and m32 respectively makes the square translate up, scale down, and then translate down:

(GIF: the square translates up, shrinks, then translates down)

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

is called before rendering each frame.

Jojodmo
  • Make sure your matrix storage layout matches the one you expect. You have not shown how you upload the matrices to the GL, but you are using matrix * vector order in the shader, so in the default case of using `GL_FALSE` for the _transpose_ parameter for `glUniformMatrix`, the matrix should be expected to be in column major order. What's suspicious in your matrix member names is that you seem to use "mColumnRow", while classical mathematical notation would be "mRowColumn". – derhass Dec 27 '15 at 00:28
  • @derhass, lol; local_to_world * local_position was my bug; switched the order and it works now. Thanks! – Artorias2718 Sep 27 '18 at 05:51

1 Answer


It's more to do with your matrices. Remember that in a 3D transformation matrix, the translation belongs in the 4th column: its 1st, 2nd, and 3rd rows hold the x, y, and z values. Your translate() writes each value into more than one cell, which of course distorts the translation. You're overcomplicating transformations; all you need to do is take the 4th column of your matrix at its 1st, 2nd, and 3rd rows and increment those values by x, y, and z respectively. Using your matrix:

    m00, m01, m02, m03,
    m10, m11, m12, m13,
    m20, m21, m22, m23,
    m30, m31, m32, m33;

m03, m13 and m23 would be where you want your translations for x, y and z values respectively. To translate something 100 in x, 200 in y and 300 in z, you'd do this:

    1.0, 0.0, 0.0, 100,
    0.0, 1.0, 0.0, 200,
    0.0, 0.0, 1.0, 300,
    0.0, 0.0, 0.0, 1.0;
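As an illustration of that layout, here is a minimal, hypothetical Java sketch (Translation4f is an illustrative helper, not one of the asker's classes) that stores a 4x4 matrix row-major and writes the translation into the 4th column:

```java
// Hypothetical sketch: a row-major 4x4 matrix where the translation
// lives in the 4th column (rows 0-2), matching the layout shown above.
class Translation4f {
    // Row-major storage: element m[row][col]
    final float[][] m = new float[4][4];

    Translation4f() {
        // Start from the identity matrix
        for (int i = 0; i < 4; i++) m[i][i] = 1.0f;
    }

    // Place the translation in the 4th column, rows 0, 1 and 2
    Translation4f setTranslation(float x, float y, float z) {
        m[0][3] = x;
        m[1][3] = y;
        m[2][3] = z;
        return this;
    }

    public static void main(String[] args) {
        Translation4f t = new Translation4f().setTranslation(100f, 200f, 300f);
        // Prints the same matrix shown in the answer, one row per line
        for (float[] row : t.m) {
            StringBuilder sb = new StringBuilder();
            for (float v : row) sb.append(v).append(' ');
            System.out.println(sb.toString().trim());
        }
    }
}
```
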

What your translation function is doing, I do not know. It looks like you're building a translation matrix and multiplying it into the current one, which makes your code hard to follow. It isn't necessary, either: your function already returns a matrix, which implies this kind of usage:

playerPosition = translate(playerPosition);

I don't see anything wrong with your rendering code, nor your shaders. So it must have something to do with your matrices. It's just a case of establishing your matrix ordering ("majorness"). Remember, OpenGL reads everything column-major: it will read m00, then m10, then m20, and so on, and build its own matrix out of those values.
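To make that ordering concrete, here is a small, hypothetical sketch (toColumnMajor is an illustrative helper, not part of the asker's code) that flattens a row-major 4x4 matrix into the column-major sequence glUniformMatrix4fv expects when its transpose parameter is false:

```java
// Hypothetical sketch: flatten a row-major 4x4 matrix into column-major
// order, i.e. m00, m10, m20, m30, then m01, m11, ... column by column,
// which is how OpenGL reads the buffer when transpose == false.
class ColumnMajor {
    static float[] toColumnMajor(float[][] rowMajor) {
        float[] out = new float[16];
        int i = 0;
        for (int col = 0; col < 4; col++) {
            for (int row = 0; row < 4; row++) {
                out[i++] = rowMajor[row][col];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // The translation matrix from earlier, written row-major
        float[][] translation = {
            {1f, 0f, 0f, 100f},
            {0f, 1f, 0f, 200f},
            {0f, 0f, 1f, 300f},
            {0f, 0f, 0f, 1f},
        };
        float[] buf = toColumnMajor(translation);
        // After flattening, the translation sits at indices 12, 13 and 14,
        // which is where GL expects it in a column-major upload
        System.out.println(buf[12] + " " + buf[13] + " " + buf[14]);
    }
}
```
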

Poriferous
  • Yep, the issue was with values being stored in the wrong columns. When loading the transformation matrix, I changed the `transpose` parameter of `GL20.glUniformMatrix4fv` to `true` (in the last code snippet), so it looks like this now: `GL20.glUniformMatrix4fv(location, true, buffer)`. Thank you! – Jojodmo Dec 27 '15 at 01:21
  • @Jojodmo As much as I appreciate that solution, it's not practical given the way you create your matrices. You would save some CPU time by building your matrices in column-major order directly, and having OpenGL transpose matrices for you also costs some GPU time. Regardless, these are the things to take into account when you are making a game engine. I wish you all the best with it. Mark my answer as solved, and I'm glad I've been of help! – Poriferous Dec 27 '15 at 01:27