For a while now I've been putting together an OpenGL program, and I've reached the point where I'm coding my transformation matrices: the model transform, the camera (view) transform, and the perspective (projection) transform. So far, I've been computing the model transform on the CPU, sending it to a uniform, and multiplying each vertex by it in the vertex shader. Recently I added the camera and perspective transforms to the matrix that gets sent to that uniform.
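For reference, here's roughly what my current setup looks like (I'm using GLM for the matrix math; the names `u_mvp` and `uploadMVP` are just my own):

```cpp
#include <glad/glad.h>  // or whichever GL loader you use
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Vertex shader (GLSL): multiplies each vertex by the combined matrix.
const char* kVertexShader = R"(
    #version 330 core
    layout(location = 0) in vec3 a_position;
    uniform mat4 u_mvp;
    void main() {
        gl_Position = u_mvp * vec4(a_position, 1.0);
    }
)";

// Called once per frame: build model, view, and projection on the CPU,
// combine them, and upload the result to the shader's uniform.
void uploadMVP(GLuint program, float angle, float aspect) {
    glm::mat4 model = glm::rotate(glm::mat4(1.0f), angle,
                                  glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 view  = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),  // camera position
                                  glm::vec3(0.0f),              // look-at target
                                  glm::vec3(0.0f, 1.0f, 0.0f)); // up vector
    glm::mat4 proj  = glm::perspective(glm::radians(45.0f), aspect, 0.1f, 100.0f);
    glm::mat4 mvp   = proj * view * model;

    GLint loc = glGetUniformLocation(program, "u_mvp");
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(mvp));
}
```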
But now a lot of things aren't working, and there are a few things I can't understand:
Firstly, this webpage says that OpenGL automatically divides everything by the Z component of the position (for use with the perspective transform), but I can't figure out where in my code, or in the pipeline, that division actually happens.
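If I've understood the page, it's describing something like the step below, which I can't find anywhere in my own code. (This is my paraphrase using GLM; the page says "Z component", and as far as I can tell a standard perspective matrix copies the eye-space -z into w, so I'm assuming it's the same divide.)

```cpp
#include <glm/glm.hpp>

// My paraphrase of the divide the page describes, done by hand:
glm::vec3 toNDC(const glm::mat4& mvp, const glm::vec3& position) {
    glm::vec4 clip = mvp * glm::vec4(position, 1.0f); // what gl_Position holds
    return glm::vec3(clip) / clip.w; // the divide I can't locate; for a
                                     // perspective matrix, w_clip == -z_eye
}
```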
Secondly, there are a number of resources that mention OpenGL functions such as glFrustumf(), glMatrixMode(), glPushMatrix(), glPopMatrix(), glLoadIdentity(), glRotatef(), etc. All of these seem to pass and modify matrices inside OpenGL itself, so that I wouldn't need to bother with them in the vertex shader at all.
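For example, tutorials written in that style contain code along these lines (legacy fixed-function OpenGL, with no shaders or uniforms in sight; drawObject() stands in for the actual draw calls):

```cpp
#include <GL/gl.h>

extern void drawObject(); // stand-in for whatever draw calls follow

void drawScene(float angle) {
    glMatrixMode(GL_PROJECTION);                 // select the projection stack
    glLoadIdentity();
    glFrustum(-1.0, 1.0, -1.0, 1.0, 1.0, 100.0); // perspective from parameters

    glMatrixMode(GL_MODELVIEW);                  // select the modelview stack
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -5.0f);             // "camera" transform
    glPushMatrix();                              // save the current matrix
    glRotatef(angle, 0.0f, 1.0f, 0.0f);          // per-object model transform
    drawObject();                                // the pipeline applies the matrices
    glPopMatrix();                               // restore for the next object
}
```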
So now I'm thoroughly lost as to how transformations are done in OpenGL. Do we compute them ourselves and multiply them in the vertex shader, do we send parameters to OpenGL and let it do all the work behind the API, or is it something else entirely?
Thanks