I've written a fairly simplistic renderer of my own using OpenGL ES 2.0. Essentially, it's a single class that renders quads from a given sprite texture. Each quad object maintains a world transform and an object transform matrix, provides methods for transforming them over a given number of frames, and specifies texture offsets into the sprite sheet. The quad also keeps a list of transform operations to execute on its matrices. Each frame, the renderer reads these properties from every quad in its render list and sets up a VBO to draw them all.
For example:
Quad* q1 = new Quad();
Quad* q2 = new Quad();
q1->translate(vector3( 0.1f,  0.3f, 0.0f), 30); // Move the quad to the right and up over 30 frames.
q2->translate(vector3(-0.1f, -0.3f, 0.0f), 30); // Move the quad down and to the left over 30 frames.
Renderer renderer;
renderer.addQuads({q1, q2});
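Under the hood, the transform queue boils down to something like this. This is a deliberately stripped-down sketch (it animates a bare position vector instead of the full world/object matrices, and the names here aren't my actual class members):

```cpp
#include <vector>

struct vec3 { float x, y, z; };

// One queued operation: a per-frame delta applied for a fixed number of frames.
struct TransformOp {
    vec3 delta;
    int framesLeft;
};

class Quad {
public:
    // Queue a translation spread evenly across `frames` frames.
    void translate(vec3 total, int frames) {
        ops.push_back({{total.x / frames, total.y / frames, total.z / frames},
                       frames});
    }

    // Advance one frame: apply every pending op's delta, dropping finished ops.
    void update() {
        for (auto it = ops.begin(); it != ops.end();) {
            position.x += it->delta.x;
            position.y += it->delta.y;
            position.z += it->delta.z;
            if (--it->framesLeft == 0)
                it = ops.erase(it);
            else
                ++it;
        }
    }

    vec3 position{0.0f, 0.0f, 0.0f};

private:
    std::vector<TransformOp> ops;
};
```

After 30 calls to `update()`, a quad given `translate({0.1f, 0.3f, 0.0f}, 30)` ends up offset by roughly (0.1, 0.3, 0), and the op is removed from the queue.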
The real thing is more complex than this, but that's the basic idea.
Implementation-wise, each frame the renderer transforms the base vertices of each quad according to its pending transform operations, loads them all into a single VBO (along with per-vertex alpha), and hands the buffer to a shader program that draws every quad at once.
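The per-frame buffer building is roughly this shape (a simplified sketch: the struct and function names are made up for illustration, the GL upload call is only indicated in a comment, and the layout here is just one plausible interleaving of position, UV, and alpha):

```cpp
#include <vector>

// Flattened per-quad state after this frame's transforms have been applied.
struct QuadInstance {
    float x, y, z;          // animated world position of the quad's center
    float u0, v0, u1, v1;   // sub-rectangle of the sprite sheet
    float alpha;            // per-quad alpha passed through to the shader
    float halfW, halfH;     // half extents of the quad
};

// Expand each quad into 6 vertices (two triangles), 6 floats per vertex:
// x, y, z, u, v, alpha. The result is what would be uploaded each frame
// with glBufferData(GL_ARRAY_BUFFER, ...) before a single glDrawArrays call.
std::vector<float> buildVertexBuffer(const std::vector<QuadInstance>& quads) {
    std::vector<float> out;
    out.reserve(quads.size() * 6 * 6);
    for (const auto& q : quads) {
        // Corner offsets and matching UVs, in triangle order:
        // (BL, BR, TL) then (BR, TR, TL).
        const float px[6] = {-q.halfW,  q.halfW, -q.halfW,
                              q.halfW,  q.halfW, -q.halfW};
        const float py[6] = {-q.halfH, -q.halfH,  q.halfH,
                             -q.halfH,  q.halfH,  q.halfH};
        const float us[6] = {q.u0, q.u1, q.u0, q.u1, q.u1, q.u0};
        const float vs[6] = {q.v1, q.v1, q.v0, q.v1, q.v0, q.v0};
        for (int i = 0; i < 6; ++i) {
            out.push_back(q.x + px[i]);
            out.push_back(q.y + py[i]);
            out.push_back(q.z);
            out.push_back(us[i]);
            out.push_back(vs[i]);
            out.push_back(q.alpha);
        }
    }
    return out;
}
```

Because everything lands in one interleaved buffer, the whole batch goes out in a single draw call, with `glVertexAttribPointer` strides picking out the position, UV, and alpha attributes.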
This obviously isn't what I would call a rendering engine, but it performs a similar task, just for 2D quads instead of general 3D geometry. I'm curious whether I'm on the right track toward building a makeshift rendering engine. I agree that in most cases it's sensible to start with an established engine, but I prefer to understand how things are implemented from the ground up, rather than learn a prebuilt tool first and only later dig into how it works.