Is it more efficient for me to render individual GL_TRIANGLES elements using glDrawElements, or to draw the elements to a texture via a framebuffer and then composite that texture to build my scene? Either way this will happen every time the graphics object is modified, and the parent objects containing this object will also have to be updated each time. Thanks. I'm just creating some higher-level objects to draw large scenes efficiently. The idea is to have a graphics tree and only redraw parts of the tree as required, from an individual cube within my scene up to the root.
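The "only redraw parts of the tree as required" idea is usually built on dirty flags that propagate from a modified node up to the root. A minimal sketch, assuming a `Node` type invented here for illustration (the real scene graph would also hold geometry, transforms, and cached textures):

```c
#include <stdbool.h>
#include <stddef.h>

/* A node in the graphics tree; `dirty` means this node or some
 * descendant has changed and its cached result must be rebuilt. */
typedef struct Node {
    struct Node *parent;
    bool dirty;
} Node;

/* Mark a node and all of its ancestors dirty.  Stops early when it
 * reaches a node that is already flagged, so repeated edits in the
 * same subtree cost almost nothing. */
void mark_dirty(Node *n) {
    while (n != NULL && !n->dirty) {
        n->dirty = true;
        n = n->parent;
    }
}

/* Clear the flag once a subtree has been re-rendered. */
void mark_clean(Node *n) {
    n->dirty = false;
}
```

A render pass would then walk the tree, reuse the cached texture for any clean subtree, and re-render only dirty ones before clearing their flags.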
Also, any suggestions for how I can detect when the entire scene from the chosen perspective is opaque, so I don't have to render any more distant cubes that make up the scene? (Think Minecraft.) – Harry Patrick Jun 01 '15 at 14:47
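One cheap first step toward the comment's "don't render hidden cubes" question can be sketched without any occlusion queries at all: in a Minecraft-like voxel grid, a cube whose six face-neighbours are all opaque can never show a visible face, so it can be skipped before any draw call is issued. The grid size and layout below are assumptions for illustration:

```c
#include <stdbool.h>

#define W 4  /* grid width/height/depth, small for the sketch */

/* Opacity lookup; anything outside the grid counts as open air. */
static bool opaque_at(const bool g[W][W][W], int x, int y, int z) {
    if (x < 0 || y < 0 || z < 0 || x >= W || y >= W || z >= W)
        return false;
    return g[x][y][z];
}

/* A cube is fully occluded when all six face-neighbours are opaque. */
bool cube_hidden(const bool g[W][W][W], int x, int y, int z) {
    return opaque_at(g, x - 1, y, z) && opaque_at(g, x + 1, y, z) &&
           opaque_at(g, x, y - 1, z) && opaque_at(g, x, y + 1, z) &&
           opaque_at(g, x, y, z - 1) && opaque_at(g, x, y, z + 1);
}
```

This only handles the trivially-buried case; detecting that a whole distant region is hidden by nearer opaque cubes needs real occlusion culling (e.g. GPU occlusion queries or a software coverage buffer).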
2 Answers
The process of doing updates only on 'damaged' parts of the screen is very tricky to do properly (and efficiently) in an interactive application (e.g. a "Minecraft"-like game) where the look-at direction changes frequently. It will likely never be more time- or memory-efficient than simply rendering the entire scene every frame.
Consider the (frequent) case of the camera transformation changing. Every pixel on the screen would need to be updated. In this case there is no point in detecting 'damaged' portions - the whole screen is damaged, and any time spent on detection is wasted.
In the case where the camera does not change, the calculation of what needs to be rendered can be difficult and expensive. You would need to know which objects moved, where they moved to, and which objects they overlapped in both positions (all in screen space). Screen-space positions can be particularly difficult to compute if you have vertex shaders that move vertices programmatically (e.g. a GPU-only particle-system simulation).
My suggestion would be to look for optimization opportunities elsewhere.

You can't. If you move the camera, the scene needs to be redrawn (which is what modern real-time computer graphics is about).
Note that there are no exceptions with a perspective projection: no particular camera motion saves you from redrawing. (Well, if you flip the camera upside down or mirror it left/right you could just flip the final image, but that's not a useful motion anyway.)
I suggest learning in depth what happens behind the scenes on a GPU, so you get a better feel for what is and isn't possible with one. (The claim above could probably be proven formally in terms of image transformations, I suppose.)
If you accept "distortions", then you could use the impostor technique. Suppose you render a batch of 100x100x100 cubes at once. You could replace those cubes with one big cube (which has only 3 visible faces, and you have to manually update those 3 faces). It will look like a prism, and if you move fast you'll have the impression that the horizon is filled with giant crystals (which could be a nice effect... if wanted).
But I highly suspect that would be slower than rendering the cube batch. (Try it and you will see.)
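The geometry saving the impostor idea targets can be put in numbers with some back-of-the-envelope arithmetic, assuming 12 triangles per cube (6 faces, 2 triangles each) and 3 visible faces on the impostor; as the answer notes, the face-texture updates may well cost more than drawing the batch:

```c
/* Triangles submitted for a full NxNxN batch of cubes. */
long batch_triangles(long cubes_per_axis) {
    long n = cubes_per_axis * cubes_per_axis * cubes_per_axis;
    return n * 12;  /* 6 faces * 2 triangles per cube */
}

/* Triangles for one impostor cube: 3 visible faces, 2 triangles each. */
long impostor_triangles(void) {
    return 3 * 2;
}
```

For the 100x100x100 batch above, that is 12,000,000 triangles versus 6 - but the saving only pays off if the 3 impostor-face textures rarely need re-rendering.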
