
I read about DynamicVertexBuffer, and how it's supposed to be better for data that changes often. I have a world built up by cubes, and I need to store the cubes' vertices in this buffer to draw them to the screen.

However, not all cubes have vertices (some are air, which is transparent), and not all faces of the cubes need to be drawn either (those that face each other are hidden), so how do I keep track of which vertices are stored where in the buffer? Also, certain faces need to be drawn last, namely the ones with transparency in them (like glass or leaves), and these faces also need to be drawn in back-to-front order so they don't mess up the alpha blending.

If all of these vertices are stored arbitrarily in this buffer, how do I know which vertices are where?

Also, the number of vertices can change, but the DynamicVertexBuffer doesn't seem very dynamic to me, since I can't change its size at all. Do I have to recreate the buffer every time I need to add or remove faces?

Bevin

1 Answer


Sounds like you are approaching this the wrong way, assuming you have anything more than a trivial number of cubes in your world. You should store the world (and its cubes) in a custom data structure that lets you rapidly determine which cubes (and faces) are visible, based on the rules of your world, from a given point looking in a given direction.

Then, each time you render a scene, generate batches of vertex buffers containing just those faces. Don't use vertex buffers as the basis for storing the entire geometry of your world: vertex buffers are a rendering tool, not a scene-graph tool.
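To illustrate the idea, here is a minimal sketch in Python (the actual project would be XNA/C#, but the logic is the same). The names `visible_faces` and `build_frame_faces` are mine, not from any API, and it only handles solid-vs-air culling, not the transparency ordering from the question:

```python
# Face directions for an axis-aligned cube.
FACE_NORMALS = {
    "+x": (1, 0, 0), "-x": (-1, 0, 0),
    "+y": (0, 1, 0), "-y": (0, -1, 0),
    "+z": (0, 0, 1), "-z": (0, 0, -1),
}

def visible_faces(world, pos):
    """Yield the faces of the cube at pos whose neighbour cell is air.

    world is a dict mapping (x, y, z) -> block type; absent keys are air.
    """
    x, y, z = pos
    for name, (dx, dy, dz) in FACE_NORMALS.items():
        if (x + dx, y + dy, z + dz) not in world:  # neighbour is air
            yield (pos, name)

def build_frame_faces(world):
    """Rebuilt every frame: only faces that survive culling get uploaded."""
    faces = []
    for pos in world:
        faces.extend(visible_faces(world, pos))
    return faces

# Two cubes touching along x: the two shared faces are culled (12 - 2 = 10).
world = {(0, 0, 0): "stone", (1, 0, 0): "stone"}
faces = build_frame_faces(world)
```

The list returned by `build_frame_faces` is what you would turn into vertices and upload to the vertex buffer for that frame.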

These kinds of large-scale visibility decisions are much faster to make in code than on the GPU. For a very simple example: if you are sitting at the origin looking along +x, you can immediately ignore all cubes in the -ve x direction.
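That "ignore everything behind you" test is just a half-space check against the look direction. A sketch, again in Python with made-up names:

```python
def in_front(camera_pos, look_dir, cube_pos):
    """True if cube_pos lies in the half-space the camera is facing.

    dot(cube - camera, look) > 0 means the cube is ahead of the camera.
    """
    d = sum((c - p) * l for c, p, l in zip(cube_pos, camera_pos, look_dir))
    return d > 0

# Camera at the origin looking along +x: the cube at x = -5 is dropped
# before any per-face work happens.
cubes = [(-5, 0, 0), (3, 0, 0), (10, 2, 1)]
ahead = [c for c in cubes if in_front((0, 0, 0), (1, 0, 0), c)]
```

A real engine would use a full view frustum rather than a single plane, but the principle of rejecting whole cubes cheaply in code is the same.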

For a more complete example, search for octree rendering. That kind of rendering would match your world layout quite nicely.
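The point of the octree here is that one visibility test can reject a whole subtree of cubes at once. A minimal sketch of that pruning walk, with the `Node` structure and `collect_visible` names invented for illustration:

```python
class Node:
    """One octree node covering an axis-aligned cubic region of the world."""
    def __init__(self, origin, size):
        self.origin = origin    # min corner of the region, (x, y, z)
        self.size = size        # edge length of the region
        self.children = []      # up to 8 child nodes, each half the size
        self.cubes = []         # world cubes stored at this node

def collect_visible(node, is_region_visible, out):
    """Depth-first walk that skips entire subtrees with a single test."""
    if not is_region_visible(node.origin, node.size):
        return                  # prune: nothing below this node is visible
    out.extend(node.cubes)
    for child in node.children:
        collect_visible(child, is_region_visible, out)

# Toy tree: two children, and a visibility rule that only accepts x < 8,
# so the right-hand subtree is rejected without visiting its cubes.
root = Node((0, 0, 0), 16)
left, right = Node((0, 0, 0), 8), Node((8, 0, 0), 8)
left.cubes = ["a"]
right.cubes = ["b"]
root.children = [left, right]
seen = []
collect_visible(root, lambda origin, size: origin[0] < 8, seen)
```

In practice `is_region_visible` would be a frustum-vs-box test, but the structure of the traversal is what matters.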

Final tip - when I say generate batches of vertex buffers, I mean batch your cubes together in ways that minimize changes to the state of the GPU (e.g. same texture, same shader, etc). Minimizing GPU state changes is key to optimizing the rendering - once you've gone as far as you can with culling faces from the render in the first place.
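Batching by state boils down to grouping faces by a key made of everything that forces a GPU state change. A hedged sketch (the `texture`/`shader` fields and `batch_by_state` name are illustrative, not any real API):

```python
from collections import defaultdict

def batch_by_state(faces):
    """Group faces by their GPU state so each group becomes one draw call."""
    batches = defaultdict(list)
    for face in faces:
        key = (face["texture"], face["shader"])  # anything that changes state
        batches[key].append(face)
    return batches

# Three faces, but only two distinct states: two draw calls instead of three.
faces = [
    {"texture": "grass", "shader": "opaque", "id": 1},
    {"texture": "stone", "shader": "opaque", "id": 2},
    {"texture": "grass", "shader": "opaque", "id": 3},
]
batches = batch_by_state(faces)
```

Since the questioner's textures all come from one texture map, in their case the key might reduce to just opaque vs. transparent, with the transparent batch drawn last.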

James Gaunt
  • I think you meant "ignore all faces in the -ve x direction". Anyway, thanks for this info. The textures are all taken from the same texture map, so that isn't a problem. But say that I add a cube to the world, and this cube is placed somewhere so it needs rendering. Where do I add the vertices? How can I inform the GPU about this new cube, without modifying the size of the place where all other vertices are stored? I will look up octree rendering, I have seen quadtrees and octrees mentioned. – Bevin Jan 15 '11 at 23:21
  • I did mean ignore all cubes in the -ve x direction, assuming that cubes are a higher level of the data structure than faces. If you can ignore a cube then you can automatically ignore all 6 faces - so it's quicker to drop cubes first, then when you have your set of potentially visible cubes you try to drop faces. – James Gaunt Jan 15 '11 at 23:29
  • You're still thinking of vertex buffers as persistent across frames. In most engines they aren't. You populate the vertex buffers from scratch every frame. Sure you can allocate space - but you rebuild the data. The potentially visible set of faces will change every frame. If you think about it you might have a world with millions of cubes, but each frame you want to render as few faces as possible - maybe just hundreds - the trick is in getting the size of the vertex buffers as small as possible each and every frame. – James Gaunt Jan 15 '11 at 23:31
  • There may be some exceptions to this for small pieces of mobile geometry - for example a moving actor - where it might make sense to keep all the vertices in a buffer and let the shader do the hard work. But for large sprawling world geometry like your cubes you should probably approach the problem differently. – James Gaunt Jan 15 '11 at 23:33
  • Are these cubes regular - all the same size - all axis aligned at the same spacing? In this case you wouldn't even have vertices in your custom data structure - you don't need them until you come to render that particular cube. In this case there are loads of optimisations possible - don't pass cube faces in the vertex buffer - just pass the centre of each cube as one vertex and have the shader work out the visible faces. – James Gaunt Jan 15 '11 at 23:36
  • @James Gaunt - I think I get it. Well, I already have culling for faces that face each other (and only if both cubes are opaque), and cubes that are completely surrounded are not drawn at all. What I don't understand is how to make the GPU aware of the changes in vertices. Where and how do I actually store the vertices? EDIT: Yikes, you added a lot of stuff there. – Bevin Jan 15 '11 at 23:37
  • See the answer just above yours - I think you should be calculating them on the fly (it's very cheap if they are axis aligned), and possibly calculating them on the GPU - you'll need to try it out to see if that is quicker. – James Gaunt Jan 15 '11 at 23:38
  • Yes, all of the cubes are the same size (except one, a half-block), and they all have integer positions in the space. I know absolutely nothing about shaders or HLSL. Can the shader really deduce which faces are visible from just a single vertex? How does it know about the surrounding cubes? – Bevin Jan 15 '11 at 23:39
  • You might need to experiment to determine what is fastest. You can pass one vertex to a shader, which is basically saying draw a cube around this point. The shader knows the size and orientation of the cube so it can work out the faces and eliminate those pointing away from the camera. You lose the surrounding cube culling, but you can get it back by also passing some flags - assuming these are precalculated so cheaply available. What works best all depends on how dynamic the scene is. But if you don't want to get into HLSL just calculate the faces in code only when you need them. – James Gaunt Jan 15 '11 at 23:44
  • Alright, I'll keep away from HLSL for the time being. But I still don't understand where I am supposed to store the vertices that need to be sent to the GPU. Do I start with a VertexBuffer of arbitrary size, and then resize it and reallocate all of the vertices when needed? – Bevin Jan 15 '11 at 23:51
  • Your scene graph code determines the potentially visible set of faces, create a dynamic vertex buffer with enough space for those faces, calculate the faces and set them in the buffer and render. Next frame if you need more faces increase the size of the buffer - otherwise reuse it. After a few frames the buffer will grow to the appropriate size. If you had an idea of this upfront you can preallocate it. To be clear - you don't store vertices in the vertex buffer across frames, the data in the vertex buffer is overwritten every frame. – James Gaunt Jan 15 '11 at 23:56
  • But I have to keep the VertexBuffer object, and recreate it ONLY if the number of vertices I have doesn't fit? – Bevin Jan 15 '11 at 23:59
  • Yes, in this case use a dynamic vertex buffer. But of course keep the buffer across frames, there is nothing to be gained from freeing it and reallocating the memory every frame. You are just reusing the memory though - you are replacing the data in that memory. – James Gaunt Jan 16 '11 at 00:03
  • Thank you. This information will hopefully solve my issues, and it was very informative. I marked it as the answer, even though I haven't tried it yet(I can't at the moment). – Bevin Jan 16 '11 at 00:12
  • No problem and thanks for ticking. I'm sure you'll have a lot of fun trying it out. Of course it's not the answer, just an answer, with this kind of thing there is always a way to make it even faster. Good luck. – James Gaunt Jan 16 '11 at 00:14
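The grow-and-reuse strategy agreed on in the comments above can be sketched as follows. This is a Python illustration of the policy only, with invented names; in XNA it would be a DynamicVertexBuffer that is recreated at a larger size when the frame's vertex count exceeds its capacity, and overwritten in place otherwise:

```python
class GrowOnlyBuffer:
    """Keep one buffer across frames; overwrite its contents each frame and
    reallocate only when a frame needs more room than we currently have."""

    def __init__(self, capacity=0):
        self.capacity = capacity
        self.reallocations = 0
        self.data = []

    def set_data(self, vertices):
        if len(vertices) > self.capacity:
            self.capacity = len(vertices)   # recreate a bigger buffer
            self.reallocations += 1
        self.data = vertices                # otherwise just overwrite in place

# Vertex counts over five frames: the buffer grows twice (at 100 and 120),
# then every later frame reuses the same allocation.
buf = GrowOnlyBuffer()
for n in (100, 80, 120, 120, 90):
    buf.set_data([0.0] * n)
```

After a few frames the buffer settles at the high-water mark, which is exactly the behaviour described in the answer's comments: the data is replaced every frame, but the memory is kept.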