
I'm making a game and I'm currently working on the generation of the map.

The map is generated procedurally with some algorithms. There are no problems with this part.

The problem is that my map can be huge, so I've thought about cutting the map into chunks.

My chunks are OK; they're 512*512 pixels each. The only problem is that I have to generate a texture for each chunk (a RenderTexture from SFML). It takes around 0.5 ms to generate, which makes the game freeze each time I generate a chunk.

I've thought of a way to fix this: I've made a kind of thread pool with a factory. I just have to send a task to it and it creates the chunk.

Now that it's all implemented, it raises OpenGL warnings like:

"An internal OpenGL call failed in RenderTarget.cpp (219) : GL_INVALID_OPERATION, the specified operation is not allowed in the current state".

I don't know if this is the right way of dealing with chunks. I've also thought about saving the chunks to image files, but I fear it would take too much time to save/load them.

Do you know a better way to deal with this kind of "infinite" map?

tho
  • OpenGL calls should only be made from a single thread. Although it is technically possible to use multiple threads with OpenGL, it does not speed anything up (unless you have 2 or more graphics cards). – Kent Dec 16 '13 at 01:55
  • @Builer_K Actually, it's quite possible to use multiple threads to improve OpenGL performance on only a single GPU, if you're spending a lot of effort moving data between the GPU and the CPU. See http://www.seas.upenn.edu/~pcozzi/OpenGLInsights/OpenGLInsights-AsynchronousBufferTransfers.pdf – Jherico Dec 16 '13 at 04:31
  • If you want to have an infinite number of chunks then you **NEED** to save/load them from time to time; in the end you simply won't have enough memory to hold all the chunks. – vallentin Dec 16 '13 at 08:10

2 Answers


Things to try:

  • make your chunks smaller
  • generate the chunks in a separate thread, but pass them to the GPU from the main thread (see the sketch below)
  • pass to the GPU a small piece at a time, spread over a second or two
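
The second point could look something like this. It's only a minimal sketch, assuming SFML 2.x; `ChunkPixels`, `generateChunk`, and `uploadReadyChunks` are made-up names, and the procedural generation itself is left out:

```cpp
// Workers only produce raw pixel data (pure CPU work, no GL calls);
// the main thread is the only one that creates and uploads textures.
#include <SFML/Graphics.hpp>
#include <memory>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct ChunkPixels {
    int x, y;                      // chunk coordinates
    std::vector<sf::Uint8> rgba;   // 512 * 512 * 4 bytes
};

std::mutex              queueMutex;
std::queue<ChunkPixels> readyChunks;

// Runs on a worker thread, e.g. std::thread(generateChunk, cx, cy).detach();
void generateChunk(int x, int y) {
    ChunkPixels chunk{x, y, std::vector<sf::Uint8>(512 * 512 * 4)};
    // ... fill chunk.rgba with your procedural generator ...
    std::lock_guard<std::mutex> lock(queueMutex);
    readyChunks.push(std::move(chunk));
}

// Called once per frame on the main thread; this is the only place that
// touches OpenGL. You could also cap the uploads per frame (point 3).
void uploadReadyChunks(std::vector<std::unique_ptr<sf::Texture>>& textures) {
    std::lock_guard<std::mutex> lock(queueMutex);
    while (!readyChunks.empty()) {
        ChunkPixels& chunk = readyChunks.front();
        auto tex = std::make_unique<sf::Texture>();
        tex->create(512, 512);
        tex->update(chunk.rgba.data());  // GL upload, main thread only
        textures.push_back(std::move(tex));
        readyChunks.pop();
    }
}
```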
Kent
  • +1. I seem to recall reading your 2nd point elsewhere before. In searching for a reference, I came across this thread, which seems to confirm what I'd recalled: http://www.opengl.org/discussion_boards/showthread.php/164091-Specific-Multi-Threading-usage-in-OpenGL. I also happened across this awesome (yet off-topic) thread on GL vs DirectX. The top-rated answer should come with a hard cover :) http://programmers.stackexchange.com/questions/60544/why-do-game-developers-prefer-windows – enhzflep Dec 16 '13 at 02:07

It is an invalid operation because you must have a context bound to each thread. More importantly, all of the GL window-system APIs enforce a strict 1:1 mapping between threads and contexts... no thread may have more than one context bound, and no context may be bound to more than one thread. What you would need to do is use shared contexts (one context for drawing and one for each worker thread). Things like buffer objects and textures will be shared between all shared contexts, but the state machine and container objects like FBOs and VAOs will not.

Are you using tiled rendering for this map, or is this just one giant texture?

If you do not need to update individual sub-regions of your "chunk" images, you can simply create new textures in your worker threads and give them their data there while the drawing thread goes about its business. Only after a worker thread finishes would you actually try to draw using one of the chunks. This may increase the overall latency between the time a chunk starts loading and when it eventually appears in the finished scene, but you should get a more consistent framerate.
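
For example (a rough, untested sketch: it assumes SFML 2.x, where constructing an `sf::Context` binds a GL context to the current thread that shares resources with SFML's other contexts; `PendingChunk` and `chunkWorker` are made-up names):

```cpp
#include <SFML/Graphics.hpp>
#include <SFML/OpenGL.hpp>
#include <atomic>
#include <memory>

struct PendingChunk {
    std::unique_ptr<sf::Texture> texture;
    std::atomic<bool>            ready{false};
};

// Runs on a worker thread; the drawing thread keeps rendering meanwhile.
void chunkWorker(PendingChunk& out, const sf::Uint8* rgbaPixels) {
    sf::Context context;             // shared GL context for this thread
    auto tex = std::make_unique<sf::Texture>();
    tex->create(512, 512);
    tex->update(rgbaPixels);         // GL calls are legal on this thread now
    glFlush();                       // flush before another context uses it
    out.texture = std::move(tex);
    out.ready.store(true, std::memory_order_release);
}

// Drawing thread: only draw *chunk.texture once
// chunk.ready.load(std::memory_order_acquire) returns true.
```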

If you need to use a single texture for this, I would suggest you double buffer your texture. Have one that you use in the drawing thread and another that your worker threads issue glTexSubImage2D (...) on. When the worker thread(s) finish updating their regions of the texture, you can swap the texture you use for drawing and updating. This will reduce the amount of synchronization required, but again increases the latency before an update eventually appears on screen.
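
A rough sketch of that double-buffering scheme with raw OpenGL calls (texture creation omitted; the worker thread is assumed to have its own shared context as described above, and the names here are illustrative):

```cpp
#include <SFML/OpenGL.hpp>
#include <atomic>

GLuint           textures[2];   // created up front with glGenTextures/glTexImage2D
std::atomic<int> drawIndex{0};  // which texture the drawing thread samples

// Worker thread: update a region of the back texture.
void updateBackTexture(int x, int y, int w, int h, const unsigned char* rgba) {
    int back = 1 - drawIndex.load(std::memory_order_acquire);
    glBindTexture(GL_TEXTURE_2D, textures[back]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    glFlush();  // make the update visible to the drawing context
}

// Main thread, once the worker signals its batch of regions is done:
void swapTextures() {
    drawIndex.store(1 - drawIndex.load(std::memory_order_acquire),
                    std::memory_order_release);
}
```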

Andon M. Coleman
  • Oh, I haven't tried instantiating the RenderTexture before sending it to the threads; that might be a good idea! – tho Dec 16 '13 at 03:40
  • That still will not work if you do not implement context sharing. The term `Render` in `RenderTexture` implies you are actively going to use OpenGL to render into it. You would either have to share contexts between threads, or unbind the context from the drawing thread and give it to worker thread before doing these things (and this is almost never practical). My point was actually that if you use shared render contexts, then the worker threads can actually create the textures for use in the main drawing thread. – Andon M. Coleman Dec 16 '13 at 03:51
  • But honestly, **0.5** ms is not an unreasonable amount of time for an update; you have a full **16** ms to work with if you want to achieve 60 FPS. It does become unreasonable if your software is written in a way that forces it to do these updates serially instead of in parallel or limited to a few updates per frame. – Andon M. Coleman Dec 16 '13 at 03:52
  • Well, it was more like 5 ms than 0.5 ms, sorry about that. I think I'll thread the graphics engine. – tho Dec 16 '13 at 13:51