`glFinish` is required when you need to synchronize the GL state with host data or with GL↔CL interop calls. This (or a similar call) is implicitly performed when you swap buffers, so in most normal uses of OpenGL there's no need to call it yourself.
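To make the interop case concrete: before OpenCL touches a buffer shared with GL, the GL side has to be finished with it, and vice versa. A minimal sketch of that handover (assuming an OpenCL context created with GL sharing, a queue and kernel already set up, and a `cl_mem cl_vbo` made from the GL buffer via `clCreateFromGLBuffer`; this won't run without a live GL/CL context, and error checking is omitted):

```c
/* Make sure all pending GL commands that touch the shared buffer
 * have completed before OpenCL takes ownership of it. */
glFinish();

/* Hand the buffer over to OpenCL, run the kernel, hand it back. */
clEnqueueAcquireGLObjects(queue, 1, &cl_vbo, 0, NULL, NULL);
clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_size, NULL, 0, NULL, NULL);
clEnqueueReleaseGLObjects(queue, 1, &cl_vbo, 0, NULL, NULL);

/* Symmetrically, make OpenCL finish before GL uses the buffer again. */
clFinish(queue);
```

On implementations that support sync-object extensions (e.g. `cl_khr_gl_event`) you can do finer-grained synchronization, but the `glFinish`/`clFinish` pair is the portable baseline.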
In my personal experience, one scenario where I used `glFinish` was when I was forcing GL to draw an extremely graphically taxing image, where rendering the whole frame took upwards of 60 seconds. In the best case this leads to the OS resetting the graphics driver and crashing your program; in the worst case it causes a Blue Screen of Death, and I've had my fair share of those! Submitting the work normally would cause the GL to try to bundle everything together, and the OS will often assume the graphics driver has hung if it doesn't respond for more than a few seconds. So I rewrote the render loop to only batch small sections of the image at a time, with a loop that (more or less) looked like this:
```c
for (int x = 0; x < image_width; x += 32) {
    for (int y = 0; y < image_height; y += 32) {
        // Restrict rendering to one 32x32 tile of the image
        glViewport(x, y, 32, 32);
        glDrawArrays(GL_QUADS, 0, 4);
        // Block until this tile has actually finished rendering,
        // so the driver never sees one enormous batch of work
        glFinish();
    }
}
```
So if you have a use case where you're worried about an individual draw call taking too long, you can split it up, and prevent the GL from batching the subsequent draw calls together by separating them with calls to `glFinish`.
But my circumstances were pretty unique. In general, you shouldn't expect to need to do this.
EDIT:
In the specific case of `glBufferData`, the driver copies the data before returning control to the host code, so there's no need to worry about synchronization with the host.
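In other words, once `glBufferData` returns, the client-side array is yours again; the driver has already taken its own copy. A sketch (assuming a current GL context and an already-generated buffer object `vbo`; it won't run without one):

```c
/* Allocate and fill some vertex data on the host. */
float *verts = malloc(sizeof(float) * 12);
/* ... fill verts ... */

/* glBufferData copies the client memory before it returns. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 12, verts, GL_STATIC_DRAW);

/* Safe immediately -- no glFinish() needed: the driver no longer
 * references the host pointer after glBufferData returns. */
free(verts);
```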