
Displaying images on a computer monitor involves using a graphics API, which dispatches a series of asynchronous calls... and, at some later time, puts the desired content on the screen.

But what if you are interested in knowing the exact CPU time at which the required image is fully drawn (and visible to the user)?

I really need to grab a CPU timestamp when everything is displayed to relate this point in time to other measurements I take.

Even leaving aside the asynchronous behavior of the graphics stack, many things can cause the duration of the graphics calls to jitter:

  • multi-threading;
  • Sync to V-BLANK (unfortunately required to avoid some tearing);
  • what else have I forgotten? :P

I target a solution on Linux, but I'm open to any other OS. I've already studied parts of the XVideo extension for the X.org server and the OpenGL API, but I haven't found an effective solution yet.

I only hope the solution doesn't involve hacking into video drivers / hardware!

Note: I won't be able to use the recent Nvidia G-SYNC on the required hardware. Although this technology would get rid of some of the unpredictable jitter, I don't think it would completely solve this issue.


The OpenGL Wiki suggests the following: "If GPU<->CPU synchronization is desired, you should use a high-precision/multimedia timer rather than glFinish after a buffer swap."
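
For context, this is roughly the blocking baseline I would like to avoid (just a sketch, assuming GLX and clock_gettime(CLOCK_MONOTONIC); draw_scene is a placeholder for my actual rendering code):

    #include <time.h>
    #include <GL/gl.h>
    #include <GL/glx.h>

    extern void draw_scene(void);        /* placeholder for the actual draw calls */

    static struct timespec frame_done;

    static void render_and_timestamp(Display *dpy, Window win)
    {
        draw_scene();
        glXSwapBuffers(dpy, win);                     /* queue the swap */
        glFinish();                                   /* stall the CPU until the queued GL commands complete */
        clock_gettime(CLOCK_MONOTONIC, &frame_done);  /* CPU timestamp after the stall */
    }

Even this doesn't guarantee the image has actually reached the screen; it only tells me the queued commands were processed.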

Does somebody know how to properly grab such a high-precision/multimedia timer value just after the swapBuffer call has completed in the GPU queue?

Jonathan
  • You have forgotten that GPUs are virtualized resources in modern operating systems. Microsoft Windows has a pretty sophisticated scheduler in WDDM (Windows Vista+'s display driver model), and your application can be pre-empted by something that has higher GPU priority (e.g. the Desktop Window Manager). If you are truly writing software with real-time (in the traditional **latency / deadline sense**, not the high framerate this term is often simplified to mean) GPU requirements, then this is going to get in your way :-\ – Andon M. Coleman Nov 08 '13 at 00:40
  • Yes, that's why I favored Linux for this application. Virtualization could still be a problem, but at least it's not a black box. To minimize this problem I disabled desktop compositing extensions (Composite and AIGLX for ATI) to avoid indirect rendering. – Jonathan Oct 03 '14 at 17:40

1 Answer


Recent OpenGL provides sync/fence objects. You can place sync objects in the OpenGL command stream and later wait for them to be signaled. See http://www.opengl.org/wiki/Sync_Object
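
A rough sketch of the idea (not a drop-in solution; it assumes an OpenGL 3.2+/ARB_sync context and that the sync entry points are available through your headers or extension loader):

    #define GL_GLEXT_PROTOTYPES      /* or use GLEW / another loader for the sync entry points */
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <time.h>

    static GLsync swap_fence;
    static struct timespec swap_done;

    /* Right after glXSwapBuffers(...): put a fence into the command stream. */
    static void mark_swap(void)
    {
        swap_fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    }

    /* Later (e.g. at the start of the next frame): wait for the fence to be
       signaled, here with a ~16 ms budget, and take the CPU timestamp. */
    static int timestamp_swap(void)
    {
        GLenum r = glClientWaitSync(swap_fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                                    16000000ull /* ns */);
        glDeleteSync(swap_fence);
        if (r == GL_ALREADY_SIGNALED || r == GL_CONDITION_SATISFIED) {
            clock_gettime(CLOCK_MONOTONIC, &swap_done);
            return 1;
        }
        return 0;   /* timed out (or failed); no timestamp this frame */
    }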

datenwolf
  • Thank you for your answer. However, as with a glFinish call, the only option currently available with fences seems to be waiting for them in a blocking operation. Placing a fence after a swapBuffer call and waiting on it would unfortunately block the render loop, discarding any advantage of double buffering. – Jonathan Oct 03 '14 at 17:27
  • @Jonathan: Actually, OpenGL calls will only block if the pipeline is stalled and full. Stalling happens only when a synchronization point for an operation on the *back buffer* is reached before the buffer swap has happened. The key words here are **"the back buffer"**: you can perfectly well continue rendering to framebuffers other than the to-be-swapped surface, like an FBO renderbuffer or texture attachment. – datenwolf Oct 03 '14 at 17:45
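
A non-blocking variant of what datenwolf describes could look roughly like this (a sketch only; it reuses the swap_fence and swap_done from the snippet above, and render_into_fbo is a hypothetical helper that draws to an FBO attachment):

    extern void render_into_fbo(void);   /* hypothetical: draws to an FBO, not the back buffer */

    /* Poll the fence with zero wait; if it is not signaled yet, keep rendering
       into an FBO instead of touching the to-be-swapped back buffer. */
    static void poll_swap_fence(void)
    {
        GLint status = GL_UNSIGNALED;

        if (swap_fence == 0)
            return;

        glGetSynciv(swap_fence, GL_SYNC_STATUS, sizeof(status), NULL, &status);
        if (status == GL_SIGNALED) {
            clock_gettime(CLOCK_MONOTONIC, &swap_done);  /* commands up to the fence are done */
            glDeleteSync(swap_fence);
            swap_fence = 0;
        } else {
            render_into_fbo();
        }
    }

This way the render loop never stalls on the swap; the timestamp simply becomes available a loop iteration or two later.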