
I currently have an application I'm writing in C# (using .NET) that requires me to time the interval from the moment a user sees an image on screen until they respond with a key press.

Now I realise that in practice this is very difficult, given the monitor input lag and response time, the time the keyboard takes to physically send the message, the OS to process it, and so on.

But I'm trying my best to reduce it down to a mostly constant error (the response time results will be used to compare one user to the next, so a constant error isn't really an issue). One annoying hurdle, however, is the variability caused by the monitor refresh rate: as I gather, when my OnPaint message has been called and has returned, it doesn't mean the image has actually been processed and sent from the graphics buffer?

Unfortunately, time restrictions and other commitments realistically restrict me to continuing this task in C# for Windows.

So what I was wondering is: should I handle all the drawing in OpenGL or DirectX, or better still, is it possible to just use either OpenGL or DirectX to raise an event when the screen is updated?

Another suggestion given to me previously was regarding V-Sync: if I switch this off, is the image sent as soon as it is drawn, as opposed to images being sent at a set rate synchronised to the monitor refresh rate?

    What precision do you need for your timer? – Martin Delille Jun 10 '12 at 21:42
  • As precise as possible; if I could achieve within 1 ms that would be ideal! I'm currently using the Stopwatch, which as I understand it is just a wrapper for the precision timer. I am also running the timer in a separate thread elevated to high priority (with the application running at the same priority but other threads dropped to normal; the application will run with an absolute minimum of other applications and processes in the background). – aceaudio Jun 10 '12 at 22:11
  • Anything worse than 10ms may very well become quite an issue! – aceaudio Jun 10 '12 at 22:18
  • You are measuring human response? Anyway, use the `System.Diagnostics.Stopwatch` class: start it when you display the image (using whatever code will cause it to show), then stop it when the user hits a button (see the minimal sketch after these comments). How much precision do you really need? The error from the actual display on a 60 fps monitor is about 7 ms on average from the time you initiate the display in code (yes, this is a simplification). – Chris O Jun 10 '12 at 22:40
  • Thanks for the reply Chris, indeed it is human response for a psychology experiment, and I'm doing all of that so far. The kind of response time expected isn't *very* fast BUT the difference in time between different variables is, meaning a variation of 0-17 ms (@60 Hz) could certainly have an impact on the results. I realise that on average it's not a huge amount and with enough data collected this shouldn't be a massive issue, but if there is a way I could synchronise the drawing of each screen to the refresh rate of the display in some way to increase accuracy, then I think it would be wise. – aceaudio Jun 10 '12 at 23:02
  • The delay varies from screen to screen but is fixed if you keep the same configuration. Can you imagine measuring this delay to calibrate your system? – Martin Delille Jun 11 '12 at 04:41
  • You might try using `IDXGIOutput::WaitForVBlank` (http://msdn.microsoft.com/en-us/library/windows/desktop/bb174559(v=vs.85).aspx) to sync your application to the monitor's refresh rate – dave Jun 11 '12 at 06:22
  • @tinmaru are you sure it's fixed? My understanding is that once the image is sent to the graphics buffer it stays there until the next screen refresh occurs (immediately after the vertical blank), and because I've no idea where in the cycle the screen refresh is, I've no idea when the user _actually_ sees the image (as opposed to when the OS reports that the image has been drawn). My only other thought was if V-Sync is off does the buffer just release the image immediately? – aceaudio Jun 11 '12 at 09:31
  • @dave thanks that looks like that might just do the trick, I'll have a play. – aceaudio Jun 11 '12 at 09:31
  • I am not an expert, but assuming that the research will be done on a modern and always the same computer, the delays are going to be very similar. So if you are going to render only colour, couldn't you just assume that render time will always be better than the refresh rate of the monitor, and to obtain "real" values just subtract 16.67 ms (for a 60 Hz monitor)? – Archibald Jun 11 '12 at 11:14
  • OK, now I get it: the next refresh might appear at 1 ms or at 16 ms, so this value is not fixed, sorry :). I am not sure about this, but with fps > 60 and V-Sync turned on it should be constant. – Archibald Jun 11 '12 at 11:27
  • @Archibald yep, that's basically the problem. The image is sent to the graphics buffer at some unknown time with regard to the monitor refresh cycle, so the image may be released from the buffer almost immediately, or as much as 16 ms after the buffer has received the image. I assume you mean V-Sync turned **off**? The more I read, the more I think V-Sync off with a TFT screen _should_ theoretically create a constant. However I just want to be sure, or of course if I find a better way to deal with it, with a greater deal of certainty, then I'll do my best to implement that. – aceaudio Jun 11 '12 at 11:47
  • @aceaudio It is fixed between the vsync event and the effective display of the frame. When VSync is enabled, drawing in the back buffer occurs immediately after the event, so you already have this constant (depending on your refresh rate) and you must add a fixed value induced by your display device. – Martin Delille Jun 11 '12 at 12:17
  • @tinmaru I understand there is fixed delay between the vsync event and the display of the frame, and another fixed delay between telling the screen to draw and it being written to the back buffer BUT isn't the time between it being written to the back buffer and being copied to the front buffer a variable? the front buffer is read at a constant interval whereas the method to write to the back buffer is completely variable, so the back buffer could be filled 2ms before it is copied to the front buffer or 12ms before it is copied to the front buffer. – aceaudio Jun 11 '12 at 15:57
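For reference, here is a minimal, self-contained sketch of the basic Stopwatch approach discussed in these comments (high-priority process/thread plus System.Diagnostics.Stopwatch); the ReactionTimer class and its member names are illustrative assumptions, not code from the question:

using System;
using System.Diagnostics;
using System.Threading;

// Illustrative sketch only: measure the interval between showing a stimulus
// and a key press, with the process and thread priority raised to reduce
// scheduling jitter as described in the comments above.
class ReactionTimer
{
    private readonly Stopwatch stopwatch = new Stopwatch();

    public void OnStimulusShown()
    {
        // Started as close as possible to the code that triggers the draw;
        // the actual on-screen appearance still depends on the refresh cycle.
        stopwatch.Restart();
    }

    public double OnKeyPressed()
    {
        stopwatch.Stop();
        return stopwatch.Elapsed.TotalMilliseconds;
    }
}

class Program
{
    static void Main()
    {
        Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
        Thread.CurrentThread.Priority = ThreadPriority.Highest;

        var timer = new ReactionTimer();
        timer.OnStimulusShown();
        Console.ReadKey(true); // stand-in for the user's key press
        Console.WriteLine(timer.OnKeyPressed() + " ms");
    }
}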

2 Answers


You must render your graphics in a separate thread in order to:

  • Use vertical synchronisation to get precise timing of the effective display of your image.
  • Get precise timing of your user input (since the user interface is not on the same thread as the render loop).

Initialise Direct3D to enable VSync during rendering:

// DirectX example
presentParams.SwapEffect = SwapEffect.Discard;
presentParams.BackBufferCount = 1;
presentParams.PresentationInterval = PresentInterval.One;

device = new Device(...

Perform the render in a separate thread:

shouldDisplayImageEvent = new AutoResetEvent(false);

Thread renderThread = new Thread(RenderLoop);
renderThread.Start();

Then use the following render loop:

void RenderLoop()
{
    while(applicationActive)
    {
        device.BeginScene();

        // Other rendering task

        if (shouldDisplayImageEvent.WaitOne(0))
        {
            // Render image
            // ...

            userResponseStopwatch = new Stopwatch();
            userResponseStopwatch.Start();
        }

        device.EndScene();

        device.Present();
    }
}

Then handle the user input:

void OnUserInput(object sender, EventArgs e)
{
    if (userResponseStopwatch != null)
    {
        userResponseStopwatch.Stop();

        float userResponseDuration = userResponseStopwatch.ElapsedMilliseconds - 1000f / device.DisplayMode.RefreshRate - displayDeviceDelayConstant;
        userResponseStopwatch = null;
    }
}

Now call shouldDisplayImageEvent.Set() to trigger the image display as needed; the render loop renders the image and starts the stopwatch.
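To make the handshake concrete, here is a small, self-contained sketch of the same AutoResetEvent pattern with the Direct3D calls replaced by placeholders; apart from the names taken from the answer (RenderLoop, shouldDisplayImageEvent, userResponseStopwatch, applicationActive), everything here is an assumed example:

using System;
using System.Diagnostics;
using System.Threading;

class TriggerSketch
{
    static readonly AutoResetEvent shouldDisplayImageEvent = new AutoResetEvent(false);
    static Stopwatch userResponseStopwatch;
    static volatile bool applicationActive = true;

    static void RenderLoop()
    {
        while (applicationActive)
        {
            // (BeginScene / other rendering work would go here.)
            if (shouldDisplayImageEvent.WaitOne(0))
            {
                // (Render the image here.)
                userResponseStopwatch = Stopwatch.StartNew();
            }
            Thread.Sleep(16); // stand-in for the VSync'd Present() call
        }
    }

    static void Main()
    {
        var renderThread = new Thread(RenderLoop);
        renderThread.Start();

        shouldDisplayImageEvent.Set(); // trigger the stimulus, e.g. at the start of a trial

        Console.ReadKey(true);         // stand-in for the user's key press
        if (userResponseStopwatch != null)
        {
            userResponseStopwatch.Stop();
            Console.WriteLine(userResponseStopwatch.ElapsedMilliseconds + " ms");
        }

        applicationActive = false;
        renderThread.Join();
    }
}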

Martin Delille
  • I think I get it now! Thank you very much for your patience and time, I'll give this a go! I take it there would be no obligation to use DirectX to render the images; I could just continue to use my existing code to display images, now that I know that the time between executing the code to display the image and the image being displayed is constant. Right? – aceaudio Jun 12 '12 at 15:03
  • Looks as though that's working for me, many thanks again, much appreciated. – aceaudio Jun 12 '12 at 16:12
  • You're welcome! I've received so much help from SO users that when I can give a hand I do it :) – Martin Delille Jun 12 '12 at 18:28
  • The time is constant if you use vertical synchronisation. – Martin Delille Jun 12 '12 at 18:28

First enable VSync in your application idle loop:

// DirectX example
presentParams.SwapEffect = SwapEffect.Discard;
presentParams.BackBufferCount = 1;
presentParams.PresentationInterval = PresentInterval.One;

device = new Device(...

Application.Idle += new EventHandler(OnApplicationIdle);

// More on this here : http://blogs.msdn.com/tmiller/archive/2005/05/05/415008.aspx
internal void OnApplicationIdle(object sender, EventArgs e)
{
    Msg msg;

    // Render while the application is idle, i.e. while no window messages are waiting
    while (!PeekMessage(out msg, IntPtr.Zero, 0, 0, 0))
    {
        // Clearing render
        // ...

        if (displayImage)
        {
            // Render image
            // ...

            renderTime = DateTime.Now;
        }

        device.Present();
    }
}

With VSync enabled, the device.Present call blocks until the next frame synchronisation, so if you compute the time between renderTime and the user input time and subtract the display device delay plus 16.67 ms, you should get your user response delay.
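As a rough sketch of that computation (the displayDeviceDelayConstant value below is an assumed example that would have to be calibrated for the actual monitor, and OnUserInput is wired to whatever input event you use):

// Sketch of the response-time computation described above.
DateTime renderTime;                             // captured just before Present() in the idle loop
const double displayDeviceDelayConstant = 10.0;  // ms, assumed fixed delay of the display device
const double refreshPeriod = 1000.0 / 60.0;      // 16.67 ms for a 60 Hz display

void OnUserInput(object sender, EventArgs e)
{
    double userResponseDuration =
        (DateTime.Now - renderTime).TotalMilliseconds // time since the image was handed to Present()
        - refreshPeriod                               // Present() waited for the next vsync
        - displayDeviceDelayConstant;                 // fixed delay of the monitor itself

    Console.WriteLine(userResponseDuration + " ms");
}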

Martin Delille
  • Thanks for taking the time to post. You'll have to excuse my ignorance, but I'm not sure I've _quite_ understood how this works yet. Is it that by setting `presentParams.SwapEffect = SwapEffect.Discard; presentParams.BackBufferCount = 1; presentParams.PresentationInterval = PresentInterval.One;` we are saying that the image must first be written to a back buffer, and on the following cycle it will be written to the display? Then by using a while loop with PeekMessage we are checking whether this has been written to the display, by testing that there are no longer any messages in the queue? – aceaudio Jun 11 '12 at 14:53
  • OK, _maybe_ I'm starting to understand this: `Present()` copies the back buffer to the front buffer, right? And I'm guessing it isn't released from the queue until this is complete? But if so, shouldn't the timer be started after the while loop but before attempting to render the image, as it isn't actually displayed at that point? – aceaudio Jun 11 '12 at 16:05
  • I don't believe `Present` has well-defined blocking behaviour. In practice, you will probably end up synced to the monitor's refresh rate, but that will be because the driver repeatedly hits some queueing limit which only gets relieved when a frame is pulled out for display at the end of the pipeline (which will happen every v-sync) – dave Jun 11 '12 at 20:25
  • The *Present()* method copies the back buffer to the front buffer, then waits for the vsync event (about 16.67 ms if the app is idle). When the vsync event occurs, the front buffer is effectively displayed on the screen. In this process you must consider that the duration of all the other operations can be neglected compared to this *Present()* operation. – Martin Delille Jun 11 '12 at 20:35
  • @dave it is blocking: if you put a *Stopwatch* before and after the *Present()* operation you will measure 16 ms for a 60 Hz display. – Martin Delille Jun 11 '12 at 20:39
  • @tinmaru surely only subsequent repeated calls to `Present()` would register 16.67 ms; the first call could be anything, as there's nothing dictating _when_ the method is called (at least in relation to V-Sync, unless I'm mistaken), until of course it is in a loop, in which case it will block for approx 16.67 ms. Unfortunately I can't test this right now... however, regardless, this seems like a possible solution, but I would have thought placing the timer after the Present() method would give me better results. Surely I want to start the timer once I know the image has been released from the buffer? – aceaudio Jun 11 '12 at 22:04
  • @tinmaru I didn't say it wasn't blocking, just that I don't believe the blocking behaviour is as well defined as you're claiming. I have no doubt that in practice, once the pipeline gets filled, it will block waiting for v-sync every time – dave Jun 11 '12 at 22:06
  • Yes, you can start it after the *Present()* call, but you will still have to subtract the delay induced by your display device. – Martin Delille Jun 12 '12 at 13:33
  • Wait, I realized that there is a pitfall in this implementation: even if you have precise timing of the displayed frame, your total duration will be a multiple of 16 ms, because the user input will not be processed during the Present operation but after it (during the PeekMessage). I will add another answer or edit this one. – Martin Delille Jun 12 '12 at 13:41