I used Sleep(500)
in my code and I used getTickCount()
to test the timing. I found that it has a cost of about 515ms, more than 500. Does somebody know why that is?
-
getTickCount has a granularity of about 10-16 ms. – this Nov 12 '15 at 12:32
-
`Sleep(n)` does not guarantee that you sleep for exactly `n`ms, just that you sleep for _at least_ `n`ms. – David says Reinstate Monica Nov 12 '15 at 14:56
-
The question is still relevant without the winapi tag. – Octopus Nov 12 '15 at 20:48
-
@DavidGrinberg: I'm surprised that your comment got so many votes for being useful, when it isn't even true. The documentation of [Sleep](https://msdn.microsoft.com/en-us/library/windows/desktop/ms686298.aspx) does not back your claim. Specifically: *"If dwMilliseconds is less than the resolution of the system clock, the thread may sleep for less than the specified length of time."* and *"If dwMilliseconds is greater than one tick but less than two, the wait can be **anywhere** between one and two ticks, and so on"* (emphasis mine). – IInspectable Nov 15 '15 at 13:41
-
I have a laptop that regularly returns from `Sleep` calls at about half the time requested, so those answers reporting "`Sleep` guarantees at least `dwMilliseconds` sleep", I'm not even agreeing with that. I think it is related to [Intel's SpeedStep](http://superuser.com/q/183508/39835) but I'm not sure enough to mention this in an answer. – Mark Hurd Nov 18 '15 at 00:43
-
@MarkHurd: Unless you are talking about very short durations (in the range of the system clock) it sounds more like you have buggy chipset drivers. The documentation does guarantee that `Sleep` will return no sooner than the requested time minus one system clock tick (or 0, whichever is larger). – IInspectable Nov 18 '15 at 17:00
7 Answers
Because the Win32 API's `Sleep` isn't a high-precision sleep, and has a coarse granularity.
The best way to get a precise sleep is to sleep a bit less than the target (by ~50 ms) and busy-wait the remainder. To find the exact amount of time you need to busy-wait, get the resolution of the system clock using `timeGetDevCaps` and multiply by 1.5 or 2 to be safe.
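A minimal portable sketch of that hybrid sleep-then-spin approach, using standard C++ `std::chrono`/`std::this_thread` in place of the Win32 calls. The 2 ms margin here is an assumed stand-in for the granularity `timeGetDevCaps` would actually report:

```cpp
#include <chrono>
#include <thread>

// Sleep for `total`, trading a little CPU for precision: coarse-sleep
// until just before the deadline, then busy-wait the remainder.
void precise_sleep(std::chrono::microseconds total)
{
    using clock = std::chrono::steady_clock;
    const auto deadline = clock::now() + total;

    // Assumed timer granularity; on Windows you would derive this from
    // timeGetDevCaps and multiply by 1.5-2 as suggested above.
    const auto margin = std::chrono::milliseconds(2);

    if (total > margin)
        std::this_thread::sleep_for(total - margin);  // coarse part

    while (clock::now() < deadline)
        ;  // busy-wait the final stretch
}
```

Since `sleep_for` only promises a lower bound, the spin loop is what delivers the precision; the cost is burning CPU for up to the margin duration.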

-
As we say around these parts, "one person's realtime system is another person's batch system". It's all about how tolerable the latency can be, along with the variability of that latency. Realtime behavior is usually a very critical design decision, not just from a software API standpoint, but what I/O is there, what it's hooked up to (or not hooked up to, think timeouts), and how it affects the "realtime" latency. Blocked calls vs. asynchronous calls, etc. – franji1 Nov 12 '15 at 12:48
-
@franji1 There's another use to sleep I did not mention in my answer, which is very relevant for one type of application: games. The default in games is to busywait until the next frame. This is incredibly power and CPU hungry. Simply doing a sleep if the time until next frame is larger than a system tick away and busywaiting the rest has brought down CPU usage from 100% to less than 5% on some games. – orlp Nov 12 '15 at 12:51
-
In the good old Amstrad and Spectrum game-writing days, you could use a firmware interrupt with extreme reliability and negligible cpu burn. We could even change the graphics "mode" a certain way down the screen. Sadly I've moved away from this field now, but can't you do something similar? – Bathsheba Nov 12 '15 at 12:54
-
@Bathsheba I'm sure there's always more options when targeting specific hardware - the code I've written for games has always been at the OS level, and cross-platform (generally one implementation for linux and another for Windows). – orlp Nov 12 '15 at 13:01
-
Games usually do not render at fixed time intervals. Instead, they render as fast as possible, and base the scene information on the current timestamp. Games don't use `Sleep`. – IInspectable Nov 12 '15 at 14:46
-
@IInspectable Nearly every game has an option to lock at 60 or 120 FPS. A proper implementation of those locks sleeps. Common vsync implementations block, and thus sleep. New gsync/freesync also blocks, and thus implicitly sleeps. – orlp Nov 12 '15 at 14:47
-
Games that have an option to synchronize with the display's refresh rate do so by using the hardware's capabilities to wait for the *vertical blank*. They do not use `Sleep`. – IInspectable Nov 12 '15 at 14:50
-
While `beginTimePeriod` has its downsides, I'd prefer it over a busy-wait in most scenarios. – CodesInChaos Nov 12 '15 at 15:01
-
@CodesInChaos If precision is required a busywait is unavoidable. My answer suggests a combination of busywaiting and sleeping. – orlp Nov 12 '15 at 15:18
-
Do not forget that at those small resolutions, you may not even be getting a timeslot from the os scheduler at the time you need it. – kat0r Nov 12 '15 at 15:58
-
@Bathsheba This only started to be a problem when power management became an issue - in the olden days, the CPU was running at 100% all the time anyway. Modern OSes have support for event-based timers, which allow you a more granular control over timer events - however, for backwards compatibility, the old quantum-based timers are still supported, and used in functions like `Thread.Sleep`. And of course, if you don't need more update cycles than you have GPU refreshes, you can use vsync to block until the frame is rendered to screen. – Luaan Nov 12 '15 at 16:27
-
The OS environment on "Amstrad and Spectrum" allowed you access to such features because it could afford to, since reliable multitasking operation was not expected of these systems. If you could directly hook into that kind of timer interrupt on a modern OS, you could effectively *preempt the operating system* ... and crash it very easily by accident or malice. You still have timers like that in an RTOS environment, or in small microcontroller environments (like MCS51) that can reasonably be used without an OS. – rackandboneman Nov 12 '15 at 22:10
-
The same is true of `sleep` functions in basically every programming language. You wait at least x milliseconds and then wait for your process to be scheduled by the kernel. – mcfedr Nov 13 '15 at 00:30
-
Notice that even if you sleep for slightly less than the desired time and then busywait, you will still occasionally end up sleeping significantly longer than desired. This happens whenever the sleeping thread is preempted in-between starting the busywait and getting to the next command, which generally happens nondeterministically and depends on what else is going on in the system at the time. – Kevin Nov 13 '15 at 06:21
-
Busy waiting does not give hard guarantees either. Another process can be scheduled in the middle of your loop. – otus Nov 13 '15 at 11:55
`sleep(500)` guarantees a sleep of at least 500 ms, but it might sleep for longer than that: the upper limit is not defined. In your case, there will also be the extra overhead in calling `getTickCount()`.
Your non-standard `Sleep` function may well behave in a different manner; but I doubt that exactness is guaranteed. To do that, you need special hardware.

-
MSDN explicitly mentions an accuracy up to the system clock ticks and makes no mention of an undefined upper limit. – orlp Nov 12 '15 at 12:37
-
The `Sleep` function is not a C++ standard function, it's Win32 API specific, just like `GetTickCount`. – orlp Nov 12 '15 at 12:38
-
That might be the case, but the OP tags the question with the lower case `sleep`. Of course that might be in error, but I believe my answer has credibility. In reality, you can't really perform an accurate pause without extra hardware. – Bathsheba Nov 12 '15 at 12:42
-
Tags cannot contain uppercase letters, a moot point. Plus, the combination of `Sleep` and `GetTickCount` occurs to my knowledge only in the Win32 API. – orlp Nov 12 '15 at 12:46
-
@orlp the indefinite upper limit comes about because there's no promise that your process will be scheduled immediately after its allotted sleep completes. A higher-priority task might hold the CPU for any length of time before you run again. There's also no promise that a meteor won't hit the computer before your sleep completes :) – hobbs Nov 12 '15 at 20:05
-
The MSDN documentation for [Sleep](https://msdn.microsoft.com/en-us/library/windows/desktop/ms686298.aspx) doesn't specify an upper limit. But you got the lower limit wrong as well, so your answer really has very limited value. – IInspectable Nov 17 '15 at 21:49
As you can read in the documentation, the WinAPI function `GetTickCount()` is limited to the resolution of the system timer, which is typically in the range of 10 to 16 milliseconds. To get a more accurate time measurement, use the function `GetSystemTimePreciseAsFileTime`.
Also, you cannot rely on `Sleep(500)` to sleep for exactly 500 milliseconds. It will suspend the thread for at least 500 milliseconds, and the operating system will then resume the thread as soon as it has a timeslot available. When there are many other tasks running on the operating system, there might be an additional delay.
In general, sleeping means that your thread goes into a waiting state, and after 500 ms it enters a "runnable" state. The OS scheduler then chooses what to run according to the priority and the number of runnable processes at that time. So even if you had a high-precision sleep and a high-precision clock, it would still be a sleep for at least 500 ms, not exactly 500 ms.
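That "at least the requested time, then wait for the scheduler" behaviour is easy to observe. Here is a small portable C++ sketch (using `std::this_thread::sleep_for`, which makes the same at-least promise) that measures how long a nominal sleep actually takes; the function name is illustrative:

```cpp
#include <chrono>
#include <thread>

// Returns how many microseconds a nominal `ms`-millisecond sleep
// actually took. The excess over `ms` is timer granularity plus the
// delay before the scheduler runs the thread again.
long long measured_sleep_us(int ms)
{
    using namespace std::chrono;
    const auto t0 = steady_clock::now();
    std::this_thread::sleep_for(milliseconds(ms));
    return duration_cast<microseconds>(steady_clock::now() - t0).count();
}
```

On a typical desktop, `measured_sleep_us(500)` lands somewhat above 500 000, which is the same kind of overshoot the question observed with `GetTickCount()`.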

-
+1 For mentioning the scheduler and the real reason why it will sleep for *at least* the requested time. – Jordan Melo Nov 12 '15 at 22:36
-
@JordanMelo: Except, that [Sleep](https://msdn.microsoft.com/en-us/library/windows/desktop/ms686298.aspx) **doesn't** sleep *"for at least the requested time"*. – IInspectable Nov 17 '15 at 22:01
-
@IInspectable: Sorry, I didn't mean in general. But you're right: due to granularity of the system clock, the thread may sleep for slightly less than the requested time. My point is that you can't really rely on an accurate sleep time, even with a more granular clock. – Jordan Melo Nov 17 '15 at 22:15
-
@JordanMelo: Granularity has nothing to do with it. You can implement the guarantee to sleep *at least* the requested time, regardless of granularity. `Sleep` **decided** not to, and this contract is documented. You are right, still, that you cannot rely on a contract that doesn't exist. – IInspectable Nov 17 '15 at 22:20
Like the other answers have noted, `Sleep()` has limited accuracy. Actually, no implementation of a `Sleep()`-like function can be perfectly accurate, for several reasons:
- It takes some time to actually call `Sleep()`. While an implementation aiming for maximal accuracy could attempt to measure and compensate for this overhead, few bother. (And, in any case, the overhead can vary due to many causes, including CPU and memory use.)
- Even if the underlying timer used by `Sleep()` fires at exactly the desired time, there's no guarantee that your process will actually be rescheduled immediately after waking up. Your process might have been swapped out while it was sleeping, or other processes might be hogging the CPU.
- It's possible that the OS cannot wake your process up at the requested time, e.g. because the computer is in suspend mode. In such cases, it's quite possible that your 500 ms `Sleep()` call will actually end up taking several hours or days.
Also, even if `Sleep()` were perfectly accurate, the code you want to run after sleeping will inevitably consume some extra time.
Thus, to perform some action (e.g. redrawing the screen, or updating game logic) at regular intervals, the standard solution is to use a compensated `Sleep()` loop. That is, you maintain a regularly incrementing time counter indicating when the next action should occur, and compare this target time with the current system time to dynamically adjust your sleep time.
Some extra care needs to be taken to deal with unexpected large time jumps, e.g. if the computer was temporarily suspended or if the tick counter wrapped around, as well as the situation where processing the action ends up taking more time than is available before the next action, causing the loop to lag behind.
Here's a quick example implementation (in pseudocode) that should handle both of these issues:
int interval = 500, giveUpThreshold = 10*interval;
int nextTarget = GetTickCount();
bool active = doAction();

while (active) {
    nextTarget += interval;
    int delta = nextTarget - GetTickCount();
    if (delta > giveUpThreshold || delta < -giveUpThreshold) {
        // either we're hopelessly behind schedule, or something
        // weird happened; either way, give up and reset the target
        nextTarget = GetTickCount();
    } else if (delta > 0) {
        Sleep(delta);
    }
    active = doAction();
}
This will ensure that `doAction()` will be called on average once every `interval` milliseconds, at least as long as it doesn't consistently consume more time than that, and as long as no large time jumps occur. The exact time between successive calls may vary, but any such variation will be compensated for on the next iteration.

The default timer resolution is low; you can increase the timer resolution if necessary, as shown in this MSDN example:
#define TARGET_RESOLUTION 1  // 1-millisecond target resolution

TIMECAPS tc;
UINT wTimerRes;

if (timeGetDevCaps(&tc, sizeof(TIMECAPS)) != TIMERR_NOERROR)
{
    // Error; application can't continue.
}

wTimerRes = min(max(tc.wPeriodMin, TARGET_RESOLUTION), tc.wPeriodMax);
timeBeginPeriod(wTimerRes);
// When the higher resolution is no longer needed, restore the previous
// setting with a matching timeEndPeriod(wTimerRes) call.

-
This may not be accurate enough compared to busywaiting and I wouldn't mess with the system clock. Other processes might get messed up, and according to MSDN: _"Use caution when calling timeBeginPeriod, as frequent calls can significantly affect the system clock, system power usage, and the scheduler."_. – orlp Nov 12 '15 at 12:42
-
My previous comment wasn't entirely accurate, `timeBeginPeriod` doesn't mess up other processes. – orlp Nov 12 '15 at 16:08
-
@orlp The advice doesn't really apply to having a foreground fullscreen media application running. It's targeted at offenders like, say, Chrome. Many applications change the quantum and keep it changed even when they don't really need the extra precision. In any case, this has finally been solved (in Win 7 or 8, I think) - the system timer no longer cares about the quantum, it's event-based. Yay :) – Luaan Nov 12 '15 at 16:31
There are two general reasons why code might want a function like "sleep":
It has some task which can be performed at any time that is at least some distance in the future.
It has some task which should be performed as near as possible to some moment in time some distance in the future.
In a good system, there should be separate ways of issuing those kinds of requests; Windows makes the first easier than the second.
Suppose there is one CPU and three threads in the system, all doing useful work until, one second before midnight, one of the threads says it won't have anything useful to do for at least a second. At that point, the system will devote execution to the remaining two threads. If, 1ms before midnight, one of those threads decides it won't have anything useful to do for at least a second, the system will switch control to the last remaining thread.
When midnight rolls around, the original first thread will become available to run, but since the presently-executing thread will have only had the CPU for a millisecond at that point, there's no particular reason the original first thread should be considered more "worthy" of CPU time than the other thread which just got control. Since switching threads isn't free, the OS may very well decide that the thread that presently has the CPU should keep it until it blocks on something or has used up a whole time slice.
It might be nice if there were a version of "sleep" which were easier to use than multimedia timers but would request that the system give the thread a temporary priority boost when it becomes eligible to run again, or better yet a variation of "sleep" which would specify a minimum time and a "priority-boost" time, for tasks which need to be performed within a certain time window. I don't know of any systems that can be easily made to work that way, though.

-
Actually, I think Win32 `CreateWaitableTimer` works *exactly* the way you ask for -- easy to use and automatically gives a temporary (dynamic) priority boost. See [MSDN "Priority Boosts" article](https://msdn.microsoft.com/en-us/library/windows/desktop/ms684828(v=vs.85).aspx): "When the wait conditions for a blocked thread are satisfied, the scheduler boosts the priority of the thread. For example, when a wait operation associated with disk or keyboard I/O finishes, the thread receives a priority boost." The thread is more worthy of attention, because it's been patiently waiting. – Ben Voigt Nov 12 '15 at 22:17