While working on a timing-sensitive project, I used the code below to test the granularity of the timing events available, first on my desktop machine in Firefox, then as Node.js code on my Linux server. The Firefox run produced predictable results: an average of 200 fps with a 1ms timeout, indicating timer events with 5ms granularity.
Now, I know that with a timeout value of 0, the Chrome V8 engine that Node.js is built on does not actually delegate the timeout to an event but processes it immediately. As expected, the numbers averaged 60,000 fps, clearly running constantly at CPU capacity (verified with top). But with a 1ms timeout the numbers were still around 3,500-4,000 calls to cycle() per second, meaning Node.js cannot possibly be respecting the 1ms timeout, which would impose a theoretical maximum of 1,000 calls per second.
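To see how long a single setTimeout(fn, 1) actually waits, here is a minimal sketch (my addition, not part of the original test) that uses process.hrtime, which has nanosecond resolution and is available in Node.js but not in browsers:

// Measure the actual delay of one 1ms setTimeout with sub-millisecond precision.
var start = process.hrtime();
setTimeout(function () {
    var diff = process.hrtime(start); // [seconds, nanoseconds] since start
    console.log((diff[0] * 1e3 + diff[1] / 1e6).toFixed(3) + " ms actual delay");
}, 1);

If the printed delay is consistently well under 1ms, the timer itself is firing early rather than the measurement loop being at fault.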
Playing with a range of timeout values, I get the following (a parameterized sweep sketch follows the list):
- 2ms: ~100 fps (a true timeout, indicating 10ms granularity of timing events on Linux)
- 1.5ms: same
- 1.0001ms: same
- 1.0ms: 3,500 - 4,500 fps
- 0.99ms: 2,800 - 3,600 fps
- 0.5ms: 1,100 - 2,800 fps
- 0.0001ms: 1,800 - 3,300 fps
- 0.0ms: ~60,000 fps
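For reproducibility, here is a sketch of the sweep (my own harness, not the original test) that runs each delay value for one second and reports calls per second:

// Measure calls-per-second for each delay, one second per value.
var delays = [2, 1.5, 1.0001, 1.0, 0.99, 0.5, 0.0001, 0.0];

function measure(delay, done) {
    var count = 0;
    var end = Date.now() + 1000;
    function tick() {
        if (Date.now() >= end) {
            console.log(delay + "ms: " + count + " calls/sec");
            return done();
        }
        count++;
        setTimeout(tick, delay);
    }
    tick();
}

// Run the measurements sequentially so they don't interfere with one another.
(function next(i) {
    if (i < delays.length) measure(delays[i], function () { next(i + 1); });
})(0);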
The behavior of setTimeout(func, 0) seems excusable, because the ECMAScript specification presumably makes no promise of setTimeout delegating the call to an actual OS-level interrupt. But the result for anything 0 < x <= 1.0 is clearly ridiculous. I gave an explicit amount of time to delay, and the theoretical minimum time for n calls with delay x should be (n-1)*x. What the heck is V8/Node.js doing?
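To make that bound concrete: chaining n callbacks with delay x should take at least (n-1)*x ms of wall-clock time, so with x = 1 and n = 1000 the chain below should need at least 999ms if the timeout is honored. A quick sketch (my addition) that checks the bound directly:

// Chain n setTimeout callbacks with delay x and compare the elapsed
// wall-clock time against the theoretical minimum of (n-1)*x ms.
var n = 1000, x = 1, i = 0, t0 = Date.now();
(function chain() {
    if (++i < n) return setTimeout(chain, x);
    var elapsed = Date.now() - t0;
    console.log("elapsed: " + elapsed + "ms, theoretical minimum: " + ((n - 1) * x) + "ms");
})();

Here's the test code I used: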
var timer, counter = 0, time = new Date().getTime();

function cycle() {
    counter++;
    var curT = new Date().getTime();
    // Once a second has passed, report how many times cycle() ran during it.
    if (curT - time > 1000) {
        console.log(counter + " fps");
        time += 1000;
        counter = 0;
    }
    // Re-arm with a 1ms timeout; vary this value to reproduce the table above.
    timer = setTimeout(cycle, 1);
}

function stop() {
    clearTimeout(timer);
}

// Run the test for 10 seconds, then stop.
setTimeout(stop, 10000);
cycle();