3

I ran an experiment to compare sleep/pause timing accuracy in Python and C++.

Experiment summary:

In a loop of 1000000 iterations, sleep for 1 microsecond in each iteration.

Expected duration: 1.000000 second (for a 100% accurate program)

In Python:

import pause
import datetime
import time

start = time.time()
dt = datetime.datetime.now()
for i in range(1000000):
    dt += datetime.timedelta(microseconds=1)  # advance the target by 1 microsecond
    pause.until(dt)                           # sleep until that target
end = time.time()
print(end - start)

Expected: 1.000000 sec, Actual (approximate): 2.603796

In C++:

#include <iostream>
#include <chrono>
#include <thread>

using namespace std;

using usec = std::chrono::microseconds;
using datetime = std::chrono::steady_clock::time_point;
using clk = std::chrono::steady_clock;

int main()
{
    datetime dt;
    usec timedelta{1};  // one-microsecond step per iteration

    dt = clk::now();

    const auto start = dt;

    for(int i=0; i < 1000000; ++i) {
        dt += timedelta;
        this_thread::sleep_until(dt);
    }

    const auto end = clk::now();

    chrono::duration<double> elapsed_seconds = end - start;

    cout << elapsed_seconds.count();

    return 0;
}

Expected: 1.000000 sec, Actual (approximate): 1.000040

It is obvious that the C++ version is much more accurate, but I am developing a project in Python and need to increase the accuracy. Any ideas?

P.S. It's OK if you suggest another Python library/technique, as long as it is more accurate :)

Ahmed Hussein

4 Answers

2

The problem is not only that Python's sleep timer is inaccurate, but also that each part of the loop requires some time.

Your original code has a run-time of ~1.9528656005859375 seconds on my system.

If I only run this part of your code without any sleep:

dt = datetime.datetime.now()
for i in range(1000000):
    dt += datetime.timedelta(microseconds=1)

Then the required time for that loop alone is already ~0.45999741554260254 seconds.

If I only run

for i in range(1000000):
    pause.milliseconds(0)

Then the run-time of the code is ~0.5583224296569824 seconds.

Always using the same date:

dt = datetime.datetime.now()
for i in range(1000000):
    pause.until(dt)

Results in a run-time of ~1.326077938079834 seconds.

If you do the same with the timestamp:

dt = datetime.datetime.now()
ts = dt.timestamp()
for i in range(1000000):
    pause.until(ts)

Then the run-time changes to ~0.36722803115844727 seconds.

And if you increment the timestamp by one microsecond:

dt = datetime.datetime.now()
ts = dt.timestamp()
for i in range(1000000):
    ts += 0.000001
    pause.until(ts)

Then you get a run-time of ~0.9536933898925781 seconds.

That it is smaller than 1 is due to floating-point inaccuracies: adding `print(ts - dt.timestamp())` after the loop will show ~0.95367431640625, so the pause durations themselves are correct, but `ts += 0.000001` accumulates an error.
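To see that this drift comes from floating-point rounding alone, you can reproduce it without pause or datetime. A minimal sketch (the starting value is just an illustrative Unix-epoch-sized timestamp; at that magnitude a double can only step in increments of roughly 2.4e-7):

ts = 1551700000.0  # illustrative timestamp, roughly March 2019
start = ts
for i in range(1000000):
    ts += 0.000001  # each result is rounded to the nearest representable double
print(ts - start)  # ~0.95367431640625 instead of 1.0

Each addition effectively adds about 9.5367e-7 rather than 1e-6, which is exactly where the ~0.9537 figure comes from.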

You will get the best result if you count the iterations and add iterationCount/1000000 to the start time, so that no error can accumulate:

dt = datetime.datetime.now()
ts = dt.timestamp()
for i in range(1000000):
    pause.until(ts+i/1000000)

And this results in ~1.000023365020752 seconds.

So in my case pause itself would already allow an accuracy of less than 1 microsecond. The problem is actually in the datetime handling that is required for both `datetime.timedelta` and `pause.until`.

So if you want microsecond accuracy, you need to look for a time library that performs better than datetime.
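If you want to check the overhead claim on your own machine, the standard library's timeit is enough. A rough sketch (absolute numbers will of course differ per system):

import timeit

# cost of a million datetime increments
dt_cost = timeit.timeit(
    "dt + datetime.timedelta(microseconds=1)",
    setup="import datetime; dt = datetime.datetime.now()",
    number=1000000,
)

# cost of a million plain float increments
float_cost = timeit.timeit("ts + 0.000001", setup="ts = 0.0", number=1000000)

print(dt_cost, float_cost)

On typical systems the datetime arithmetic is several times slower than the float addition, which is the overhead the loop timings above are showing.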

t.niese
  • I agree. Do you suggest any library that is more accurate? – Ahmed Hussein Mar 04 '19 at 14:47
  • 1
    @AhmedHussein the problem is not the accuracy, the date functions are _"accurate"_. The problem is the run-time of those date functions. You could work directly on the timestamp; then you only have the floating-point inaccuracy. – t.niese Mar 04 '19 at 15:04
  • What do you mean by working directly on the timestamp? Could you provide an example? – Ahmed Hussein Mar 04 '19 at 15:08
  • 2
    @AhmedHussein the last two code blocks of my answer. – t.niese Mar 04 '19 at 15:10
  • Can we increase the accuracy to nanoseconds? It would be very helpful in my application. – Ahmed Hussein Mar 04 '19 at 16:11
  • @AhmedHussein That's a completely different question, and therefore you should create a new question for it. Anyway, a design that requires pause and microsecond precision is already questionable. Nanoseconds in that design don't make too much sense; it seems to me that you should change something in your program design. – t.niese Mar 04 '19 at 16:37
0

import pause
import datetime
import time

start = time.time()
dt = datetime.datetime.now()

for i in range(1000000):
    dt += datetime.timedelta(microseconds=1)
    pause.until(1) 
end = time.time()
print(end - start)

OUTPUT:

1.0014092922210693
DirtyBit
  • 1
    Oh, it seems that your answer is not correct. According to @bmat's comment: "By the way I think `pause.until(1)` does not do anything as this is expecting a unix time stamp - en.wikipedia.org/wiki/Unix_time, e.g. the number of seconds since 1970". If you remove the line `dt += datetime.timedelta(microseconds=1)`, it won't take 1 sec anymore. Am I right? – Ahmed Hussein Mar 04 '19 at 12:09
0

The pause library says that

The precision should be within 0.001 of a second, however, this will depend on how precise your system sleep is and other performance factors.

If you multiply 0.001 by 1000000, you get an accumulated error of up to 1000 seconds in the worst case.
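You can probe how precise your own system's sleep is with a few lines (a rough sketch; the results depend entirely on the OS and scheduler):

import time

# request progressively longer sleeps and measure what we actually get
for requested in (0.000001, 0.0001, 0.001):
    t0 = time.perf_counter()
    time.sleep(requested)
    t1 = time.perf_counter()
    print("requested", requested, "got", t1 - t0)

Very short requests typically overshoot by far more than the requested duration, and that per-call error is what accumulates over a million iterations.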

A couple of questions:

Why do you need to sleep?

What is the minimum required accuracy?

How time-consistent are the operations you are calling? If these function calls vary by more than 0.001 seconds, then the accumulated error will be due more to the operations you are performing than to the pauses/sleeps.

bmat
  • 1
    * Why I need to do this is irrelevant, it is an experiment. * No min/max accuracy is specified; I need to make it as accurate as possible. * Concerning operations, I didn't add any operations in the experiment, because in the ideal case we omit operation time. – Ahmed Hussein Mar 04 '19 at 11:20
  • I don't think it's possible to obtain microsecond accuracy for putting a thread to sleep in Python. – bmat Mar 04 '19 at 11:50
  • 1
    By the way, I think `pause.until(1)` does not do anything, as this is expecting a unix time stamp - https://en.wikipedia.org/wiki/Unix_time, e.g. the number of seconds since 1970. – bmat Mar 04 '19 at 11:52
  • bmat's comment seems to be correct. @user5173426 Please check his comment :) – Ahmed Hussein Mar 04 '19 at 12:13
0

Sleeping a thread is inherently non-deterministic. You cannot really talk about 'precision' for thread sleep in general, perhaps only in the context of a particular system and platform; there are just too many factors that can play a role, for example how many CPU cores there are, and so on.

To illustrate the point, a thought experiment:

Suppose you made many threads (at least 1000) and scheduled them all to run at exactly the same time. What 'precision' would you then expect?
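A small version of that experiment is easy to sketch (100 threads instead of 1000 here; the measured spread will vary wildly between machines):

import threading
import time

target = time.time() + 1.0  # every thread should "fire" at this instant
wakeups = []
lock = threading.Lock()

def worker():
    # naive wait until the shared target time
    while time.time() < target:
        time.sleep(0.0001)
    with lock:
        wakeups.append(time.time())

threads = [threading.Thread(target=worker) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# spread between the first and the last thread to wake up
print("spread:", max(wakeups) - min(wakeups), "seconds")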

darune
  • I understand your point. But in my experiment, the same machine was used for both Python and C++, with 1 thread and the same specs. – Ahmed Hussein Mar 04 '19 at 13:55