
I have a loop which runs every X usecs, which consists of doing some I/O and then sleeping for the remainder of the X usecs. To (roughly) calculate the sleep time, all I'm doing is taking a timestamp before and after the I/O and subtracting the difference from X. Here is the function I'm using for the timestamp:

long long getus ()
{
        struct timeval time;
        gettimeofday(&time, NULL);
        return (long long) (time.tv_sec + time.tv_usec);
}

As you can imagine, this starts to drift pretty fast and the actual time between I/O bursts is usually quite a few ms longer than X. To try to make it a little more accurate, I thought that if I keep a record of the previous starting timestamp, then every time I start a new cycle I can calculate how long the previous cycle took (the time between this cycle's starting timestamp and the previous one). Then I know how much longer than X it was, and I can shorten this cycle's sleep to compensate.

Here is how I'm trying to implement it:

    long long start, finish, offset, previous, remaining_usecs;
    long long delaytime_us = 1000000;

    /* Initialise previous timestamp as 1000000us ago*/
    previous = getus() - delaytime_us;
    while(1)
    {
            /* starting timestamp */
            start = getus();

            /* here is where I would do some I/O */

            /* calculate how much to compensate */
            offset = (start - previous) - delaytime_us;

            printf("(%lld - %lld) - %lld = %lld\n", 
                    start, previous, delaytime_us, offset);

            previous = start;

            finish = getus();

            /* calculate to our best ability how long we spent on I/O.
             * We'll try and compensate for its inaccuracy next time around!*/
            remaining_usecs = (delaytime_us - (finish - start)) - offset;

            printf("start=%lld,finish=%lld,offset=%lld,previous=%lld\nsleeping for %lld\n",
                    start, finish, offset, previous, remaining_usecs);

            usleep(remaining_usecs);

    }

It appears to work on the first iteration of the loop, but after that things get messed up.

Here's the output for 5 iterations of the loop:

(1412452353 - 1411452348) - 1000000 = 5
start=1412452353,finish=1412458706,offset=5,previous=1412452353
sleeping for 993642

(1412454788 - 1412452353) - 1000000 = -997565
start=1412454788,finish=1412460652,offset=-997565,previous=1412454788
sleeping for 1991701

(1412454622 - 1412454788) - 1000000 = -1000166
start=1412454622,finish=1412460562,offset=-1000166,previous=1412454622
sleeping for 1994226

(1412457040 - 1412454622) - 1000000 = -997582
start=1412457040,finish=1412465861,offset=-997582,previous=1412457040
sleeping for 1988761

(1412457623 - 1412457040) - 1000000 = -999417
start=1412457623,finish=1412463533,offset=-999417,previous=1412457623
sleeping for 1993507

The first line of each iteration's output shows how the previous cycle time was calculated. It appears that the first two timestamps are basically 1000000us apart (1412452353 - 1411452348 = 1000005). However, after this the gap between starting timestamps, along with the offset, stops looking reasonable. Does anyone know what I'm doing wrong here?

EDIT: I would also welcome suggestions of better ways to get an accurate timer and be able to sleep during the delay!


1 Answer


After some more research I've discovered two things wrong here. Firstly, I'm calculating the timestamp wrong: the seconds have to be converted to microseconds before the microseconds field is added on, so getus() should return this instead:

return (long long) 1000000 * time.tv_sec + time.tv_usec;

And secondly, I should be storing the timestamp in an unsigned long long or uint64_t. So getus() should look like this:

#include <stdint.h>
#include <sys/time.h>

uint64_t getus ()
{
        struct timeval time;
        gettimeofday(&time, NULL);
        /* do the seconds-to-microseconds conversion in 64-bit arithmetic */
        return (uint64_t) 1000000 * time.tv_sec + time.tv_usec;
}

I won't actually be able to test this until tomorrow, so I will report back.
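
Also, regarding the EDIT in my question about better ways to get an accurate timer: a cleaner approach on POSIX systems seems to be to sleep until an absolute deadline with clock_nanosleep() on CLOCK_MONOTONIC, so each period is measured from the previous deadline rather than from "now" and errors can't accumulate. Here is a rough sketch (untested, and it assumes clock_nanosleep() and TIMER_ABSTIME are available):

    #define _POSIX_C_SOURCE 200112L
    #include <time.h>

    int main(void)
    {
            const long period_ns = 1000000000L;   /* 1 second, same as delaytime_us */
            struct timespec next;

            clock_gettime(CLOCK_MONOTONIC, &next);

            while (1) {
                    /* here is where I would do some I/O */

                    /* advance the deadline by exactly one period */
                    next.tv_nsec += period_ns;
                    if (next.tv_nsec >= 1000000000L) {
                            next.tv_sec  += next.tv_nsec / 1000000000L;
                            next.tv_nsec %= 1000000000L;
                    }

                    /* sleep until the absolute deadline; one slow cycle doesn't
                     * push all the later ones out, because the next deadline is
                     * based on the previous deadline, not on the current time */
                    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            }
    }

If the I/O ever takes longer than a whole period, clock_nanosleep() with a deadline that has already passed just returns immediately, which is probably the behaviour you want in this kind of loop.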
