I am trying to understand the precision of the gettimeofday()
system call. Here's my program:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <sys/time.h>

int main(int argc, char *argv[])
{
    struct timeval t;
    long prev = 0;   /* tv_usec is a suseconds_t (a long on Linux), so keep this long too */

    if (argc < 2)    /* need the iteration count argument */
        return 1;

    for (int i = 0; i < atoi(argv[1]); i++)
    {
        gettimeofday(&t, NULL);
        printf("secs: %ld, nmicro_secs: %ld, delta: %ld\n",
               (long)t.tv_sec, (long)t.tv_usec, (long)t.tv_usec - prev);
        prev = t.tv_usec;
        sleep(1);
    }
    return 0;
}
The output when I run this program (./a.out 10) is:
secs: 1643494972, nmicro_secs: 485698, delta: 485698
secs: 1643494973, nmicro_secs: 490785, delta: 5087
secs: 1643494974, nmicro_secs: 491121, delta: 336
secs: 1643494975, nmicro_secs: 494810, delta: 3689
secs: 1643494976, nmicro_secs: 500034, delta: 5224
secs: 1643494977, nmicro_secs: 501143, delta: 1109
secs: 1643494978, nmicro_secs: 506397, delta: 5254
secs: 1643494979, nmicro_secs: 509905, delta: 3508
secs: 1643494980, nmicro_secs: 510637, delta: 732
secs: 1643494981, nmicro_secs: 513451, delta: 2814
The seconds column reconciles with the sleep of 1 second. Can someone please explain what's going on with the values in the microseconds column? The jumps from one iteration to the next look to be on the order of a few milliseconds at most.
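For what it's worth, one way I can think of to separate the clock's own resolution from the sleep() jitter is to query clock_getres() and to call gettimeofday() twice back to back with no sleep in between. This is only a rough sketch, assuming a POSIX/Linux system; the exact numbers will of course differ per machine:

#include <stdio.h>
#include <sys/time.h>
#include <time.h>

int main(void)
{
    /* Nominal resolution of the realtime clock that gettimeofday() reads. */
    struct timespec res;
    if (clock_getres(CLOCK_REALTIME, &res) == 0)
        printf("clock_getres: %ld s %ld ns\n", (long)res.tv_sec, res.tv_nsec);

    /* Back-to-back calls with no sleep: the smallest nonzero difference
       gives a rough lower bound on how finely consecutive readings differ. */
    struct timeval a, b;
    long min_delta = -1;
    for (int i = 0; i < 1000000; i++)
    {
        gettimeofday(&a, NULL);
        gettimeofday(&b, NULL);
        long d = (b.tv_sec - a.tv_sec) * 1000000L + (b.tv_usec - a.tv_usec);
        if (d > 0 && (min_delta < 0 || d < min_delta))
            min_delta = d;
    }
    printf("smallest nonzero delta between calls: %ld us\n", min_delta);
    return 0;
}

With something like this, I would expect the raw gettimeofday() deltas to be far smaller than the millisecond-scale jumps I see across the sleep(1) calls, but I'd like to understand where those jumps actually come from.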