#include <sys/time.h>   // gettimeofday
#include <unistd.h>     // usleep
#include <iostream>
#include <iomanip>
using namespace std;

/*
 * Returns time in s.usec
 */
float mtime()
{
    struct timeval stime;
    gettimeofday(&stime, 0x0);
    // tv_usec is in microseconds, so divide by 1,000,000 (not 1,000,000,000)
    return (float)stime.tv_sec + ((float)stime.tv_usec) / 1000000;
}

int main()
{
    while (true) {
        cout << setprecision(15) << mtime() << endl;
        // shows the same time irregularly for some reason and can mess up triggers
        usleep(500000);
    }
}

Why does it show the same time irregularly? (Compiled as C++ on 64-bit Ubuntu.) What other standard methods are available to generate a UNIX timestamp with millisecond accuracy?

Stefan Rogin

2 Answers


A float has between 6 and 9 decimal digits of precision.

So if the integer part is e.g. 1,391,432,494 (the UNIX time as I write this; 10 digits), you've already used up all of float's digits, leaving none for the fractional part. That is why float fails here.

Jumping to double gives you 15 digits, which suffices as long as the integer part is a UNIX timestamp, i.e. seconds since 1970, since that value is not likely to need drastically more digits any time soon.

unwind
  • so that means 1,391,432,494 would be seen as 1,391,432,49? or would it loop from the start? From my experiment it seems to stay locked for more than 10 s. – Stefan Rogin Feb 03 '14 at 13:06
  • No, the higher precision of the integer will be "mapped" to a smaller set of representable numbers in the float, basically. That's why the float isn't changing until the integer has moved far enough for its float representation to fall on a new number, so to speak. – unwind Feb 03 '14 at 13:08
  • @clickstefan Please read [What Every Computer Scientist Should Know About Floating-Point Arithmetic](http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html). – unwind Feb 03 '14 at 13:10

It seems float doesn't have enough precision; I replaced it with double and everything is OK now.

/*
 * Returns time in s.usec
 */
double mtime()
{
    struct timeval stime;
    gettimeofday(&stime, 0x0);
    // tv_usec is in microseconds, so divide by 1,000,000
    return (double)stime.tv_sec + ((double)stime.tv_usec) / 1000000;
}

I still don't exactly understand the reason for the random behaviour... PS: I was capturing an mtime() value and comparing it with the current time to get a duration.

Stefan Rogin
  • I am using it at the return :). As for why I don't return microseconds: I also use time() for less precise measurements, and the program has option variables that are set in seconds; multiplying all of them by 1,000,000 could have caused problems. – Stefan Rogin Feb 03 '14 at 13:08