
I'm trying to use the __rdtscp intrinsic to measure time intervals. The target platform is Linux x64, CPU Intel Xeon X5550. Although the constant_tsc flag is set for this processor, calibrating __rdtscp gives very different results:

$ taskset -c 1 ./ticks
Ticks per usec: 256
$ taskset -c 1 ./ticks
Ticks per usec: 330.667
$ taskset -c 1 ./ticks
Ticks per usec: 345.043
$ taskset -c 1 ./ticks
Ticks per usec: 166.054
$ taskset -c 1 ./ticks
Ticks per usec: 256
$ taskset -c 1 ./ticks
Ticks per usec: 345.043
$ taskset -c 1 ./ticks
Ticks per usec: 256
$ taskset -c 1 ./ticks
Ticks per usec: 330.667
$ taskset -c 1 ./ticks
Ticks per usec: 256
$ taskset -c 1 ./ticks
Ticks per usec: 330.667
$ taskset -c 1 ./ticks
Ticks per usec: 330.667
$ taskset -c 1 ./ticks
Ticks per usec: 345.043
$ taskset -c 1 ./ticks
Ticks per usec: 256
$ taskset -c 1 ./ticks
Ticks per usec: 125.388
$ taskset -c 1 ./ticks
Ticks per usec: 360.727
$ taskset -c 1 ./ticks
Ticks per usec: 345.043

As you can see, the result differs by up to a factor of about 3 between runs (125 to 360). Such instability makes the value unusable for any measurements.

Here is the code (gcc 4.9.3, running on Oracle Linux 6.6, kernel 3.8.13-55.1.2.el6uek.x86_64):

// g++ -O3 -std=c++11 -Wall ticks.cpp -o ticks
#include <x86intrin.h>
#include <ctime>
#include <cstdint>
#include <iostream>

int main()
{       
    timespec start, end;
    uint64_t s = 0;

    const double rdtsc_ticks_per_usec = [&]()
    {
        unsigned int dummy;

        clock_gettime(CLOCK_MONOTONIC, &start);

        uint64_t rd_start = __rdtscp(&dummy);
        for (size_t i = 0; i < 1000000; ++i) ++s;
        uint64_t rd_end = __rdtscp(&dummy);

        clock_gettime(CLOCK_MONOTONIC, &end);

        double usec_dur = double(end.tv_sec) * 1E6 + end.tv_nsec / 1E3;
        usec_dur -= double(start.tv_sec) * 1E6 + start.tv_nsec / 1E3;

        return (double)(rd_end - rd_start) / usec_dur;
    }();

    std::cout << s << std::endl;
    std::cout << "Ticks per usec: " << rdtsc_ticks_per_usec << std::endl;
    return 0;
}

When I run a very similar program under Windows 7 (i7-4470, VS2015), the calibration result is quite stable, with only a small difference in the last digit.

So the question is: what causes this? Is it a CPU issue, a Linux issue, or an issue in my code?

Rost
    `constant_tsc` means that the TSC is *not* tied to the CPU's actual frequency. Besides the problem of your timed loop optimizing away, you should run a warm-up loop to get the CPU up to max turbo clockspeed before the timing loop starts. – Peter Cordes Mar 19 '16 at 22:28
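
To illustrate that warm-up suggestion, here is a minimal sketch; the 100 ms duration and the warm_up_cpu helper name are illustrative choices, not part of the original post. It busy-waits on the wall clock so the core has time to ramp up to its steady clock speed before calibration starts:

#include <ctime>

// Hypothetical warm-up helper: spin for roughly 100 ms of wall-clock time so
// the core can reach its maximum frequency before the calibration loop runs.
static void warm_up_cpu()
{
    timespec t0, t;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    do {
        clock_gettime(CLOCK_MONOTONIC, &t);
    } while ((t.tv_sec - t0.tv_sec) * 1000000000L
             + (t.tv_nsec - t0.tv_nsec) < 100000000L);
}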

2 Answers


Other sources of jitter will remain if you don't also ensure the CPU is isolated. You really want to avoid having another process scheduled on that core. Ideally you would also run a tickless kernel, so that kernel code never runs on that core at all. In the code above, I guess that only matters if you are unlucky enough to get a timer tick or a context switch between the call to clock_gettime() and __rdtscp().

Making s volatile is another way to defeat that kind of compiler optimisation.
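
For example, a minimal variant of the question's loop with a volatile accumulator (everything else unchanged):

volatile uint64_t s = 0;
// volatile forces the compiler to actually perform every increment,
// so the loop can no longer be collapsed into s = 1000000
for (size_t i = 0; i < 1000000; ++i) ++s;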

Hal

It was definitely an issue in my code (or rather, in what gcc did with it): the compiler optimized the loop away, replacing it with s = 1000000.

To prevent gcc from optimizing it out, the calibration loop has to be changed like this:

for (size_t i = 0; i < 1000000; ++i) s += i;

Or, a simpler and more robust way (thanks to Hal):

for (volatile size_t i = 0; i < 1000000; ++i);
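
For reference, here is a sketch of the question's calibration block with that volatile loop dropped in (everything else left exactly as in the original code):

clock_gettime(CLOCK_MONOTONIC, &start);

unsigned int dummy;
uint64_t rd_start = __rdtscp(&dummy);
// the volatile induction variable keeps gcc from deleting the loop
for (volatile size_t i = 0; i < 1000000; ++i);
uint64_t rd_end = __rdtscp(&dummy);

clock_gettime(CLOCK_MONOTONIC, &end);

With the loop actually executing, the two __rdtscp readings are far enough apart that the measurement is no longer dominated by the fixed overhead of the surrounding calls.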
Rost
  • If someone answers your question correctly, it's generally polite to mark their answer as accepted rather than marking your own explanation of how that answer was correct as accepted... – Hal Nov 09 '17 at 02:16