5

I am trying to implement my own version of clock() using asm and rdtsc. However, I am quite unsure about its return value. Is it cycles? Or is it microseconds? I am also confused about CLOCKS_PER_SEC. How can this be constant?

Is there any kind of formula that relates these values?

今天春天
  • 941
  • 1
  • 13
  • 27
  • 1
    From [this `clock` reference](http://en.cppreference.com/w/c/chrono/clock): "Returns the approximate processor time used by the process since the beginning of an implementation-defined era related to the program's execution. To convert result value to seconds, divide it by CLOCKS_PER_SEC." That's about it. Exactly what unit the function returns is up to the implementation. – Some programmer dude Apr 18 '16 at 14:33
  • It does not take a formula, just the collaboration of the programmer that wrote the clock() function. Since he knows how it behaves, he can also write the #define for CLOCKS_PER_SEC. The unit for rdtsc is "ticks", it varies from one machine to another. You cannot know how long a tick takes unless you calibrate it against a known-good clock. Like clock(). – Hans Passant Apr 18 '16 at 14:53
  • Assuming the programmer wants to follow POSIX... – 今天春天 Apr 18 '16 at 15:05
  • Can it return micro seconds? – 今天春天 Apr 18 '16 at 15:06
  • @今天春天 it can, as said, but you restrict the time span that can be measured. Suppose `sizeof(clock_t)` is 4 meaning 32 bits. Your clock count will wrap in just over an hour. That might be good for timing short periods, but `clock()` is supposed to return the number of ticks since the program started running. – Weather Vane Apr 18 '16 at 15:34
  • But a tick must be some unit equal to one second divided by CLOCKS_PER_SEC, i.e. microseconds. – 今天春天 Apr 18 '16 at 16:03
  • @今天春天 on my system `time.h` has `#define CLOCKS_PER_SEC 1000`. As said above, it is implementation defined, and `clock_t` might be 64 bit. – Weather Vane Apr 18 '16 at 16:14
  • But `clock()` and `CLOCKS_PER_SEC` must always be adjusted to each other (which is the task of the developer who writes `clock()`)? If the developer decides to return microseconds, is that OK? – 今天春天 Apr 18 '16 at 16:20
  • @今天春天 it has already been said. `clock()` returns a tick count. In my case the unit of tick is 1/1000 second. Please start at the first comment above, and follow the link. The whole point of `CLOCKS_PER_SEC` is to tell the caller the unit of the tick. – Weather Vane Apr 18 '16 at 16:21
  • Also, using asm may not be the best choice. If you are using gcc, consider using __builtin_ia32_rdtsc(). – David Wohlferd Apr 18 '16 at 20:42

1 Answer

6

You can find an rdtsc reference implementation here:

https://github.com/LITMUS-RT/liblitmus/blob/master/arch/x86/include/asm/cycles.h

The TSC counts the number of cycles since reset. If you need a time value in seconds, you also need to read the CPU clock frequency and divide the TSC value by that frequency. However, this may not be accurate if CPU frequency scaling is enabled. Recent Intel processors include a constant-rate TSC (identified by the "constant_tsc" flag in Linux's /proc/cpuinfo). On these processors, the TSC ticks at the processor's nominal frequency, regardless of the actual CPU clock frequency due to turbo or power-saving states.

https://en.wikipedia.org/wiki/Time_Stamp_Counter

Wei Shen
  • 2,014
  • 19
  • 17