I used the command perf record -a
to sample the performance counters system-wide on my machine, and perf script
to dump the results, which look like this:
[000] 109528.087598: 1 cycles
[000] 109528.100038: 5072 cycles
[000] 109528.120034: 4878 cycles
[000] 109528.144032: 4514 cycles
Let's say I am running this on a 3.3 GHz CPU. From the formula CPU frequency = number of cycles / time,
one microsecond should contain 3.3 * 10^3 = 3300 cycles.
My question is: why does it apparently take 3.95 microseconds on average to count one cycle, when from the formula a single cycle should take only about 1/3300 of a microsecond (roughly 0.0003 microseconds)?
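For reference, here is the arithmetic behind that 3.95 figure as a small Python sketch (assuming the average is taken per sample, i.e. each interval between consecutive timestamps divided by the cycle count reported in that sample; the numbers are just the four lines quoted above):

samples = [
    (109528.087598, 1),     # (timestamp in seconds, reported cycle count)
    (109528.100038, 5072),
    (109528.120034, 4878),
    (109528.144032, 4514),
]

ratios = []
for (t_prev, _), (t_cur, cycles) in zip(samples, samples[1:]):
    interval_us = (t_cur - t_prev) * 1e6   # time between consecutive samples, in microseconds
    ratios.append(interval_us / cycles)    # microseconds per reported cycle

print(ratios)                     # roughly [2.45, 4.10, 5.32]
print(sum(ratios) / len(ratios))  # roughly 3.95 us per "cycle", versus the expected 1/3300 us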