Trying to set up a benchmark in R, I run a large set of different inputs through a .Call() and measure the elapsed time (median of 100 replicates) with the following approaches (sketched below):
1. system.time()
2. microbenchmark()
3. checking proc.time() before and after the function call
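A minimal sketch of the three timing approaches, assuming a hypothetical compiled routine registered as "my_routine" and a placeholder input vector; the actual .Call() target and inputs in my benchmark differ:

    library(microbenchmark)

    x <- rnorm(1e5)  # placeholder input

    # 1. system.time(): elapsed time of a single call
    st <- system.time(res1 <- .Call("my_routine", x))
    st["elapsed"]

    # 2. microbenchmark(): 100 replicates, take the median
    mb <- microbenchmark(.Call("my_routine", x), times = 100)
    summary(mb)$median

    # 3. proc.time() before and after the call
    t0 <- proc.time()
    res3 <- .Call("my_routine", x)
    t1 <- proc.time()
    (t1 - t0)["elapsed"]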
Reassuringly, the results from approaches 1–3 are quite comparable.
However, I noticed that with all three approaches the measured time for this function varies in whole-millisecond increments (Windows 7). Notably, I have another implementation of the benchmark, written entirely in R and relying heavily on R-internal (and thus C-coded) functions. It is only slightly slower, but, using approaches 1–3, its timing can seemingly be resolved down to the nanosecond level.
Question: Why do the results vary only at the millisecond level for the .Call() implementation, but not for the latter, pure-R implementation?