Does the niceness of a process matter for (micro)benchmarking? My intuition says that starting a benchmark at the highest priority (nice -n -20)
would produce more precise results, since fewer context switches would occur during the benchmark.
On the other hand, many tools and library functions can report CPU time in addition to wall time. Moreover, a benchmark machine should not be running other resource-intensive processes at the same time, so there should not be much competition for the CPU anyway.
As a naive first approach, I wrote a simple program that measures wall time, hoping to see a difference when the process is started with different niceness values:
#include <stdio.h>
#include <stdint.h>
#include <sys/time.h>

int main() {
    struct timeval tval_before, tval_after, tval_result;

    gettimeofday(&tval_before, NULL);

    /* busy loop */
    int i;
    for (i = 0; i < 2000000000; i++) {
    }

    gettimeofday(&tval_after, NULL);
    timersub(&tval_after, &tval_before, &tval_result);

    printf("Time elapsed: %ld.%06ld\n", (long int)tval_result.tv_sec, (long int)tval_result.tv_usec);
    return 0;
}
However, when measuring, there is no consistent difference between starting the program with a high or a low nice value. So my questions: Does my benchmark not exercise a property that is influenced by niceness, or is niceness simply not relevant for this benchmark? Can the niceness value be relevant at all on a dedicated benchmark machine? And additionally: is the perf stat metric
context-switches
suitable for measuring the impact of niceness?