I am writing a microbenchmark. My code looks something like this:
while (some_condition) {
    struct timespec tps, tpe;
    clock_gettime(CLOCK_REALTIME, &tps);
    encrypt_data(some_data);
    clock_gettime(CLOCK_REALTIME, &tpe);
    long time_diff = tpe.tv_nsec - tps.tv_nsec;
    usleep(1000);
}
However, the sleep time that I pass to usleep() affects the time_diff that I observe. Measuring with the skeleton above, I get ~1.8 µs with a 100 µs sleep and ~7 µs with a 1000 µs sleep. Why does the measured time change with the sleep time, when the sleep is outside the instrumented block?
The reported times are averages over multiple runs. I am running this on Ubuntu 14.04, and for the encryption I am using AES-GCM from OpenSSL.
I know that this is not the best way to microbenchmark, but that is not the problem here.