I have probably missed some important point about the getrusage() function. When I run the code below, I get:

    User time:    0.000000 s
    System time:  0.000000 s
    Total time:   0.000000 s
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    extern unsigned char *writeBuffer;  /* text buffer, filled elsewhere in my program */
    extern unsigned int outSize;        /* number of valid bytes in writeBuffer */

    void naiveSearch(unsigned char *text, unsigned int textLen,
                     unsigned char *pattern, unsigned int pattLen)
    {
        register unsigned int count = 0;
        for (unsigned int i = 0; i + pattLen <= textLen; i++)
        {
            unsigned int j;
            for (j = 0; j < pattLen; j++)
                if (text[i + j] != pattern[j])
                    break;
            if (j == pattLen)
                count++;
        }
        //printf("naiveSearch: count = %u\n", count);
    }

    int main(int argc, char *argv[])
    {
        struct rusage ruse;
        double ssec1, ssec2, usec1, usec2;

        getrusage(RUSAGE_SELF, &ruse);
        ssec1 = (double)(ruse.ru_stime.tv_sec * 1000000 + ruse.ru_stime.tv_usec);
        usec1 = (double)(ruse.ru_utime.tv_sec * 1000000 + ruse.ru_utime.tv_usec);

        /* search 1000 times for a 16-byte pattern taken from
           a pseudo-random offset inside the buffer */
        for (int k = 0; k < 1000; k++)
            naiveSearch(writeBuffer, outSize, writeBuffer + rand() % (outSize - 16), 16);

        getrusage(RUSAGE_SELF, &ruse);
        ssec2 = (double)(ruse.ru_stime.tv_sec * 1000000 + ruse.ru_stime.tv_usec);
        usec2 = (double)(ruse.ru_utime.tv_sec * 1000000 + ruse.ru_utime.tv_usec);

        printf("User time:\t%f s\n", (usec2 - usec1) / 1000000.0);
        printf("System time:\t%f s\n", (ssec2 - ssec1) / 1000000.0);
        printf("Total time:\t%f s\n", ((usec2 + ssec2) - (usec1 + ssec1)) / 1000000.0);
        return 0;
    }
However, when I run the same code with the printf() call inside naiveSearch uncommented, I get a realistic, expected time. Why is zero time measured in the first case? How can I measure the time reliably?
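For completeness, I am also considering clock_gettime() with CLOCK_PROCESS_CPUTIME_ID as an alternative timer. A minimal sketch of how I would use it (it needs <time.h>; I suspect the problem is not in the timer itself):

    #include <time.h>

    struct timespec t1, t2;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t1);  /* CPU time used by this process */
    for (int k = 0; k < 1000; k++)
        naiveSearch(writeBuffer, outSize, writeBuffer + rand() % (outSize - 16), 16);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t2);
    printf("CPU time: %f s\n",
           (double)(t2.tv_sec - t1.tv_sec) + (double)(t2.tv_nsec - t1.tv_nsec) / 1e9);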
Thank you for any ideas. P.
UPDATE: There must be some safe and reliable way to benchmark an algorithm in a loop. I need to keep all the standard compiler optimizations enabled and still have every iteration of the algorithm actually execute. Is there a way to do that? How can I measure the time with certainty that the results represent all the iterations performed? One idea I had is sketched below.
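A minimal sketch of that idea, assuming the rest of the program stays as above (the name sink is just my placeholder): have naiveSearch return count and accumulate the return values into a volatile variable, so the compiler cannot prove the search results are unused and eliminate the whole computation. I do not know whether this is the recommended technique:

    /* Sketch: return the result and force its use through a volatile sink,
       so the optimizer cannot discard the search as dead code. */
    unsigned int naiveSearch(unsigned char *text, unsigned int textLen,
                             unsigned char *pattern, unsigned int pattLen)
    {
        unsigned int count = 0;
        for (unsigned int i = 0; i + pattLen <= textLen; i++)
        {
            unsigned int j;
            for (j = 0; j < pattLen; j++)
                if (text[i + j] != pattern[j])
                    break;
            if (j == pattLen)
                count++;
        }
        return count;
    }

    volatile unsigned int sink;  /* writes to a volatile must actually be performed */

    /* ... and inside the timing loop in main(): */
    for (int k = 0; k < 1000; k++)
        sink += naiveSearch(writeBuffer, outSize, writeBuffer + rand() % (outSize - 16), 16);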