
I'm trying to calculate the framerate of a GLUT window by calling a custom CalculateFrameRate method I made at the beginning of my Display() callback function. I call glutPostRedisplay() after calculations I perform every frame so Display() gets called for every frame.

I also have an int numFrames that increments every frame (every time glutPostRedisplay gets called), and I print that out as well. My CalculateFrameRate method calculates a rate of about 7 fps, but if I look at a stopwatch and compare it to how quickly my numFrames counter increases, the framerate is easily 25-30 fps.

I can't seem to figure out why there is such a discrepancy. I've posted my CalculateFrameRate method below.

clock_t lastTime;
int numFrames;

//GLUT Setup callback
void Renderer::Setup()
{
    numFrames = 0;
    lastTime = clock();  
}

//Called in Display() callback every time I call glutPostRedisplay()
void CalculateFrameRate()
{
    clock_t currentTime = clock();
    double diff = currentTime - lastTime;
    double seconds = diff / CLOCKS_PER_SEC;
    double frameRate = 1.0 / seconds;
    std::cout << "FRAMERATE: " << frameRate << std::endl;

    numFrames ++;
    std::cout << "NUM FRAMES: " << numFrames << std::endl;
    lastTime = currentTime;
}
user1782677

1 Answer


The function clock (except on Windows) gives you the CPU time used, so if you are not spinning the CPU for the entire frame time, it will report less time than has actually elapsed. Conversely, if you have 16 cores running 16 of your threads flat out, the time reported by clock will be 16 times the actual time.
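You can see this effect directly by timing a sleep with both clock and a wall-clock source. This is a minimal sketch (not from the question's code) that sleeps for 200 ms, which is roughly what a frame spent waiting on vsync or the GPU looks like to the CPU:

#include <chrono>
#include <ctime>
#include <iostream>
#include <thread>

int main()
{
    std::clock_t c0 = std::clock();
    auto w0 = std::chrono::steady_clock::now();

    // Sleep instead of computing: the CPU is idle, just like a frame
    // that spends most of its time blocked rather than working.
    std::this_thread::sleep_for(std::chrono::milliseconds(200));

    double cpuSeconds  = double(std::clock() - c0) / CLOCKS_PER_SEC;
    double wallSeconds = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - w0).count();

    // Wall time is about 0.2 s; CPU time is near zero because we slept.
    std::cout << "cpu: " << cpuSeconds << " wall: " << wallSeconds << "\n";
}

Dividing 1.0 by the (too small) CPU time is exactly why the question's code reports a framerate that disagrees with the stopwatch.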

You can use std::chrono::steady_clock, std::chrono::high_resolution_clock, or, if you are on Linux/Unix, gettimeofday (which gives you microsecond resolution).

Here are a couple of snippets showing how to use gettimeofday to measure milliseconds:

#include <sys/time.h>

double time_to_double(timeval *t)
{
    return (t->tv_sec + (t->tv_usec / 1000000.0)) * 1000.0;
}

double time_diff(timeval *t1, timeval *t2)
{
    return time_to_double(t2) - time_to_double(t1);
}

timeval t1, t2;
gettimeofday(&t1, NULL);
... do stuff ...
gettimeofday(&t2, NULL);
cout << "Time taken: " << time_diff(&t1, &t2) << "ms" << endl;

Here's a piece of code showing how to use std::chrono::high_resolution_clock:

#include <chrono>

auto start = std::chrono::high_resolution_clock::now();
... stuff goes here ...
auto diff = std::chrono::high_resolution_clock::now() - start;
auto t1 = std::chrono::duration_cast<std::chrono::nanoseconds>(diff);
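Applied to the question's loop, a steady_clock version of CalculateFrameRate might look like this (a sketch mirroring the question's names, not the original code):

#include <chrono>
#include <iostream>

std::chrono::steady_clock::time_point lastTime = std::chrono::steady_clock::now();
int numFrames = 0;

void CalculateFrameRate()
{
    auto currentTime = std::chrono::steady_clock::now();

    // Wall-clock seconds since the previous frame, not CPU time.
    double seconds = std::chrono::duration<double>(currentTime - lastTime).count();
    double frameRate = 1.0 / seconds;  // instantaneous, single-frame estimate
    std::cout << "FRAMERATE: " << frameRate << std::endl;

    ++numFrames;
    std::cout << "NUM FRAMES: " << numFrames << std::endl;
    lastTime = currentTime;
}

int main()
{
    CalculateFrameRate();  // first call measures time since startup
    CalculateFrameRate();  // later calls measure the per-frame interval
}

Averaging over, say, the last 30 frames instead of dividing by a single frame's interval would give a steadier number, but this is enough to match the stopwatch.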
Mats Petersson
  • Thanks! Using your first suggestion it calculates my new framerates as being between 25-30 fps, which is consistent with the speed I was calculating with my stopwatch. – user1782677 Oct 09 '14 at 08:15
  • 2
    On Linux **don't** use gettimeofday, because the value reported by that function is subject to adjustments (NTP, the user changing the clock, etc.). You want to use `clock_gettime(CLOCK_MONOTONIC, …)`. – datenwolf Oct 09 '14 at 09:10
  • For the purpose of getting a timestamp to calculate frame-rate during development, `gettimeofday` will be fine, because those erroneous adjustments are a relatively rare occurrence. If you are writing an application that uses time to determine, for example, the progress of the enemy during the game, it's clearly not great to use `gettimeofday` for the reasons stated. – Mats Petersson Oct 09 '14 at 20:40