
I am trying to complete a basic example in OpenGL from the OpenGL SuperBible. Basically, I am going to make the background shift from red to orange to green and back.

Here is the code that I am using:

typedef float F32;

void Example1::Render(void) {
    time += 0.20f;
    const GLfloat color[] = { (F32)sin(time) * 0.5f + 0.5f,
                              (F32)cos(time) * 0.5f + 0.5f,
                              0.0f, 1.0f };
    glClearBufferfv(GL_COLOR, 0, color);
}

I have a precision timer that measures the delta time of the last frame, but any time that I call the sin and cos functions with anything less than 1, the screen just stays green. However, if I hard-code the increment, as I have above, and increase it by 1 or more, the screen flashes between the colors very quickly (like a rave). I am not sure why the functions won't work with floating-point numbers. I am using Visual Studio and have included the math.h header. Has anyone seen anything like this before?

Update: Based on suggestions, I have tried a few things with the code. I got the program to produce the effect I was looking for by manually adding the following:

In the constructor:

Example1(void): red(0.0f), green(1.0f), interval(0.002f), redUp(true), greenUp(false).....

In the render loop

if (red >= 1.0f) { redUp = false; }
else if (red <= 0.0f) { redUp = true; }

if (green >= 1.0f) { greenUp = false; }
else if (green <= 0.0f) { greenUp = true; }

if (redUp) { red += interval; }
else { red -= interval; }

if (greenUp) { green += interval; }
else { green -= interval; }

const GLfloat color[] = { red, green, 0.0f, 1.0f };

It does what it's supposed to, but using the sin and cos functions with floating-point numbers still has no effect. I am baffled as to why; I had assumed that passing the floating-point time values to sin and cos would work. I have tried counting time manually, incrementing it by 1/60th of a second, but any time I use sin and cos with anything less than 1.0f, the screen just stays green.
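For reference, the ping-pong ramp logic above can be condensed into a small self-contained helper (an illustrative restructuring, not code from the original program; `Channel` and `Step` are hypothetical names):

```cpp
// Illustrative condensation of the update's red/green ramp logic:
// a value bounces between 0.0 and 1.0 by a fixed interval per frame.
struct Channel {
    float value;
    bool up;
};

void Step(Channel& c, float interval) {
    // Reverse direction at the edges, matching the if/else chains above.
    if (c.value >= 1.0f) c.up = false;
    else if (c.value <= 0.0f) c.up = true;
    c.value += c.up ? interval : -interval;
}
```

With `red` starting at 0.0f (up) and `green` at 1.0f (down), calling `Step` on each channel every frame reproduces the same oscillation as the in-line version.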

3 Answers


It looks like your time interval is much too large. If you're just assuming you're running at 60fps (it could be hundreds if you're not restricting it), then delta time should be 0.01667 (1/60) seconds per frame. Incrementing by 0.2 every frame (especially if your refresh rate is over 60fps) will result in strobing.

If you're using C++11, I'd suggest using the Chrono libraries and get exact numbers to use. That's well documented in this post.
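As a minimal sketch of what that looks like, a delta-time helper built on std::chrono's monotonic clock might be structured like this (the `FrameTimer` name and layout are illustrative, not from the post being linked):

```cpp
#include <chrono>

// Illustrative std::chrono delta-time helper: each Tick() returns the
// seconds elapsed since the previous Tick(), using a monotonic clock.
using Clock = std::chrono::steady_clock;

struct FrameTimer {
    Clock::time_point last = Clock::now();

    float Tick() {
        Clock::time_point now = Clock::now();
        std::chrono::duration<float> dt = now - last;  // seconds as float
        last = now;
        return dt.count();
    }
};
```

You would call `Tick()` once per frame and use the returned value as your per-frame delta instead of a hard-coded constant.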

Once you're using the actual time, remember that sin and cos take radians, not degrees, and repeat with a period of only 2π. So, in addition to passing in just your time variable, multiply it by some small factor and play around with that number:

time += 0.01667f;
F32 adjustedTime = time * 0.02f; // oscillation speed
const GLfloat color[] = { (F32)sin( adjustedTime ) * 0.5f + 0.5f,
                          (F32)cos( adjustedTime ) * 0.5f + 0.5f,
                          ...

Also, I don't know if you just neglected adding it to this question but don't forget to call glClear(GL_COLOR_BUFFER_BIT); after the clear color is set.

Foggzie
  • Thank you for your suggestion. I have tried your idea but the screen still stays the same color. When I add a breakpoint and look at the values, it looks like it's because the drift in color between frames is not enough to actually change the color. The color ends up being at about (0.55, 0.99, 0.0) and (0.59, 0.99, 0.0). I am sure that I am just not understanding something here; what did you see the actual color ending up being with your code? Or did you go as far as to try it? – Maxwell Miller May 17 '15 at 03:09
  • The change from (0.55, 0.99, 0.0) to (0.59, 0.99, 0.0) is well visible and actually quite fast if one is to assume 60 fps animation. – derhass May 17 '15 at 12:57
  • I feel like I am missing something big. When I run the program, the window that opens stays green if the time increment is anything less than 1. If it is greater than 1, then I get the flashing effect. Anyone see what I am missing? I am sure the error is in my thinking. – Maxwell Miller May 17 '15 at 18:15
  • @MaxwellMiller: is your `time` variable actually a floating point type? – derhass May 17 '15 at 22:12
  • Yes, I have tried it both as a double and as a float. Even as a double or float, if the value is equal to 1.0f, it will change the color rapidly, but if the value is anything less than 1.0f, including 0.999999f, then it just stays green, and does not change at all. I really don't get it. – Maxwell Miller May 18 '15 at 01:58
  • @MaxwellMiller You're setting the clear color but are you actually calling `glClear(GL_COLOR_BUFFER_BIT);` anywhere? – Foggzie May 18 '15 at 20:33

I think you need to refresh the screen by calling glutSwapBuffers();. I have this code and I can change the background color without difficulty:

void RenderScene(void)
{
    // Clear the window with current clearing color
    //glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    time += 0.2f;
    const GLfloat color[] = { sin(time) * 0.5f + 0.5f,
                              cos(time) * 0.5f + 0.5f,
                              0.0f, 1.0f };
    glClearBufferfv(GL_COLOR, 0, color);

    GLfloat vRed[] = { 1.0f, 0.0f, 0.0f, 1.0f };
    shaderManager.UseStockShader(GLT_SHADER_IDENTITY, vRed);
    triangleBatch.Draw();

    // Perform the buffer swap to display the back buffer
    glutSwapBuffers();
}

I get these results (screenshot omitted).

Good Luck
  • I am actually calling swap buffers in my main loop, although I did consider that being where the issue was. Below I have the solution to what my issue was, but I appreciate the insight. – Maxwell Miller May 19 '15 at 14:27

Thanks for all the input. With much digging, I was able to find what my issue was. I thought it was a logic error, but it was not; it was a syntax error. Here is the code that produces the same results that you are seeing:

const GLfloat color[] = { F32(sin(curTime)) * 0.5f + 0.5f,
                          F32(cos(curTime)) * 0.5f + 0.5f,
                          0.0f, 1.0f };
glClearBufferfv(GL_COLOR, 0, color);

The difference is using (F32)sin.... or F32(sin(... I am not 100% sure why this works, but I think it has something to do with how the constructor is being called on the typedef that I have set up for the F32 type, which I used in the first place because of advice in the book Game Engine Architecture by Jason Gregory. Anyway, thanks for working through it with me.