
I've been playing around with OpenGL for about a week now. After 2D, I'm now trying 3D. I want to reproduce the 3D scene you can see in the third video on http://johnnylee.net/projects/wii/.
I've had a hard time making everything work properly with textures and depth.

I've recently had two problems with roughly the same visual impact:

  • One with textures that do not blend well in 3D using the techniques I've found for 2D.
  • One with objects appearing in reverse depth order (back drawn over front), like the problem described here: Depth Buffer in OpenGL

I've solved both problems, but I would like to know whether I've got things right, especially for the second point.


For the first one, I think I've got it. I have an image of a round target, with alpha for everything outside the disc. It loads fine into OpenGL. Other targets behind it (due to my z-ordering problem) were hidden by the transparent regions of the naturally square quad I used to paint it.

The reason is that every part of the texture is assumed to be fully opaque as far as the depth buffer is concerned. Using glEnable(GL_ALPHA_TEST) together with glAlphaFunc(GL_GREATER, 0.5f) makes the alpha channel of the texture act as a per-pixel (boolean) opacity mask, and thus makes blending mostly unnecessary (because my image has boolean transparency anyway).

Supplementary question: By the way, is there a way to specify a different source for the alpha test than the alpha channel used for blending?


Second, I've found a fix to my problem. Before clearing the color and depth buffers, I set the default depth to 0 with glClearDepth(0.0f), and I used the "greater" depth function, glDepthFunc(GL_GREATER).

What looks strange to me is that the default clear depth is 1.0 and the default depth function is GL_LESS. I'm basically inverting both so that my objects don't get displayed in inverted order...

I've seen this hack nowhere, but on the other hand I've also seen nowhere objects getting drawn systematically in the wrong order, regardless of which order I draw them in!


OK, here's the code (stripped down, though not too much, I hope) that now works the way I want:

    int main(int argc, char** argv) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
        glutInitWindowSize(600, 600); // Size of the OpenGL window
        glutCreateWindow("OpenGL - 3D Test"); // Creates OpenGL Window
        glutDisplayFunc(display);
        glutReshapeFunc(reshape);

        PngImage* pi = new PngImage(); // custom class that properly loads PNGs with transparency
        pi->read_from_file("target.png");
        GLuint texs[1];
        glGenTextures(1, texs);
        target_texture = texs[0];
        glBindTexture(GL_TEXTURE_2D, target_texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, pi->getGLInternalFormat(), pi->getWidth(), pi->getHeight(), 0, pi->getGLFormat(), GL_UNSIGNED_BYTE, pi->getTexels());

        glutMainLoop(); // never returns!
        return 0;
    }

    void reshape(int w, int h) {
        glViewport(0, 0, (GLsizei) w, (GLsizei) h);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluOrtho2D(-1, 1, -1, 1);
        gluPerspective(45.0, w/(GLdouble)h, 0.5, 10.0);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
    }

    void display(void) {
        // The starred *** lines in this function make the (ugly?) fix for my second problem
        glClearColor(0, 0, 0, 1.00);
        glClearDepth(0);          // ***
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glShadeModel(GL_SMOOTH);
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_GREATER);  // ***

        draw_scene();

        glutSwapBuffers();
        glutPostRedisplay();
    }

    void draw_scene() {
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(1.5, 0, -3, 0, 0, 1, 0, 1, 0);

        glColor4f(1.0, 1.0, 1.0, 1.0);
        glEnable(GL_TEXTURE_2D);
        // The following 2 lines fix the first problem
        glEnable(GL_ALPHA_TEST);       // treats highly transparent parts
        glAlphaFunc(GL_GREATER, 0.2f); // as nonexistent (not drawn)
        glBindTexture(GL_TEXTURE_2D, target_texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // Drawing a textured target
        float x = 0, y = 0, z = 0, size = 0.2;
        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f);
        glVertex3f(x-size, y-size, z);
        glTexCoord2f(1.0f, 0.0f);
        glVertex3f(x+size, y-size, z);
        glTexCoord2f(1.0f, 1.0f);
        glVertex3f(x+size, y+size, z);
        glTexCoord2f(0.0f, 1.0f);
        glVertex3f(x-size, y+size, z);
        glEnd();
        // Drawing a textured target behind the first one (but drawn after it)
        x = 0; y = 0; z = 2; size = 0.2;
        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f);
        glVertex3f(x-size, y-size, z);
        glTexCoord2f(1.0f, 0.0f);
        glVertex3f(x+size, y-size, z);
        glTexCoord2f(1.0f, 1.0f);
        glVertex3f(x+size, y+size, z);
        glTexCoord2f(0.0f, 1.0f);
        glVertex3f(x-size, y+size, z);
        glEnd();
    }
ofavre

4 Answers


Normally the depth clear value is 1 (effectively infinity) and the depth pass function is LESS because you want to simulate the real world where you see things that are in front of the things behind them. By clearing the depth buffer to 1, you are essentially saying that all objects closer than the maximum depth should be drawn. Changing those parameters is generally not something you would want to do unless you really understand what you are doing.
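To see why flipping both the clear value and the comparison "works", the depth test can be simulated for a single pixel in plain C (a toy model of the fixed-function test, not actual OpenGL code):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy simulation of the fixed-function depth test for one pixel.
   With the defaults (clear depth 1.0, GL_LESS) nearer fragments win;
   inverting both (clear depth 0.0, GL_GREATER) makes farther fragments
   win, which only looks right if depth is also produced inverted. */
typedef bool (*depth_func)(float incoming, float stored);

static bool depth_less(float in, float st)    { return in < st; }
static bool depth_greater(float in, float st) { return in > st; }

/* Returns the depth left in the buffer after drawing fragments in order. */
static float run_depth_test(depth_func f, float clear_depth,
                            const float *frags, int n) {
    float stored = clear_depth;
    for (int i = 0; i < n; i++)
        if (f(frags[i], stored))
            stored = frags[i]; /* fragment passes: its depth is written */
    return stored;
}
```

Drawing a far fragment (0.8) before a near one (0.3) leaves 0.3 in the buffer with the defaults, and 0.8 with the inverted setup, which is why the two hacks cancel out an inverted depth.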

With the camera parameters you are passing to gluLookAt and the positions of your objects, the z=2 quad will be further from the camera than the z=0 object. What are you trying to accomplish such that this doesn't seem correct?

The standard approach to achieve order-correct alpha blending is to render all opaque objects, then render all transparent objects back to front. The regular/default depth function would always be used.
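The back-to-front ordering can be sketched in plain C; the `Object` struct and function names here are made up for illustration, not part of the question's code:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical object record: position plus whatever is needed to draw it. */
typedef struct { float x, y, z; } Object;

static float g_eye_x, g_eye_y, g_eye_z; /* camera position for the sort */

static float dist2(const Object *o) {
    float dx = o->x - g_eye_x, dy = o->y - g_eye_y, dz = o->z - g_eye_z;
    return dx*dx + dy*dy + dz*dz;
}

/* Back to front: the farther object compares "smaller" so it sorts first. */
static int cmp_back_to_front(const void *a, const void *b) {
    float da = dist2((const Object *)a), db = dist2((const Object *)b);
    return (da < db) - (da > db);
}

static void sort_transparent(Object *objs, size_t n,
                             float ex, float ey, float ez) {
    g_eye_x = ex; g_eye_y = ey; g_eye_z = ez;
    qsort(objs, n, sizeof *objs, cmp_back_to_front);
    /* ...then draw opaque objects first, and these in array order,
       typically with depth writes disabled while blending. */
}
```

With the question's camera at (1.5, 0, -3), the quad at z=2 is farther than the one at z=0, so it would be sorted first and drawn first.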

Also note that you may get some weird behavior from the way you are setting up your projection matrix. Normally you would call gluOrtho OR gluPerspective, but not both. Calling both multiplies the two projection matrices together, which is probably not what you want.

Alan
  • I'll answer paragraph by paragraph: 1) I only inverted it to invert the inversion! 2) I'm testing the rendering of two objects, one behind the other; the one at z=2 shouldn't be visible, but it was before the fix from paragraph 1. 3) I knew that as soon as I read it, but I don't really need it here. 4) *That was the trick! It now works without having to invert the depth!!* It was basically 2D code that I adapted to draw 3D, and I left that stupid line in. Thanks! – ofavre Nov 16 '10 at 22:22

The formulas implemented by the standard glOrtho function map zNear to -1 and zFar to +1 in normalized device coordinates, which are by default mapped to window depths in [0, 1] (changeable via glDepthRange in the fixed pipeline; I'm not sure that function is still supported). The depth test works in those terms. The way around this is to either assume that zNear is the plane furthest from the projection plane, or to generate the matrix yourself, which you would need anyway if you want to get rid of the legacy pipeline.
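The mapping described above can be checked in plain C by reproducing the depth row of the glOrtho matrix (a toy calculation following the glOrtho man page, not GL code):

```c
#include <assert.h>
#include <math.h>

/* Depth row of the glOrtho matrix (see the glOrtho reference page):
   ndc_z = -2/(f-n) * eye_z - (f+n)/(f-n), with the eye looking down -Z,
   so the near plane sits at eye_z = -n and the far plane at eye_z = -f. */
static double ortho_ndc_z(double eye_z, double n, double f) {
    return -2.0 / (f - n) * eye_z - (f + n) / (f - n);
}

/* Default glDepthRange(0, 1): window depth = (ndc + 1) / 2. */
static double window_depth(double ndc_z) { return (ndc_z + 1.0) / 2.0; }
```

With n = 0.5 and f = 10, a point on the near plane (eye_z = -0.5) lands at NDC -1 (window depth 0) and one on the far plane (eye_z = -10) at NDC +1 (window depth 1), which is exactly the [0, 1] window mapping the depth test compares against.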

Swift - Friday Pie

Supplementary question: By the way, is there a way to specify a different source for the alpha test than the alpha channel used for blending?

Yes, if you use a shader you can compute the alpha value of the output fragment yourself.
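For example, a fragment shader could take the "test" alpha from a separate texture and `discard` on it, while the color texture's own alpha still drives blending. A sketch (GLSL 1.20-era, to match the fixed-function code; `uColorTex` and `uMaskTex` are made-up uniform names):

```glsl
uniform sampler2D uColorTex; // hypothetical: the target image
uniform sampler2D uMaskTex;  // hypothetical: a separate coverage mask

void main() {
    vec4 color = texture2D(uColorTex, gl_TexCoord[0].st);
    float mask = texture2D(uMaskTex, gl_TexCoord[0].st).a;
    if (mask <= 0.2)   // plays the role of glAlphaFunc(GL_GREATER, 0.2f)
        discard;       // fragment neither colored nor depth-written
    gl_FragColor = color; // color.a is still the one used for blending
}
```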

elmattic
  • I've added mouse selection using `glRenderMode(GL_SELECT)`, but I ran into one problem: the `ALPHA_TEST` is not applied in this mode (whether blending is enabled or not). This post http://stackoverflow.com/questions/1189891/opengl-selection-with-alpha-test#1194114 seems to state otherwise, although the author didn't test it. If it helps, I have an ATI Mobility Radeon HD 2600 card, under Kubuntu Lucid Lynx 10.04.1 LTS, using the fglrx proprietary drivers version 2:8.723.1-0ubuntu5. (I mention it just in case, because I've noticed that polygon smoothing works well with an older nVidia card but not this ATI card...) – ofavre Nov 16 '10 at 22:33
  • I don't understand what you want to do with the alpha. – elmattic Nov 17 '10 at 08:33

Regarding the second problem: There is very probably something wrong with your modelViewProjection matrix.

I've had the same problem (and could "fix" it with your hack), which was caused by a matrix that was somehow subtly wrong. I solved it by implementing my own matrix generation.

fzwo