
I wish to capture an image I rendered in OpenGL. I use glReadPixels and then save the image with CImg. Unfortunately, the result is wrong. See below. The image on the left is correct. I captured it with GadWin PrintScreen. The image on the right is incorrect. I created it with glReadPixels and CImg:

[Two screenshots: left, the correct image captured with GadWin PrintScreen; right, the garbled image produced by glReadPixels and CImg]

I've done a lot of Web research on what might be wrong, but I'm out of avenues to pursue. Here is the code that captures the image:

void snapshot() {
    int width = glutGet(GLUT_WINDOW_WIDTH);
    int height = glutGet(GLUT_WINDOW_HEIGHT);
    glPixelStorei(GL_PACK_ROW_LENGTH, 0);
    glPixelStorei(GL_PACK_SKIP_PIXELS, 0);
    glPixelStorei(GL_PACK_SKIP_ROWS, 0);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    int bytes = width*height*3; //Color space is RGB
    GLubyte *buffer = (GLubyte *)malloc(bytes);

    glFinish();
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);
    glFinish();

    CImg<GLubyte> img(buffer,width,height,1,3,false);
    img.save("ScreenShot.ppm");
    exit(0);
}

Here is where I call the snapshot method:

void display(void) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawIndividual(0);
    snapshot();
    glutSwapBuffers();
}

Following up on a comment, I grabbed the bit depth and printed to the console. Here are the results:

redbits=8
greenbits=8
bluebits=8
depthbits=24
ahoffer

2 Answers


You're clearly getting the image from the framebuffer, but the row length seems off. You set the packing alignment (good), but there are more settable parameters:

  • GL_PACK_ROW_LENGTH
  • GL_PACK_SKIP_PIXELS
  • GL_PACK_SKIP_ROWS

Set them to 0 for tight packing. Also, the glFinish() after glReadPixels() is superfluous. glReadPixels doesn't work asynchronously – it returns only after the read data has been copied into the target buffer (or, in the case of writing to a PBO, fetching the data from the PBO will wait for the operation to finish).

datenwolf
  • Thanks datenwolf. I set the PACK parameters to 0. (I also removed the redundant glFinish() call.) The output image is no different. It still looks the same as in the question. Could CImg be at fault? Or do you think the glReadPixels call is the problem? – ahoffer Jul 27 '11 at 17:09
  • 1
    @ahoffer: Yes CImg may be the culprit. I didn't notice it first, but you're template instancing it with but you read unsigned bytes. However I don't know CImg, so this may, or may not be the reason. Try changing to CImg. – datenwolf Jul 29 '11 at 06:24
  • Thanks. I had fixed it in my code, but forgot to update StackOverflow. I met with a colleague today who told me to stop being lazy and set up a decent debug environment. I'll go back and use glDrawPixels to render what is in my pixel buffer so I can compare the images in real time. If that works, then I know I'm using the wrong CImg parameters. I'll also just stick to rendering a single triangle or square. – ahoffer Jul 29 '11 at 20:39
  • 1
    @ahoffer: For debugging purposes the best thing to render is a single, 1px wide, vertical, white line on black background. Doing it that way literally tells you, what's going wrong. Technically this is sending an impulse through the processing chain, giving you an impulse response. You expect an identity mapping, and if the impulse comes out unaltered, everything matches. If however the pulse changes, the changes tell you, what kind of error you're facing. – datenwolf Jul 29 '11 at 23:42

CImg does not interleave its colors. That is, three RGB pixels would be stored linearly as:

R1, R2, R3, ..., G1, G2, G3, ..., B1, B2, B3, ...

However, OpenGL's glReadPixels and glDrawPixels expect interleaved color components like this:

R1, G1, B1, R2, G2, B2, ...

Additionally, OpenGL puts the origin (0,0) at the lower left corner of an image. CImg uses the more common convention, where the origin is at the top left of the image.

Here is a routine I wrote to convert the interleaved colors to CImg's channel-oriented format.

void convertToNonInterleaved(int w, int h, unsigned char* tangled, unsigned char* untangled) {
    //Take a buffer in the format R1 G1 B1 R2 G2 B2... and re-write it
    //in the format R1 R2... G1 G2... B1 B2...
    //Assume 8 bit values for red, green and blue color channels.
    //Assume there are no other channels
    //tangled is a pointer to the input string and untangled 
    //is a pointer to the output string. This method assumes that 
    //memory has already been allocated for the output string.

    int numPixels = w*h;
    int numColors = 3;
    for(int i=0; i<numPixels; ++i) {
        int indexIntoInterleavedTuple = numColors*i;
        //Red
        untangled[i] = tangled[indexIntoInterleavedTuple];
        //Green
        untangled[numPixels+i] = tangled[indexIntoInterleavedTuple+1];
        //Blue
        untangled[2*numPixels+i] = tangled[indexIntoInterleavedTuple+2];
    }
}

I probably should have added code to account for the change in origin, but I felt lazy and decided to use CImg to do that. Once this routine is run, this code creates the image object:

unsigned char* p = (unsigned char *)malloc(bytes);
convertToNonInterleaved(width, height, buffer, p);
CImg<unsigned char> img(p, width, height,1,3);
free(p);
img.mirror('y');
CImgDisplay main_disp(img,"Snapshot");
while (!main_disp.is_closed() ) {
    main_disp.wait();
}

And as far as I can tell, I do not have to set any packing parameters. Here is the code I use to grab the pixels from the OpenGL render target:

bytes = width*height*3; //Color space is RGB
if(buffer) 
    free(buffer);
buffer = (GLubyte *)malloc(bytes);
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);
ahoffer