I am trying to understand how the data obtained from XGetImage is laid out in memory:
XImage *img = XGetImage(display, root, 0, 0, width, height, AllPlanes, ZPixmap);
Now suppose I want to decompose each pixel value into its red, green and blue channels. How can I do this in a portable way? The following is an example, but it depends on a particular configuration of the X server and does not work in every case:
for (int x = 0; x < width; x++)
    for (int y = 0; y < height; y++) {
        unsigned long pixel = XGetPixel(img, x, y);
        /* Hard-coded shifts: this assumes the channels are laid out as 0x00RRGGBB. */
        unsigned char blue  = pixel & img->blue_mask;
        unsigned char green = (pixel & img->green_mask) >> 8;
        unsigned char red   = (pixel & img->red_mask) >> 16;
        //...
    }
In the above example I am assuming a particular order of the RGB channels in pixel and also that pixels are 24 bits deep: in fact, I have img->depth = 24 and img->bits_per_pixel = 32 (the screen also has a 24-bit depth). But this is not the general case.
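For reference, this is how I currently inspect the layout at runtime. It is just a sketch that prints the XImage fields I believe are relevant, and I am not sure they are enough in every case:

#include <stdio.h>
#include <X11/Xlib.h>

/* Print the XImage fields that seem to describe how img->data is laid out. */
static void dump_image_format(const XImage *img)
{
    printf("depth          = %d\n", img->depth);
    printf("bits_per_pixel = %d\n", img->bits_per_pixel);
    printf("bytes_per_line = %d\n", img->bytes_per_line);
    printf("byte_order     = %s\n",
           img->byte_order == LSBFirst ? "LSBFirst" : "MSBFirst");
    printf("red_mask       = 0x%08lx\n", img->red_mask);
    printf("green_mask     = 0x%08lx\n", img->green_mask);
    printf("blue_mask      = 0x%08lx\n", img->blue_mask);
}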
As a second step I want to get rid of XGetPixel and access img->data directly. The first thing I need to know is whether there is anything in Xlib that gives me exactly the information I need to interpret how the image is built starting from the img->data field, namely (see also the sketch after this list):
- the order of the R, G, B channels in each pixel;
- the number of bits for each pixel;
- the number of bits for each channel;
- if possible, a corresponding FOURCC.
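To make the question concrete, this is the kind of decoding I have in mind, derived only from the masks. It is a sketch that assumes img->red_mask, img->green_mask and img->blue_mask completely describe the channel layout (each mask being a single contiguous run of bits), which is exactly the part I would like to confirm:

/* Derive the bit position (shift) and bit count of one channel from its
   mask, assuming the mask is a single contiguous run of bits. */
static void decode_mask(unsigned long mask, int *shift, int *bits)
{
    int s = 0, b = 0;
    if (mask != 0) {
        while (!(mask & 1UL)) { mask >>= 1; s++; }  /* skip trailing zeros   */
        while (mask & 1UL)    { mask >>= 1; b++; }  /* count contiguous ones */
    }
    *shift = s;
    *bits  = b;
}

/* Usage with the image obtained above: */
int r_shift, r_bits, g_shift, g_bits, b_shift, b_bits;
decode_mask(img->red_mask,   &r_shift, &r_bits);
decode_mask(img->green_mask, &g_shift, &g_bits);
decode_mask(img->blue_mask,  &b_shift, &b_bits);

unsigned long pixel = XGetPixel(img, x, y);
unsigned long red   = (pixel & img->red_mask)   >> r_shift;
unsigned long green = (pixel & img->green_mask) >> g_shift;
unsigned long blue  = (pixel & img->blue_mask)  >> b_shift;

Even with this, I still do not see how to map the result to a FOURCC, or how byte_order comes into play once I start reading img->data directly instead of going through XGetPixel.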