
Hello people from Stack Overflow, what I want to do is process an image with a pixel-by-pixel transformation to make it darker. The idea is really simple; I have to do something like this:

R *= factor
G *= factor 
B *= factor

Where "factor" is a float number between 0 and 1, and R, G, B are the Rgb numbers for each pixel. To do so, I load an "RGB" file that has three numbers for each pixel from 0 to 255 to an array of char pointers.

char *imagen1, *imagen; 
int resx, resy;      //resolution

imagen1 = malloc....;
//here I load a normal image buffer to imagen1

float factor = 0.5f; // it can be any number between 0 and 1

for(unsigned int i=0; i< 3*resx*resy; i++)
  imagen[i] = (char) (((int) imagen1[i])*factor);

gtk_init (&argc, &argv);
window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
g_signal_connect (window, "destroy", G_CALLBACK (gtk_main_quit), NULL);
pixbuf = gdk_pixbuf_new_from_data (buffer, GDK_COLORSPACE_RGB, FALSE, 8,
                                   resx, resy, (resx)*3, NULL, NULL);
image = gtk_image_new_from_pixbuf (pixbuf);
gtk_container_add(GTK_CONTAINER (window), image);

pixbuf = gdk_pixbuf_new_from_data (imagen, GDK_COLORSPACE_RGB, FALSE, 8,
                                   resx, resy, (resx)*3, NULL, NULL);
gtk_image_set_from_pixbuf(image, pixbuf);

Ignore it if the GTK part is not properly written; it displays "imagen". If factor is 1, the image is displayed correctly, with its real colors. The problem is that when I use a number between 0 and 1, the displayed image gets very weird colors, as if it were "saturated" or the color depth were lower. The further factor is from 1, the worse the image gets. I don't know why this happens. I thought GTK normalized the RGB values and that the color depth decreased for that reason, but I tried adding some white (255, 255, 255) and black (0, 0, 0) pixels and the problem persists. I would like to know what I am doing wrong. Sorry for my English, and thank you in advance!

liberforce
Agustín

1 Answer


The colour component of a pixel is an 8-bit char that you cast to an int. You then multiply by a float, so the int first gets converted to a float and the result is a float. That float is then implicitly converted down to a char, and, going by the representation of a float, it will look nothing like the int you really want it to look like.

You'll want to cast this result to an int before casting back to char.
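
For illustration, a minimal sketch of that step; the helper name darken, the src/dst split, and the use of unsigned char for the 0-255 components are my own additions, not part of the original code:

#include <stddef.h>

/* Hypothetical helper: darken an interleaved 8-bit RGB buffer.
   The components are treated as unsigned 0-255 values; the scaled
   result is held in an int before the final narrowing cast to a byte. */
static void darken(const unsigned char *src, unsigned char *dst,
                   size_t n_bytes, float factor)
{
    for (size_t i = 0; i < n_bytes; i++) {
        int scaled = (int)(src[i] * factor);  /* explicit int intermediate */
        dst[i] = (unsigned char) scaled;      /* narrow back to 8 bits */
    }
}

/* Usage sketch, reusing the question's buffers:
   darken((const unsigned char *) imagen1, (unsigned char *) imagen,
          3 * (size_t) resx * resy, 0.5f); */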

Paul Childs