In the following code, all I can see is that an image is read and then written back out. So how do the pixel values change so drastically? Apparently converting the PIL image object to a numpy array causes this, but I don't know why. I have read the docs for PIL images but found no reasonable explanation for this behavior.
import numpy as np
from PIL import Image

def _remove_colormap(filename):
    # Image.open keeps the PNG in 'P' (palette) mode, so np.array
    # returns the per-pixel palette indices, not RGB triples.
    return np.array(Image.open(filename))

def _save_annotation(annotation, filename):
    # The 2-D uint8 index array round-trips as a greyscale ('L') image.
    pil_image = Image.fromarray(annotation.astype(dtype=np.uint8))
    pil_image.save(filename)

def main():
    raw_annotation = _remove_colormap('2007_000032.png')
    _save_annotation(raw_annotation, '2007_000032_output.png')

if __name__ == '__main__':
    main()
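To see why the values change, here is a minimal sketch that builds a tiny 'P'-mode image by hand (the palette entries are my own illustration, not the full PASCAL VOC palette; VOC does map index 1 to dark red). It shows that np.array() on a palette image reads the raw index plane, while converting to RGB first applies the palette lookup:

```python
import numpy as np
from PIL import Image

# Build a tiny palette ('P' mode) image: every pixel stores an index,
# and a palette maps each index to an RGB color. Here index 1 maps to
# (128, 0, 0), matching the red class color in the question.
img = Image.new('P', (2, 2), color=1)
palette = [0, 0, 0,        # index 0 -> black
           128, 0, 0]      # index 1 -> dark red
palette += [0, 0, 0] * 254  # pad to 256 RGB entries
img.putpalette(palette)

# np.array() reads the raw palette indices, not the mapped colors:
indices = np.array(img)
print(indices)             # every pixel is 1 (the index), not [128, 0, 0]

# Converting to RGB first applies the palette lookup:
rgb = np.array(img.convert('RGB'))
print(rgb[0, 0])           # [128   0   0]
```

So opening the annotation PNG and converting it straight to a numpy array yields the class indices, which is exactly why [128, 0, 0] comes back as 1.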
Input image is,
Here is the output,
Note: the value in the red area of the input image is [128, 0, 0], while in the output image it is [1, 1, 1].
The actual source of the code is here.
Edit: As @taras made clear in his comment,

Basically, the palette is a list of 3 * 256 values in the form of 256 red values, 256 green values and 256 blue values. Your pil_image is an array of greyscale pixels, each taking a single value in the 0..255 range. When using 'P' mode, the pixel value k is mapped to the color (palette[k], palette[256 + k], palette[2*256 + k]). When using 'L' mode, the color is simply k, or (k, k, k) in RGB.
The segmentation annotations use a unique color for each object class, so the class index alone carries all the information. Since the color palette is only needed for visualization, we can simply get rid of it.