
I'm writing a script to check whether an image is normalized. I'm using the Python PNG module (PyPNG) to analyze the image. To test it, I created a 16-bit image in Photoshop consisting of a 2-pixel line with one black and one white pixel. My script correctly identifies the black pixel (0), but for the white pixel it reports a different value (65533) than I was expecting (65535).

I can't understand why this happens. Is there something wrong with my script, or is it related to the way Photoshop saves the image?

Minimalistic test png image: https://i.stack.imgur.com/UgfhF.png

Script:

#!/usr/bin/python

import sys
import png # https://pypi.python.org/pypi/pypng

if len(sys.argv) != 2:
    print "Invalid number of arguments (",len(sys.argv),").\nUsage: python getMinMaxColor.py png_file"
    sys.exit(-1)
pngFilePath = sys.argv[1]

f = open(pngFilePath, 'rb')
r = png.Reader(file=f)
k = r.read()  # returns (width, height, row iterator, info dict)

# Default to 16-bit if the bit depth is missing from the PNG metadata
bitDepth = 16
if k[3]['bitdepth'] is not None:
    bitDepth = k[3]['bitdepth']

absMaxColor = 2**bitDepth-1

maxColor = -1
minColor = absMaxColor+1
print "Content:"
for line in k[2]:
    for color in line:
        print color
        if (color > maxColor):
            maxColor = color
        if (color < minColor):
            minColor = color

f.close()

print "\n"

print "Min Color:", minColor
print "Max Color:", maxColor, "( max:", absMaxColor, ")"
if minColor == 0 and maxColor == absMaxColor:
    print "Image is normalized"
else:
    print "Image is not normalized"

1 Answer


It seems the PNG file really does have 65533 stored for the white pixel instead of 65535. I assume this has to do with the fact that Photoshop's "16 bit mode" actually works with only 15 bits of precision internally (an integer range of 0 to 32768), so a small inaccuracy can be introduced when saving 16-bit greyscale images.
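For illustration, here is a minimal sketch of that hypothesis. It assumes Photoshop keeps its 16-bit data in an internal 0..32768 range and that the white pixel ended up at 32767 (the 15-bit maximum) before being rescaled to the full 16-bit range on export; both of those values are assumptions, not confirmed Photoshop behaviour:

# Hypothetical rescaling from an internal 0..32768 range to the full
# 16-bit 0..65535 range on export -- an assumption for illustration only.
internal_max = 32768   # assumed internal white point in "16 bit mode"
white_pixel = 32767    # assumed internal value of the white pixel

exported = int(round(white_pixel * 65535.0 / internal_max))
print exported         # 65533, matching the value read from the file

If something like that is happening, the check in the script would need a small tolerance rather than an exact comparison against 2**bitDepth-1.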
