I'm making an image and video converter in Python. The basic approach is:
- Create a list which stores the 292 colors that comprise my output 'palette.'
- Create a dict which serves as a cache of previous color comparisons
- Shrink image/frame to smaller dimensions (max width: 132)
- Iterate over every pixel in the image.
- For each pixel, check if its color is in the cache. If so, use the palette color defined in the cache.
- If the pixel's color is not found in the cache, compare it to each of the 292 colors in the palette list using a variation of the algorithm here.
- Choose the palette color with the lowest distance.
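For reference, the distance step above could look something like the weighted-Euclidean "redmean" metric, which is a common cheap approximation of perceptual color difference. This is only an illustrative stand-in; the actual algorithm behind the link may differ:

```python
import math

def color_distance(c1, c2):
    """Weighted Euclidean ("redmean") distance between two (r, g, b) tuples.
    A common low-cost approximation of perceptual difference -- shown here
    only as an example; the algorithm referenced in the question may differ."""
    rmean = (c1[0] + c2[0]) / 2
    dr = c1[0] - c2[0]
    dg = c1[1] - c2[1]
    db = c1[2] - c2[2]
    # Red and blue channels are weighted by where the colors sit on the red axis.
    return math.sqrt(
        (2 + rmean / 256) * dr * dr
        + 4 * dg * dg
        + (2 + (255 - rmean) / 256) * db * db
    )
```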
So I end up with a for loop that calls the color comparison function each time. Here's an approximation:
possibles = [ list of color dicts here ]
match_cache = {}

def color_comparison(pixel, possibles):
    closest_distance = float("inf")
    closest_possible = None
    for possible in possibles:
        d = color_distance(pixel, possible)
        if d < closest_distance:
            closest_distance = d
            closest_possible = possible
    key = pixel.makeHash()  # "key", so we don't shadow the built-in hash()
    match_cache[key] = closest_possible
    return closest_possible

def image_convert(image):
    output = []
    for pixel in image:
        key = pixel.makeHash()
        if key in match_cache:
            output.append(match_cache[key])
        else:
            output.append(color_comparison(pixel, possibles))
    return output
My question is: how can I make this faster? Is there a better approach than iterating over every palette color for every pixel?
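For scale, one direction I've been considering is vectorizing the brute-force search with NumPy, computing the distance from every pixel to every palette color in a single broadcasted operation instead of a Python loop per pixel. This sketch assumes pixels and palette entries are plain RGB triples and uses squared Euclidean distance as a stand-in for the weighted metric:

```python
import numpy as np

def quantize(pixels, palette):
    """Map each pixel to the index of its nearest palette color.

    pixels:  (N, 3) array-like of RGB values
    palette: (P, 3) array-like of RGB values
    Uses plain squared Euclidean distance as a stand-in metric.
    """
    pixels = np.asarray(pixels, dtype=np.float64)
    palette = np.asarray(palette, dtype=np.float64)
    # Broadcast to (N, P, 3) and sum squared differences over the color axis.
    diffs = pixels[:, None, :] - palette[None, :, :]
    dist2 = np.einsum("npc,npc->np", diffs, diffs)
    # Index of the nearest palette entry for each pixel, shape (N,).
    return dist2.argmin(axis=1)
```

Whether this beats the per-pixel cache presumably depends on how many distinct colors the shrunken frames contain.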