
I'm trying to speed up a simple symmetrically centered image downsampling algorithm in Python. I've coded up a naive approach as a lower-bound benchmark; however, I'd like to get this working significantly faster.

For simplicity's sake, my image is a circle rendered at a resolution of 4608x4608 (I'll be working with resolutions on this scale), which I would like to downsample by a factor of 9 (i.e. 512x512). Below is the code that creates the image at high resolution and downsamples it by a factor of 9.

All this basically does is map each pixel from high-res space onto one in low-res space (symmetrically around the centroid) and sum all the high-res pixels in a given area into that one low-res pixel.
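To make the index arithmetic below concrete: with dim = 4608, cen = 2304 and ds = 9, the offset works out to (0-2304+4+4608)//9 = 256, and the bin formula (x-cen+ds//2+dim)//ds - offset evaluates like this:

# x = 0    -> 2308//9 - 256 = 256 - 256 = 0    (edge bin; only x = 0..4 land here)
# x = 2304 -> 4612//9 - 256 = 512 - 256 = 256  (centre pixel -> centre bin)
# x = 4607 -> 6915//9 - 256 = 768 - 256 = 512  (edge bin; only x = 4604..4607 land here)

So the output is actually 513x513 rather than exactly 512x512, and the edge bins on each axis sum fewer than 9 high-res pixels each.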

import numpy as np
import matplotlib.pyplot as plt
import time

print('rendering circle at high res')

# image dimensions.
dim = 4608

# generate high sampled image.
xx, yy = np.mgrid[:dim, :dim]
highRes = (xx - dim//2) ** 2 + (yy - dim//2) ** 2
print('render done')

print('downsampling')
t0 = time.time()

# center of the high sampled image.
cen = dim//2
ds = 9

# calculate offsets.
x = 0
offset = (x-cen+ds//2+dim)//ds

# calculate the downsample dimension.
x = dim-1
dsdim = 1 + (x-cen+ds//2+dim)//ds - offset

# generate a blank downsampled image.
lowRes = np.zeros((dsdim, dsdim))

# scatter-add each high-res pixel into its low-res bin.
for y in range(dim):
    yy = (y-cen+ds//2+dim)//ds - offset
    for x in range(dim):
        xx = (x-cen+ds//2+dim)//ds - offset
        lowRes[yy, xx] += highRes[y, x]

t1 = time.time()
total = t1-t0
print('time taken %f seconds' % total)

I have numpy built against BLAS and LAPACK on my machine, and I expect a significant gain can be had by moving the work out of Python loops and into vectorized numpy operations; however, I'm a bit stuck on how to proceed. This is my progress so far.

import numpy as np
import matplotlib.pyplot as plt
import time

print('rendering circle at high res')

# image dimensions.
dim = 4608

# generate high sampled image.
xx, yy = np.mgrid[:dim, :dim]
highRes = (xx - dim//2) ** 2 + (yy - dim//2) ** 2
print('render done')

print('downsampling')
t0 = time.time()

# center of the high sampled image.
cen = dim//2
ds = 9

# calculate offsets.
x = 0
offset = (x-cen+ds//2+dim)//ds

# calculate the downsample dimension.
x = dim-1
dsdim = 1 + (x-cen+ds//2+dim)//ds - offset

# generate a blank downsampled image.
lowRes = np.zeros((dsdim, dsdim))

ar = np.arange(0, dim)
x, y = np.meshgrid(ar, ar)

# calculate symmetrically centered bin positions.
yy = (y-cen+ds//2+dim)//ds - offset
xx = (x-cen+ds//2+dim)//ds - offset

# to-do: code to map xx, yy into lowRes

t1 = time.time()
total = t1-t0

print('time taken %f seconds' % total)

This current version is about 16x faster on my machine, but it is not complete: I'm not exactly sure how to efficiently accumulate the high-res pixels into lowRes using the xx and yy index arrays.
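One direction I haven't been able to verify yet: numpy's np.add.at does an unbuffered scatter-add, so perhaps something like this could stand in for the to-do line (untested sketch on my part):

# sketch: scatter-add every high-res pixel into its low-res bin.
# np.add.at accumulates correctly even when (yy, xx) contains repeated
# index pairs, which plain fancy-indexed assignment would not.
np.add.at(lowRes, (yy, xx), highRes)

# possibly faster alternative sketch: flatten the 2D bin indices and
# let bincount do the accumulation.
# flat = (yy*dsdim + xx).ravel()
# lowRes = np.bincount(flat, weights=highRes.ravel(),
#                      minlength=dsdim*dsdim).reshape(dsdim, dsdim)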

There might be another way to speed it up entirely, e.g. zero-padding the image and block-summing with a reshape (sketched below)? Not sure... Thanks!
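That block-sum idea, concretely (untested sketch; the pad widths of 4 and 5 follow from the index formula above, since bin 0 spans x = -4..4 and bin 512 spans x = 4604..4612):

# sketch: zero-pad so every low-res bin covers a full ds x ds block,
# then block-sum with a reshape: 4 + 4608 + 5 = 4617 = 513*9.
padded = np.pad(highRes, ((4, 5), (4, 5)), mode='constant')
lowRes = padded.reshape(dsdim, ds, dsdim, ds).sum(axis=(1, 3))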

Ian Daz
  • For a given pixel in the downsampled image, what pixels in the full-res image are to be summed? I'm just not 100% sure what you mean by "symmetrically centered." – derricw Apr 09 '15 at 17:57
  • Symmetric centering is correct and works; what I'm not sure about is a fast way to map the pixel positions that are defined in xx and yy from high res to low res. – Ian Daz Apr 09 '15 at 18:43
  • Have you noticed the artifact at the bottom of your output image when you run the top code segment? In fact, I'm getting lots of artifacts on the edges of my output image from this algorithm when I change the input image dimension to other sizes. This is why I'm asking how exactly you want the summation to work. – derricw Apr 09 '15 at 18:52
  • I think you can downsample each axis independently using `numpy.add.reduceat`. –  Apr 09 '15 at 19:03
  • When I search for "symmetrically centered downsampling" the first result is this question. If you clarify what your downsampling method is trying to do (or avoid doing) you are likely to find that you just want to use a standard interpolation function with an appropriate order of interpolation. I think you may just be unfamiliar with the standard terms and concepts. At "worst", if you really do have niche requirements, you can use scikit-image's [`warp`](http://scikit-image.org/docs/dev/api/skimage.transform.html#skimage.transform.warp) function, passing in the inverse coordinate transformation. – YXD Apr 09 '15 at 20:34
  • Also, in resampling you need to iterate over the output pixels and sample the input image (with the interpolation method of your choice) using the inverse transform. This is seen in the scikit-image function parameters. See some notes here: https://www.cs.princeton.edu/courses/archive/fall00/cs426/lectures/warp/warp.pdf – YXD Apr 09 '15 at 20:42
