
If I were building a web service that used a number of photos to illustrate the service, it would be useful to detect automatically whether those photos are in focus.

Is there any way of doing this programmatically? (Even better, is there an open source implementation of such a routine?)

Tom Morris
  • Perhaps add the language you are working in – Oskar Kjellin Sep 25 '10 at 11:02
  • I don't think it really matters what language is being used in this question; this is more of a mathematical problem. (Unless, of course, the intended language is unusably slow for number-crunching) – Matti Virkkunen Sep 25 '10 at 11:05
  • @Matti but he is also looking for an open source implementation, which might be more suitable for certain languages. Also, some frameworks are more suited for this. – Oskar Kjellin Sep 25 '10 at 11:12
  • The web service is built in Ruby on Rails, but if I were to implement this, it'd be as a background job and could be written in pretty much any language that can run on a Linux box (either by shelling out to a subprocess or by using something like RubyInline to compile C code into a Ruby class). – Tom Morris Sep 25 '10 at 22:00

2 Answers


How do you know an image is in focus? You recognize the object, of course, but more generally it's because the image has detail. Detail typically means drastic changes in color over a short range of pixels. You can find plenty of edge detection algorithms via Google. Without giving it much thought:

edgePixelCount = 0;

for each pixel in image
{
    // Collapse the RGB channels into a single brightness value
    mixed = pixel.red + pixel.blue + pixel.green;

    for each adjacentPixel in image.adjacentPixels(pixel)
    {
        adjacentMixed =
            adjacentPixel.red +
            adjacentPixel.blue +
            adjacentPixel.green;

        // A large jump in brightness between neighbours counts as an edge
        if (abs(adjacentMixed - mixed) > EDGE_DETECTION_THRESHOLD)
        {
            edgePixelCount++;
            break;
        }
    }
}

// Enough edge pixels overall suggests the image is in focus
focused = (edgePixelCount > NUMBER_OF_EDGE_PIXELS_THRESHOLD);

Note: you'd probably need to compare "adjacent pixels" at some distance, not just immediately neighbouring pixels. Even in focus, high-res images often have smooth gradients between directly adjacent pixels.
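
For reference, here's a rough, runnable version of the same idea as a Python sketch using Pillow and NumPy. Both threshold values here are made-up starting points, not recommendations, and would need tuning against your own photos.

# Rough Python version of the pseudocode above (Pillow + NumPy).
# Both thresholds are arbitrary starting values and need tuning.
from PIL import Image
import numpy as np

EDGE_DETECTION_THRESHOLD = 60      # minimum brightness jump between neighbours
EDGE_PIXEL_RATIO_THRESHOLD = 0.02  # fraction of edge pixels to call an image "focused"

def looks_focused(path):
    # Collapse the RGB channels into a single brightness value per pixel
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.int32)
    mixed = img.sum(axis=2)

    # Compare each pixel with its right and lower neighbours, a cheap
    # stand-in for the "each adjacent pixel" loop in the pseudocode
    dx = np.abs(np.diff(mixed, axis=1))
    dy = np.abs(np.diff(mixed, axis=0))
    edge_pixels = (dx > EDGE_DETECTION_THRESHOLD).sum() + (dy > EDGE_DETECTION_THRESHOLD).sum()

    return edge_pixels / mixed.size > EDGE_PIXEL_RATIO_THRESHOLD

print(looks_focused("photo.jpg"))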

PatrickV
  • I once implemented an edge detect with an off-by-one error that effectively added the original image to the edges. When applied to a picture of a classmate in the foreground, in focus, with other people in the background, out of focus, my classmate looked 90 years old, while the other people were unchanged. – Chris Sep 25 '10 at 19:27
  • Excellent answer. I'll probably run edge detection over the images and see if it can detect the blurry photos and pick out, say, the bottom x% of them so we can rephotograph them. – Tom Morris Sep 25 '10 at 22:30

Look into real edge detection methods - using Laplacian filters, Gaussian filters, LoG (Laplacian of Gaussian), etc. These methods are much more tweakable to fit your specific cases than PatrickV's simple (albeit elegant) method.
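
One commonly used focus measure in this family is the variance of the Laplacian: more high-frequency detail gives a higher variance, i.e. a sharper image. A minimal sketch using OpenCV follows; the cut-off of 100.0 is only a placeholder and depends heavily on the camera and subject.

# Minimal variance-of-Laplacian focus measure using OpenCV.
import cv2

def focus_measure(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# The threshold is camera- and subject-dependent; 100.0 is only a placeholder.
print("probably blurry" if focus_measure("photo.jpg") < 100.0 else "probably in focus")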

Michael Kopinsky