
I am working on a community project whose goal is to reduce speeding violations. To recognize cars' license plates I am using OpenALPR. The problem is that it is sensitive to the camera position, that is, the angle: OpenALPR has trouble detecting the LP when the angle is greater than 20 degrees (yes, I've read the recommendations about good camera placement, but in real life they cannot always be satisfied).

I found that the problem is that the LP area is not detected. However, manually cropping the image so that it contains just the car, without any other modification of the pixels (like filtering), fixes the problem and OpenALPR is able to detect the LP area.

I am looking for a solution that does the cropping automatically: either an algorithm or a tool that can compare two images, "base" and "target", and return the coordinates (top left, bottom right) of the changed area in the target image.

An alternative solution would be a different configuration file for OpenALPR. I have been experimenting with this for the last few hours, but with no success.

Base image will look like: [base frame: the empty scene]

Target image will look like: [target frame: the same scene with a car]

(these are just two frames from a video)

(original image size is much bigger, i.e. 3840x2160)

Are there algorithms or tools that can help me automate this task?

Ognyan
  • any reason why you don't use state of the art speed control? why develop your own solution for a problem that has been solved for decades? and why don't you make sure you always have a proper image of the LP instead of trying to get it from a suboptimal image? – Piglet Nov 05 '17 at 09:30
  • @Piglet As I mentioned it is a community project, so it must be a cheap solution, i.e. we can't afford "state of the art speed control". About the _proper image_: most of the time there is no suitable space where camera can be positioned. – Ognyan Nov 05 '17 at 09:44
  • If you want to find the coordinates of the car, why not simply subtract the two frames and find bounding box ? – I.Newton Nov 05 '17 at 09:46
  • @I.Newton Simple subtraction won't do, I think. There is a slight variation in the background between the two images. There is a need for more sophisticated subtraction, which is why I've posted this question - I don't have the knowledge to write a solution myself. – Ognyan Nov 05 '17 at 09:48
  • Check out MOG background subtraction. That might work for you. It handles slight variations in the background. You'll have to do some morphological and other operations to segment out the car bounding box. – I.Newton Nov 05 '17 at 09:58
  • Sorry, maybe it's different where you live. In Germany speed control earns money for communities, so the cost of the device doesn't matter. Restrict your field of view to 1 or 2 lanes and you gain a lot of lateral resolution. Triggering the acquisition properly will ensure the LP is always in the same area. – Piglet Nov 05 '17 at 10:24

1 Answer


The basic method is differencing, i.e. taking the absolute difference of the RGB component values pixel by pixel. Where the differences are large, there is a detection.

But this can work poorly (and it does with the given images) because the two pictures may be slightly misaligned, and wind can move the vegetation.

So I recommend the following:

  • reduce the image resolution by a significant factor (say 8);

  • blur the reduced images;

  • compute the absolute differences;

  • keep, per pixel, the largest difference among the RGB components;

  • binarize with a threshold;

  • finally, use connected-components labelling to find the most significant blob and eliminate residual interference.

[result image: the binarized difference with the car blob segmented]

Make sure to refresh the background image (when you are sure there is no car) to avoid the effect of daily drift (there are always slow changes). It may also be useful to normalize the image intensity to counter changes in ambient lighting (passing clouds, for instance).
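One simple way to normalize intensity, assuming a global gain/offset lighting model (my assumption, not part of the answer), is to rescale the target so its mean and standard deviation match the background's before differencing:

```python
import numpy as np

def match_intensity(base_gray, target_gray):
    """Shift/scale target so its mean and std match the base (gain/offset model)."""
    b_mean, b_std = base_gray.mean(), base_gray.std()
    t_mean, t_std = target_gray.mean(), target_gray.std()
    gain = b_std / max(t_std, 1e-6)       # guard against a flat target image
    adjusted = (target_gray.astype(np.float32) - t_mean) * gain + b_mean
    return np.clip(adjusted, 0, 255).astype(np.uint8)
```

This only compensates for uniform lighting changes such as a passing cloud; a car entering the scene also shifts the global statistics slightly, so keep the binarization threshold generous.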