I have two images of a shooting target (4 meters by 4 meters), divided into sections of 0.5 meter by 0.5 meter squares. The images are taken before and after a firing trial. The target already has bullet holes in it before the firing. Moreover, there is some clutter on or in front of the target (fixing screws and steel lines used to hold the target straight). Let us assume all bullet holes are visible in both images. How can I programmatically identify the new bullet holes by comparing the before and after images? Can you suggest tools, libraries, or algorithm steps?
-
Did you try a simple Laplacian of Gaussian (LoG) followed by thresholded binarisation? A connected-components search with a size-range filter will find all holes and other dark objects whose size falls within the given range. After that, just calculate the center of mass of each object and compare coordinates. – Eddy_Em Jan 10 '16 at 07:30
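A rough sketch of the LoG → threshold → connected-components pipeline suggested in the comment above, written with OpenCV. The kernel sizes, area bounds and 10-pixel distance tolerance are illustrative guesses, and `before.png` / `after.png` are placeholder file names; it also assumes the two images are already aligned.

```python
import cv2
import numpy as np

def hole_centroids(gray, min_area=20, max_area=400):
    # Laplacian of Gaussian: smooth first, then Laplacian to emphasise blob-like holes
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    log = cv2.Laplacian(blurred, cv2.CV_32F, ksize=5)
    # Threshold the normalised response (Otsu) to get a binary mask of candidate holes
    log_norm = cv2.normalize(log, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(log_norm, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Connected components with an area filter, then centroids of the survivors
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)
            if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area]

before = cv2.imread("before.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
after = cv2.imread("after.png", cv2.IMREAD_GRAYSCALE)    # placeholder file name

before_c = hole_centroids(before)
after_c = hole_centroids(after)

# A hole with no before-centroid within ~10 px is treated as new (tolerance is a guess)
new_holes = [a for a in after_c
             if all(np.hypot(a[0] - b[0], a[1] - b[1]) > 10 for b in before_c)]
print(new_holes)
```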
-
Can we see the two images? It would help a lot! – FiReTiTi Jan 10 '16 at 08:26
-
@FiReTiTi You can see them at https://drive.google.com/file/d/0B0Lv6JdbZJRHbVhlV09XNEE0SjA/view?usp=sharing – niw3 Jan 10 '16 at 09:22
-
OpenCV is **the go-to** library for Computer Vision problems such as this. You will need to look at image registration to align the before and after shots in order to recognise differences. – Mark Setchell Jan 10 '16 at 10:34
-
Thanks for the images, but at such small dimensions/resolutions it's not possible to perform any tests (and the solution does not seem hard). Can you give a real example with higher-resolution images? – FiReTiTi Jan 11 '16 at 09:40
-
The actual images have higher resolution. I can't share them due to confidentiality. – niw3 Jan 11 '16 at 10:21
1 Answer
A possible approach consists of the following steps:
- Perform image registration so that both images are seen from the same viewpoint. Here, you'll need to find the combination of rotation, scaling and translation that maps one view onto the other. See for example http://scikit-image.org/docs/dev/auto_examples/transform/plot_matching.html#example-transform-plot-matching-py, which estimates the transformation from a set of points of interest (corners, for example). The transformation you need might be a bit more complex than the one in the example, since for your images the rotation happens in 3D rather than only in the 2D image plane, so a projective transform (homography) may be required. A sketch of this step follows the list.
- Once you have aligned the images, you can try different approaches. One of them is to detect the holes in both images with a segmentation method. Since the holes seem to be lighter than the target, you can try thresholding the image (http://scikit-image.org/docs/dev/auto_examples/segmentation/plot_local_otsu.html) and perhaps cleaning the result with mathematical morphology (http://www.scipy-lectures.org/packages/scikit-image/index.html#mathematical-morphology). Then, for each hole in the after image, try to match it with a hole in the before image, for example by picking the closest center of mass in the before image and computing the cross-correlation between patches around the hole in the two images; holes with no match are the new ones. A sketch of this step also follows the list.
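A minimal sketch of the registration step, loosely following the linked scikit-image example (ORB keypoints matched across images, then a robust RANSAC fit of a projective transform). The file names, keypoint count and RANSAC parameters are placeholders to tune on the real images.

```python
from skimage import io
from skimage.feature import ORB, match_descriptors
from skimage.measure import ransac
from skimage.transform import ProjectiveTransform, warp

before = io.imread("before.png", as_gray=True)  # placeholder file names
after = io.imread("after.png", as_gray=True)

# Detect ORB keypoints and descriptors in both images
orb = ORB(n_keypoints=500)
orb.detect_and_extract(before)
kp_before, desc_before = orb.keypoints, orb.descriptors
orb.detect_and_extract(after)
kp_after, desc_after = orb.keypoints, orb.descriptors

# Match descriptors, then convert keypoints from (row, col) to (x, y) order
matches = match_descriptors(desc_before, desc_after, cross_check=True)
src = kp_after[matches[:, 1]][:, ::-1]
dst = kp_before[matches[:, 0]][:, ::-1]

# Robustly estimate the homography, ignoring mismatched keypoints
model, inliers = ransac((src, dst), ProjectiveTransform,
                        min_samples=4, residual_threshold=2, max_trials=1000)

# Warp the "after" image into the "before" frame so pixels line up
after_aligned = warp(after, model.inverse, output_shape=before.shape)
```

Once `after_aligned` is in the `before` frame, pixel coordinates (and hence hole centers) can be compared directly.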
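And a hedged sketch of the detection/matching step with scikit-image, assuming the images are already registered (e.g. using `after_aligned` from the previous sketch, here reloaded from a placeholder file). The Otsu threshold, the morphology radius and the 10-pixel matching tolerance are guesses to adjust against the real resolution; cross-correlation of patches around matched centers (e.g. with `skimage.feature.match_template`) can then confirm ambiguous matches.

```python
import numpy as np
from skimage import io
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, remove_small_objects, disk
from skimage.measure import label, regionprops

def detect_holes(gray):
    # Holes appear lighter than the target, so keep pixels above Otsu's threshold
    mask = gray > threshold_otsu(gray)
    # Morphological cleanup: smooth ragged edges and drop small specks
    mask = binary_opening(mask, disk(2))
    mask = remove_small_objects(mask, min_size=20)
    # Centroids (row, col) of the remaining connected components
    return np.array([r.centroid for r in regionprops(label(mask))])

before = io.imread("before.png", as_gray=True)                # placeholder names
after_aligned = io.imread("after_aligned.png", as_gray=True)  # registered "after" image

holes_before = detect_holes(before)
holes_after = detect_holes(after_aligned)

# A hole in "after" with no nearby counterpart in "before" is a new bullet hole
new_holes = [h for h in holes_after
             if holes_before.size == 0
             or np.min(np.linalg.norm(holes_before - h, axis=1)) > 10]
print(new_holes)
```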
I've given a few links to scikit-image examples, but OpenCV is often cited as the reference library for computer vision.
