
I have tried three algorithms:

  1. Structural similarity comparison with compare_ssim (scikit-image).
  2. Difference detection with PIL (ImageChops.difference).
  3. Image subtraction with OpenCV (cv2.subtract).

The first algorithm:

from skimage.metrics import structural_similarity as compare_ssim  # was skimage.measure.compare_ssim in older versions

(score, diff) = compare_ssim(img1, img2, full=True)  # img1, img2: grayscale arrays; diff is the full SSIM map
diff = (diff * 255).astype("uint8")

The second algorithm:

from PIL import Image, ImageChops

img1 = Image.open("canny1.jpg")
img2 = Image.open("canny2.jpg")
diff = ImageChops.difference(img1, img2)
if diff.getbbox():  # getbbox() returns None when the images are identical
    diff.show()

The third algorithm:

import cv2

image3 = cv2.subtract(image1, image2)  # per-pixel subtraction, negative values clipped to 0

The problem is that these algorithms are too sensitive. If the two images contain different noise, they report the images as completely different. Any ideas how to fix that?

fracv
  • This is a really hard problem. If your problem is noise, you can make it more robust by blurring the images first. This will reduce the level of detail, of course. I'm afraid that you've underestimated the difficulty of the problem. Is it a toy exercise? Do you want to generalize the approach to other images as well? Knowing your data (the pairs of images you want to compare) and exploiting similarities could be the key here. – pasbi Jan 22 '20 at 13:35
  • You have to define what kind of differences you want to detect and which ones you don't want to detect: the alignment of the image, the noise, the colors, just the graph structure / connectivity of the object, etc. "Difference" is very domain- and task-specific. – Micka Jan 22 '20 at 17:02
  • @pasbi No, I want this approach for these two images only. Could you explain how to exploit the similarities? Do you mean to use feature matching to identify the similarities? – fracv Jan 23 '20 at 10:51
  • @micka As I mentioned in my question, the difference I want to detect is any change in the pictured object between the two images, whether in color or in the number of its parts. – fracv Jan 23 '20 at 11:07
  • I'll vote to close the question. You don't know what you want to achieve. Analyzing a single image pair should be done manually. If your goal is, however, defining such an algorithm, you should have more data. The simplest algorithm which works only on those specific images is a [lookup table](https://en.wikipedia.org/wiki/Lookup_table), but that's not what you want. – pasbi Jan 23 '20 at 11:10
  • @MarCV: "change in color" is still undefined. – Micka Jan 23 '20 at 11:23
  • @micka It means that some parts that were pink in the first image changed to white, and vice versa – fracv Jan 23 '20 at 11:45
  • In that case you imho should not go the "compare 2 images" way, but the "extract higher-level information from each of the images and compare it" way: 1. detect the parts; 2. detect the connectivity of the parts; 3. extract and cluster the colors of the detected parts. Not an easy task, but that's what computer vision experts have studied and gained knowledge and experience in. – Micka Jan 23 '20 at 11:52
  • @micka Could you explain how to do this, please? Note that I have tried feature matching to identify the similarities between the images, but still without the desired result. – fracv Jan 28 '20 at 16:01
  • No, someone would have to develop it for your specific/specialized task. I can't tell which methods will work and which won't. – Micka Jan 28 '20 at 16:54

3 Answers

3

These pictures differ in many ways (deformation, lighting, colors, shape), and simple image processing just cannot handle all of this.

I would recommend a higher-level method that tries to extract the geometry and color of those tubes, in the form of a simple geometric graph, and then compares the graphs rather than the images.


I acknowledge that this is easier said than done, and will only work with this particular kind of scene.
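As a rough illustration of that idea, here is a minimal sketch that compares just one piece of such higher-level information: the number of connected parts per color. The function name, file names, and HSV ranges are hypothetical placeholders and would need tuning to the actual images:

import cv2
import numpy as np

def color_component_counts(path, color_ranges):
    # Count connected regions per named color; a crude stand-in
    # for a full geometric graph of the tubes.
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    counts = {}
    for name, (lo, hi) in color_ranges.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        # Morphological opening removes small noise blobs before counting
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        n_labels, _ = cv2.connectedComponents(mask)
        counts[name] = n_labels - 1  # drop the background label
    return counts

# Hypothetical HSV ranges for pink and white parts
ranges = {"pink": ((140, 60, 60), (175, 255, 255)),
          "white": ((0, 0, 200), (180, 40, 255))}
print(color_component_counts("1.jpg", ranges))
print(color_component_counts("2.jpg", ranges))

If the per-color counts differ between the two images, a part has changed color or been added or removed; a real solution would additionally compare how the parts are connected.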

1

It is very difficult to help, since we don't really know which parameters you can change. Can you keep your camera fixed? Will it always be just tubes? What about the tube colors?

Nevertheless, I think what you are looking for is an image registration framework, and I propose SimpleElastix. It is mainly used for medical images, so you might have to get familiar with the underlying library, SimpleITK. What's interesting is that you have a lot of parameters to control the registration. I think you will have to look into the documentation to find out how to control a specific image frequency, the one that creates the waves and deforms the images. In the example below I did not configure it to allow enough local distortion; you'll have to find the best trade-off, but I think it should be flexible enough.

Anyway, you can get such a result with the following code. I don't know if it helps; I hope so:

import cv2
import numpy as np
import SimpleITK as sitk

# Read both images as float32 for SimpleElastix
fixedImage = sitk.ReadImage('1.jpg', sitk.sitkFloat32)
movingImage = sitk.ReadImage('2.jpg', sitk.sitkFloat32)

elastixImageFilter = sitk.ElastixImageFilter()

# First stage: affine registration (global translation/rotation/scale/shear)
affine_registration_parameters = sitk.GetDefaultParameterMap('affine')
affine_registration_parameters["NumberOfResolutions"] = ['6']
affine_registration_parameters["WriteResultImage"] = ['false']
affine_registration_parameters["MaximumNumberOfSamplingAttempts"] = ['4']

# Second stage: b-spline registration (local, non-rigid deformation)
parameterMapVector = sitk.VectorOfParameterMap()
parameterMapVector.append(affine_registration_parameters)
parameterMapVector.append(sitk.GetDefaultParameterMap("bspline"))

elastixImageFilter.SetFixedImage(fixedImage)
elastixImageFilter.SetMovingImage(movingImage)
elastixImageFilter.SetParameterMap(parameterMapVector)
elastixImageFilter.Execute()

registeredImage = elastixImageFilter.GetResultImage()
transformParameterMap = elastixImageFilter.GetTransformParameterMap()

# Absolute difference between the registered image and the fixed image
resultImage = sitk.Subtract(registeredImage, fixedImage)
resultImageNp = np.abs(sitk.GetArrayFromImage(resultImage))

cv2.imwrite('gray_1.png', sitk.GetArrayFromImage(fixedImage))
cv2.imwrite('gray_2.png', sitk.GetArrayFromImage(movingImage))
cv2.imwrite('gray_2r.png', sitk.GetArrayFromImage(registeredImage))
cv2.imwrite('gray_diff.png', resultImageNp)
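Not part of the original code, but if noise still triggers false differences after registration, a possible post-processing step is to blur the difference map and keep only strong changes (kernel size and threshold are guesses that must be tuned):

# Assumed post-processing, reusing resultImageNp from above
diffNp = np.clip(resultImageNp, 0, 255).astype(np.uint8)
diffNp = cv2.GaussianBlur(diffNp, (9, 9), 0)   # suppress pixel-level noise
_, changedMask = cv2.threshold(diffNp, 30, 255, cv2.THRESH_BINARY)
cv2.imwrite('gray_changed_mask.png', changedMask)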

Your first image resized to 256x256:
[image: first image]
Your second image:
[image: second image]
Your second image registered with the first one:
[image: second image registered]
Here is the difference between the first and second image, which could show what's different:
[image: difference]

87VN0
0

This is one of the classical problems of image processing, and one that does not have a universally valid answer. The possible answers depend highly on what type of images you have and on what type of information you want to extract from them and from the differences between them.

You can reduce noise in two ways: (a) take several images of the same object, such that the object does not change; you can then stack (average) the images, and noise is reduced by the square root of the number of images. (b) run a blur filter over the image; the more you blur, the more noise is averaged out. Noise is here reduced by the square root of the number of pixels you average over, but so is detail in the images.

In both cases, (a) and (b), you run the difference analysis after applying either method; a minimal sketch of both is shown below.
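A minimal sketch of both options in OpenCV (file names, frame count, and kernel size are placeholders to be tuned):

import cv2
import numpy as np

# (a) Stack N exposures of an unchanged scene: averaging reduces noise by ~sqrt(N)
frames = [cv2.imread(f"shot_{i}.jpg").astype(np.float32) for i in range(5)]
stacked = (sum(frames) / len(frames)).astype(np.uint8)

# (b) Blur both images before differencing: averaging over a k x k window
# reduces noise by ~sqrt(k*k), at the cost of detail
img1 = cv2.GaussianBlur(cv2.imread("1.jpg"), (9, 9), 0)
img2 = cv2.GaussianBlur(cv2.imread("2.jpg"), (9, 9), 0)

# Run the difference analysis only after denoising
diff = cv2.absdiff(img1, img2)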

Probably not applicable to you, as you likely cannot obtain either: it helps if you can get hold of flat fields, which capture the inhomogeneity of illumination and the pixel sensitivity of your camera and allow you to correct the images prior to any processing. The same goes for dark fields, which give an estimate of the influence of the camera's read-out noise and allow correcting the images for that as well.

There is also a third, more high-level option: run your object analysis first, at a detailed-enough level, and compare the results.

planetmaker