
I have this image:

[blurred input image]

I am trying to put the background into focus in order to perform edge detection on the image. What methods are available to me (in either the spatial or frequency domain)?


What I tried is the following:

import numpy as np
import cv2
kernel = np.array([[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]])  # sharpening kernel (weights sum to 1)
im = cv2.filter2D(equ, -1, kernel)  # equ is the preprocessed grayscale image

This outputs this image:

[output of the sharpening kernel]

I also played around with the centre value but with no positive result.

I also tried this:

import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import convolve2d
from skimage import restoration
psf = np.ones((5, 5)) / 25                                  # 5x5 box-blur point spread function
equ = convolve2d(equ, psf, 'same')                          # blur the image with the psf
deconvolved = restoration.wiener(equ, psf, 1, clip=False)   # Wiener deconvolution, balance=1
plt.imshow(deconvolved, cmap='gray')

With no appreciable changes to the image.

Any help on the matter is greatly appreciated!


EDIT:

Here is the code that I took from here:

psf = np.ones((5, 5)) / 25
equ = convolve2d(equ, psf, 'same')
deconvolved, _ = restoration.unsupervised_wiener(equ, psf)  # second return value holds the estimation chains
plt.imshow(deconvolved, cmap='gray')

and here is the output:

[output of unsupervised Wiener deconvolution]

Stefano Pozzi
  • If you want to put the background into focus, the ___only___ way is to take a photograph with the background in focus. All those TV programmes where they digitally enhance to extract clear details of licence plates, faces, etc. from blurry or low-resolution images... can't actually be done. You can interpolate details and create a best-fit solution but you can't add detail that isn't there to begin with. – Mick Jul 13 '18 at 08:14
  • "This outputs this image" - and it indeed is less blurry, so Mission Accomplished. There is no way for the computer to magically add details that aren't in the original image. – Jongware Jul 13 '18 at 08:19
  • The thing is that I have it as an assignment from university... this is my task `De-blurring (de-noising) of the image by application of a suitable filter (either on space/frequency field) and experiment with different choices and provide comments.` – Stefano Pozzi Jul 13 '18 at 08:38
  • @usr2564301 Indeed you are right but as a next step I need to perform edge detection and the less blurry image, to me, looks like it would perform worse than the original one – Stefano Pozzi Jul 13 '18 at 08:55
  • Ah – you should [edit] your question and add the *reason* to it. I don't mean the part about this being a university assignment – we don't care – but because you need it as input for the next step, edge detection. – Jongware Jul 13 '18 at 08:59
  • you could try to resize the image multiple times. Blurred objects should be "sharp" in lower scales (but obviously with low detail). Just make sure that no additional blurring is added during creation of your "scale space pyramid" – Micka Jul 13 '18 at 09:46
  • @Micka: you are right, but in this case, this image is so blurred that even a reduction by a linear factor 8 doesn't yield good edges (and the image is microscopic). And for reduction, some lowpass filtering must be applied anyway, because the image is noisy. –  Jul 18 '18 at 08:14

1 Answer


Deblurring images is (unfortunately) quite difficult. The reason is that blurring smooths away fine detail and noise, so several different (noisy) images will yield the same image once you blur them. This means there is no simple way for the computer to "choose" which of those noisy images to recover when you deblur. Because of this, deblurring will often yield noisy images.
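
To see why this is ill-posed, here is a minimal sketch (it uses a random stand-in image and a Gaussian blur, not your actual data): two images that differ only in their noise become almost indistinguishable after the same blur, so the blurred image alone cannot tell you which of them to recover.

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
clean = rng.random((128, 128))                             # stand-in "true" image
noisy_a = clean + 0.05 * rng.standard_normal(clean.shape)  # two different noise realisations
noisy_b = clean + 0.05 * rng.standard_normal(clean.shape)
blurred_a = gaussian_filter(noisy_a, sigma=3)
blurred_b = gaussian_filter(noisy_b, sigma=3)
print(np.abs(noisy_a - noisy_b).mean())                    # clearly different inputs...
print(np.abs(blurred_a - blurred_b).mean())                # ...nearly identical after blurring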

Now then, you might ask how photographers do this in reality. Well, they do not actually deblur images; they sharpen them (which is slightly different). When you sharpen an image, you increase the contrast near borders to emphasise them (this is why you sometimes see a halo around borders in images that have been too heavily sharpened).
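
If sharpening turns out to be enough for your edge-detection step, the usual tool is an unsharp mask: subtract a blurred copy of the image to boost contrast at the borders. A minimal OpenCV sketch, assuming `equ` is your grayscale image (the sigma and the weights here are arbitrary, not tuned for your picture):

import cv2

blurred = cv2.GaussianBlur(equ, (0, 0), 3)               # low-pass copy, sigma = 3
sharpened = cv2.addWeighted(equ, 1.5, blurred, -0.5, 0)  # equ + 0.5 * (equ - blurred)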

In your case, you want to deblur the image (and there is no convolution kernel that will let you do this directly). To do it well, you need to know what process blurred the image in the first place (that is, if you don't want to spend thousands of dollars on specialised software or don't have a master's degree in mathematics or astronomy).

If you still want to do this, I'd recommend searching for deconvolution, and if you don't know the blurring process, blind deconvolution. There are some (crude) functions for it in skimage, which might be of help (http://scikit-image.org/docs/stable/auto_examples/filters/plot_restoration.html#sphx-glr-auto-examples-filters-plot-restoration-py).
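
As a rough starting point, skimage's restoration module also has `richardson_lucy`; below is only a sketch, where the 5x5 box blur is a guess at the point spread function and the 30 iterations are an arbitrary choice (it assumes `equ` is an 8-bit image).

import numpy as np
from skimage import restoration

img = equ.astype(float) / 255.0                    # scale to [0, 1] for skimage
psf = np.ones((5, 5)) / 25                         # assumed point spread function
deconvolved_rl = restoration.richardson_lucy(img, psf, 30)

If the result looks wrong, the assumed PSF is probably far from the real blur, which is exactly where blind deconvolution comes in.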

Finally, the final link from Jax Briggs seems helpful, but I would not count on magical results.

Yngve Moe
  • Thx for the answer! I already looked into that but I was getting weird white images (some black pixels) as a result. I'll post the code and resulting image as an edit – Stefano Pozzi Jul 13 '18 at 09:29