
I have two images and would like to perform feature detection on both and match these features. My problem is that the second image is a section of the first image with missing pixels. These missing pixels cause a strong discontinuity in pixel intensity, which causes the feature detectors to place all features on this boundary, as shown:

[image: detected features clustered along the missing-pixel boundary]

Because of this, the feature matching fails since (I think) the descriptors of these features contain the missing pixel intensities, which don't exist in the original image. I would therefore like the feature detector to exclude these features and instead search only within the 'valid' pixel regions. Does anyone have an idea?

Alternatively, template matching on the pixel intensities could be a strong substitute, but I can't find an efficient implementation of it (especially considering that the two images may be rotated with respect to one another).
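For what template matching on raw intensities might look like, here is a minimal NumPy sketch of brute-force normalized cross-correlation (the function name and O(N·M) loop are illustrative only; it is translation-only and not rotation-invariant):

```python
import numpy as np

def ncc_match(image, template):
    """Brute-force normalized cross-correlation; returns the best (row, col)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat window, correlation undefined
            score = (w * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

In practice one would use an FFT-based or library implementation (e.g. OpenCV's `matchTemplate`) rather than this double loop, and handle rotation with a coarse search over candidate angles.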

[EDIT] Here are the two images:

[image: original image]

Guillaume
    Do you have the 2 starting images as well please? – Mark Setchell Nov 19 '18 at 10:25
  • I just added them in an edit. – Guillaume Nov 19 '18 at 12:38
  • You can't fool me that easily! That's **one** image with **no** transparent pixels... – Mark Setchell Nov 19 '18 at 13:04
  • I tried... here are the actual images – Guillaume Nov 19 '18 at 13:38
  • What do you mean by "feature matching program". Maybe you can customize it to reject the features on the boundary and not the weaker ones in the valid area. – Knipser Nov 19 '18 at 13:54
  • Basically I call a feature detection method (such as detectMinEigenFeatures) and want a way to let it know to exclude all features that include 'missing pixels'. I'm not sure if this is even possible... For the moment my solution is to create as many Regions of Interest (ROIs) as possible within the 'valid pixel region', which is an optional argument when calling the detector (meaning many calls to the detector). For now this gives me satisfactory results, but I was hoping for a better solution, since the ROIs can only be rectangles, meaning some parts are not analysed... – Guillaume Nov 20 '18 at 10:20
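The post-filtering the commenter describes can also be done after a single detector call: run the detector over the whole image, then reject any keypoint whose descriptor window touches a missing pixel. A minimal NumPy sketch (the function name, `(row, col)` keypoint format, and window size are assumptions, not part of any specific detector's API):

```python
import numpy as np

def filter_features(keypoints, valid_mask, half_win=8):
    """Keep only keypoints whose descriptor window contains no missing pixels.

    keypoints  -- iterable of (row, col) detections
    valid_mask -- bool array, True where the pixel is 'valid' (not missing)
    half_win   -- half-width of the square descriptor window
    """
    h, w = valid_mask.shape
    kept = []
    for r, c in keypoints:
        r0, r1 = r - half_win, r + half_win + 1
        c0, c1 = c - half_win, c + half_win + 1
        if r0 < 0 or c0 < 0 or r1 > h or c1 > w:
            continue  # window leaves the image entirely
        if valid_mask[r0:r1, c0:c1].all():
            kept.append((r, c))
    return kept
```

Unlike rectangular ROIs, this handles an arbitrarily shaped valid region, and the detector only needs to be called once.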

1 Answer


If you slide the "holey" image over the solid one, and difference them, they will be aligned when you have the maximum number of black pixels. Watch for the magenta diagonal to disappear.

[animation: the "holey" image sliding over the solid one; the magenta diagonal disappears at alignment]
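The sliding-and-differencing idea above can be sketched in a few lines of NumPy, scoring each offset by the mean absolute difference over the non-missing pixels only (a translation-only toy version, not the answerer's actual implementation; the hole-mask argument is an assumption):

```python
import numpy as np

def align_by_difference(solid, holey, hole_mask):
    """Find the (row, col) shift where the holey crop best matches the solid image.

    solid     -- 2-D grayscale array (the full image)
    holey     -- smaller 2-D array containing missing pixels
    hole_mask -- bool array, True where holey's pixels are missing
    """
    sh, sw = solid.shape
    hh, hw = holey.shape
    valid = ~hole_mask
    n_valid = valid.sum()
    best_cost, best_off = np.inf, (0, 0)
    for r in range(sh - hh + 1):
        for c in range(sw - hw + 1):
            win = solid[r:r + hh, c:c + hw]
            # mean absolute difference over the valid pixels only
            cost = np.abs(win - holey)[valid].sum() / n_valid
            if cost < best_cost:
                best_cost, best_off = cost, (r, c)
    return best_off, best_cost
```

At the true offset the valid pixels agree exactly, so the cost drops to zero; masking out the hole is what keeps the missing pixels from polluting the score, mirroring the "maximum number of black pixels" criterion in the answer.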

Mark Setchell
  • Thank you for your response. I like the idea; it sounds very practical in this scenario because the reference image is of the same height, which is usually not the case... Additionally, I would need my algorithm to be invariant to rotation. I will try to implement your idea on a non-ideal case (where the height and relative rotation differ), but I fear the efficiency and computation time will take a hit. – Guillaume Nov 20 '18 at 14:33