
I am writing a keypoint description procedure using OpenCV and Python. The following picture represents a window that covers a keypoint; the features are aggregated from the pixels that this window covers.

[Image: a descriptor window centered on a keypoint]

The issue I am having is with keypoints that are near the edges. Is there a good way to handle keypoint description for keypoints that lie near the limits (edges) of the image? I couldn't find how this case is handled by well-known feature extractors such as SIFT or SURF.

PS: I am using a window of 16x16 pixels.
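
For concreteness, here is a minimal sketch of the situation; the image, the keypoint coordinates, and the `window_in_bounds` helper are all made up for illustration. Any keypoint closer than 8 pixels to a border leaves part of the 16x16 window outside the image:

    import numpy as np

    WIN = 16         # descriptor window size used in this question
    HALF = WIN // 2  # the window extends 8 px around the keypoint

    def window_in_bounds(kp_x, kp_y, img_shape, win=WIN):
        """True if the win x win window around (kp_x, kp_y) fits inside the image."""
        h, w = img_shape[:2]
        x0 = int(round(kp_x)) - win // 2
        y0 = int(round(kp_y)) - win // 2
        return x0 >= 0 and y0 >= 0 and x0 + win <= w and y0 + win <= h

    img = np.zeros((240, 320), dtype=np.uint8)    # stand-in image
    print(window_in_bounds(160, 120, img.shape))  # True  -> window fits
    print(window_in_bounds(3, 120, img.shape))    # False -> this is the problem case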

Lilo
  • I think that they are just usually ignored. If you don't have enough information to compute a full descriptor for them, I'm not sure that you can compute a meaningful score when matching them. – Ash Oct 10 '18 at 11:06
  • 1
    typically those keypoints are unusable and are or should be discarded. If you WANT to use them, just create an implicit oe explicit image border by standard border techniques like setting to 0, mirroring or repetition – Micka Oct 10 '18 at 11:36
  • Mirroring, averaging pixel intensities, or the most common one? Do you have a reference for how SIFT or SURF manages this case? – Lilo Oct 10 '18 at 12:02
  • You just add a mirrored version of your image to the left, top, bottom, and right. So if one row of your image was [0, 1, 2, 3], mirrored it would look like: [3, 2, 1, 0, 1, 2, 3, 2, 1, 0]. That is a common technique in image processing, but in your case I would say that adding 0s instead of mirroring is better, because it describes the situation better. – Dmitrii Z. Oct 10 '18 at 13:09
  • I agree with the other comments about some of the ways you could do it. My two cents is that you should ignore these cases, because mirroring, repetition, etc. may only make the feature descriptor more likely to result in a bad match (since those techniques aren't actually giving you back information that matches the real scene). – Jomnipotent17 Oct 11 '18 at 22:47
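
Following up on the padding suggestions in the comments, here is a minimal sketch of the border approach using cv2.copyMakeBorder; the image, keypoint coordinates, and the `extract_window` helper are hypothetical. `cv2.BORDER_REFLECT_101` gives mirroring, while `cv2.BORDER_CONSTANT` with `value=0` gives the zero-padding variant:

    import cv2
    import numpy as np

    WIN = 16
    HALF = WIN // 2

    img = np.random.randint(0, 256, (240, 320), dtype=np.uint8)  # stand-in image

    # Pad the whole image once so every 16x16 window is guaranteed to fit.
    # Swap cv2.BORDER_REFLECT_101 for cv2.BORDER_CONSTANT (with value=0)
    # to use zero padding instead of mirroring.
    padded = cv2.copyMakeBorder(img, HALF, HALF, HALF, HALF, cv2.BORDER_REFLECT_101)

    def extract_window(kp_x, kp_y):
        """16x16 patch around a keypoint given in ORIGINAL image coordinates.

        The keypoint sits at (kp_x + HALF, kp_y + HALF) in the padded image,
        so the window's top-left corner is simply (kp_x, kp_y) there.
        """
        x0, y0 = int(round(kp_x)), int(round(kp_y))
        return padded[y0:y0 + WIN, x0:x0 + WIN]

    patch = extract_window(2, 5)   # keypoint near the top-left corner
    print(patch.shape)             # (16, 16) -- no out-of-bounds slicing

Whether you pad like this or simply discard border keypoints (as the other comments suggest) depends on whether the fabricated border pixels help or hurt your matching.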

0 Answers