I have two images and would like to perform feature detection on both, then match the features. My problem is that the second image is a section of the first with missing pixels. These missing pixels create a strong discontinuity in pixel intensity, which causes the feature detectors to place all of their features along this boundary, like so:
Because of this, the feature matching fails, since (I think) the descriptors of these features encode the missing pixel intensities, which don't exist in the original image. I would therefore like the feature detector to exclude these features and search only within the 'valid' pixel regions. Does anyone have an idea?
Alternatively, template matching on the pixel intensities might be a strong substitute, but I can't find an efficient implementation, especially considering that the two images may be rotated with respect to one another.
[EDIT] Here are the two images: