I have an image of a roof which covers a huge area, and I am analysing it for faults using segmentation. The challenge is that a single shot of the whole roof doesn't capture enough detail, so it's hard to analyse. So we split the roof up into parts, take close-up images of each part, and analyse those instead. Here is an example:
Now I want to analyse the parts for faults and replace the corresponding regions on the single shot with the analysed results. Here is how one of the analysed parts looks:
My thinking so far: if I can map a non-analysed part onto the single shot in some way, I can use the same transformation to map the analysed version of that part onto the single shot too.
I have been able to do feature detection and identify the parts on the single shot, but I am not sure how to transform them so they fit and replace the exact corresponding region on the single shot. Here is the feature detection:
Feature detection for image/template matching
I have been thinking of stitching the smaller images onto the larger one sequentially, one after the other, but I am concerned that might not work.
PS: I am using Linux, Python3 and OpenCV 3.4.0
Also, the original single shot is not manually divided into the different parts. Since it's a thermal camera, it's not possible to get high resolution on the single shot, so the camera takes separate close-up shots. Those parts are separate images, not crops of the single shot.
EDIT: I am now trying an alternative approach where I detect faults in the part images, crop those faults out to use as templates, and then use SIFT feature matching to locate each fault template on the single shot. However, small faults are being matched to everything other than themselves :( Most of the larger faults are detected fine, but the medium ones don't do too well either. I think the reason is that the single shot is low resolution, so it doesn't reproduce the faults visible in the close-up shots, and it also yields too few features for template matching.