In my current project, a drone is used to capture images of building facades. The buildings are wide and tall, and the drone is constrained to fly close to the facade, so the entire building never fits within a single field of view (FOV). What I have now is the set of facade images the drone captured while surveying the building.
Because the drone moves significantly between shots to cover a different region of the facade than the previous image, the optical center of the camera translates between captures. I have tried several panorama pipelines, including the OpenCV Stitcher class, and all of them fail on the complete dataset. On subsets of images, e.g. the images from a single vertical pass from ground to roof, the pipelines do produce a stitched image, but it contains misregistrations that are blocking my downstream image-processing operations.
I wanted to know: is there a way, or an approximation, to stitch a panorama without misregistrations even when the optical center moves significantly between images?
Edit: Added a picture showing the kind of misregistrations I get.
Edit 2: Added a couple of images from the dataset for more clarification.