
In my current project, a drone is used to capture images of building facades. The buildings are wide and tall, and the drone is constrained to a space where it cannot move far enough away to fit the whole building in its field of view (FOV). What I currently have are the facade images the drone took while surveying the building.

Since the drone moves significantly between shots to capture a different region than in the previous image, the optical center of the camera moves. I tried different panorama pipelines, including the OpenCV Stitcher class, and all of them fail on the complete dataset. On subsets of images, such as those taken in one vertical pass from ground to roof, the panorama pipelines are able to create a stitched image, but it has misregistrations that are halting my next image processing operations.
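
For reference, this is roughly how I invoked the OpenCV Stitcher class on one vertical pass (a minimal sketch; the folder and file names are placeholders and my actual pipeline differs in details):

    import glob
    import cv2

    # Load the images of one vertical pass, ordered from ground to roof
    # ("building1_pass1" is a placeholder folder name).
    paths = sorted(glob.glob("building1_pass1/*.jpg"))
    images = [cv2.imread(p) for p in paths]

    # PANORAMA mode assumes a purely rotating camera; SCANS mode assumes an
    # affine model. Neither matches a drone that translates between shots.
    stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
    status, pano = stitcher.stitch(images)

    if status == cv2.Stitcher_OK:
        cv2.imwrite("building1_pano.jpg", pano)
    else:
        print("Stitching failed, status code:", status)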

I wanted to know: is there a way, or an approximation, to stitch a panorama without misregistrations even when the optical center moves significantly between images?

Edit: Added a picture showing the kind of misregistrations I get.

Panorama - building 1 - AutoPano Paper Implementation

Edit 2: Added a couple of images from the dataset for more clarification.

Building - 1

Building - 1-2

  • Can you explain the typical kinds of errors? Why are there misregistrations? Are there other kinds of errors? – Micka Aug 18 '21 at 05:27
  • The misregistrations are incorrect overlaps of windows, wall designs, balconies, and so on. It is not specific to any one thing. On some building images a panorama is created but has these misregistrations, while on others stitching fails altogether. – Sarvesh Thakur Aug 18 '21 at 06:16
  • Thank you for the image. I see 2 problems: 1. It looks like a wide-angle or fisheye camera is used. You should undistort it, otherwise you will not even get a good panorama for planar scenes. 2. Stitching perfectly looking panoramas (not even talking about correct panoramas) from 3D scenes is very hard, because the 2D image techniques used in stitching, like homographies etc., are only correct for planar scenes or purely rotating camera centers (and only for pinhole cameras, see 1.), and 3D occlusions will appear. Maybe think about 3D reconstruction like SLAM + texturing. – Micka Aug 18 '21 at 07:24 (see the undistortion sketch after these comments)
  • Thanks @Micka. I was thinking the same. I actually used openMVG for generating 3D structure, but the point cloud was very sparse. Also, my current dataset is not well suited for an SfM pipeline: most regions are covered only twice at most, with little overlap. Since the buildings have little texture and repetitive structure, I am not sure how good the reconstruction can be. Can you point out some good practices for 3D reconstruction of such large structures? Also, you mentioned adding texturing over the 3D reconstruction; how can that be done? Is there any example I can follow? Thank you :) – Sarvesh Thakur Aug 18 '21 at 18:51
  • @Micka I forgot to mention that it's not a fisheye lens. The above panorama was created by a pipeline based on this paper: http://matthewalunbrown.com/papers/ijcv2007.pdf. In Section 5 you will see that an automatic straightening of the panorama is done; this is what makes my panorama look as if it were taken with a fisheye. When I used the OpenCV Stitcher class, there was no such distortion of the scene. – Sarvesh Thakur Aug 18 '21 at 22:33
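
A minimal sketch of the undistortion step Micka suggests, assuming the camera intrinsics and distortion coefficients are available from a prior calibration (e.g. cv2.calibrateCamera with a checkerboard); the numeric values and file names below are placeholders, not the actual drone camera parameters:

    import cv2
    import numpy as np

    # Placeholder intrinsics and distortion coefficients; replace with values
    # from a real calibration of the drone camera.
    camera_matrix = np.array([[1000.0,    0.0, 960.0],
                              [   0.0, 1000.0, 540.0],
                              [   0.0,    0.0,   1.0]])
    dist_coeffs = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

    img = cv2.imread("building1_frame.jpg")  # placeholder file name
    h, w = img.shape[:2]

    # Compute a new camera matrix that keeps all source pixels (alpha = 1),
    # then remove the lens distortion before feeding images to the stitcher.
    new_matrix, roi = cv2.getOptimalNewCameraMatrix(
        camera_matrix, dist_coeffs, (w, h), 1)
    undistorted = cv2.undistort(img, camera_matrix, dist_coeffs, None, new_matrix)
    cv2.imwrite("building1_frame_undistorted.jpg", undistorted)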

0 Answers