
I am working on a project that requires me to stitch images together. I decided to test this with buildings because of the large number of keypoints that can be detected on them. I have been following several guides, but the one with the best results for 2-3 images has been this one: https://towardsdatascience.com/image-stitching-using-opencv-817779c86a83. The way I decided to stitch multiple images is to stitch the first two, then take the output and stitch that with the third image, and so on. I am confident in the matching of descriptors between the images. But as I stitch more and more images, the previously stitched part gets pushed further and further along the -z axis, meaning it becomes distorted and smaller. The code I use to accomplish this is as follows:

import cv2
import numpy as np
import os

os.chdir('images')
#Read both images in color and keep grayscale copies for feature detection
#(cv2.COLOR_BGR2GRAY is a cvtColor code, not a valid imread flag)
img_ = cv2.imread('Output.jpg')   #the running panorama from the previous stitch
img = cv2.imread('DJI_0019.jpg')  #the next image to add
gray_ = cv2.cvtColor(img_, cv2.COLOR_BGR2GRAY)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

#Setting up the ORB keypoint detector
orb = cv2.ORB_create()

#Using ORB to compute keypoints and descriptors on the grayscale copies
kp, des = orb.detectAndCompute(gray_, None)
kp2, des2 = orb.detectAndCompute(gray, None)
print(len(kp))
print(len(kp))

#Setting up the brute-force Hamming matcher (crossCheck must stay off for knnMatch)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=False)
matches = bf.knnMatch(des, des2, k=2) #Two best matches per descriptor, as the ratio test requires

#Applying Lowe's ratio test with the 0.7-0.8 threshold suggested in the paper
good = []
for m in matches:
    if m[0].distance < .8 * m[1].distance:
        good.append(m)
matches = np.asarray(good) #array of the surviving (best, second-best) DMatch pairs

#Aligning the images
if len(matches) >= 4:
    src = np.float32([kp[m.queryIdx].pt for m in matches[:, 0]]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches[:, 0]]).reshape(-1, 1, 2)

    #Estimating the homography with RANSAC (5.0 px reprojection threshold)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print(H)
else:
    raise RuntimeError("Could not find 4 good matches to compute a homography")

#Warping the previous panorama into the new image's frame; the +900 is a
#hand-picked guess at how much extra width the warped content needs
dst = cv2.warpPerspective(img_, H, (img.shape[1] + 900, img.shape[0]))
dst[0:img.shape[0], 0:img.shape[1]] = img #paste the new image over the top-left
cv2.imwrite("Output.jpg", dst)

With the output of the 4th+ stitch looking like this: [output image: the earlier stitched content is shrunken and skewed away from the newly added image]

As you can see, the images get transformed in a progressively stranger way with each stitch. My theory is that this happens because of the camera position and the angle at which the images were taken, but I am not sure. If that is the case, are there optimal capture parameters that would produce images better suited to stitching?

Is there a way to fix this issue so that the content sits flush against the x axis? Roughly what I mean is sketched below.
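
Something like pre-translating the homography so the warped corners land at non-negative coordinates before pasting; a minimal sketch (the helper name and structure are my own):

import cv2
import numpy as np

def warp_with_offset(pano, img, H):
    #Warp pano by H, shifted so nothing lands at negative coordinates, then overlay img
    h1, w1 = pano.shape[:2]
    h2, w2 = img.shape[:2]

    #Corners of the panorama after warping
    corners = np.float32([[0, 0], [w1, 0], [w1, h1], [0, h1]]).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(corners, H)

    #Bounding box of both images in the target frame
    img_corners = np.float32([[0, 0], [w2, 0], [w2, h2], [0, h2]]).reshape(-1, 1, 2)
    all_pts = np.concatenate([warped, img_corners])
    x_min, y_min = np.floor(all_pts.min(axis=0).ravel()).astype(int)
    x_max, y_max = np.ceil(all_pts.max(axis=0).ravel()).astype(int)

    #Compose a translation into H so the warped result starts at (0, 0)
    T = np.array([[1, 0, -x_min], [0, 1, -y_min], [0, 0, 1]], dtype=np.float64)
    out = cv2.warpPerspective(pano, T @ H, (int(x_max - x_min), int(y_max - y_min)))
    out[-y_min:h2 - y_min, -x_min:w2 - x_min] = img
    return out

This would replace the fixed-size cv2.warpPerspective call above, e.g. dst = warp_with_offset(img_, img, H).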

Edit: Adding source images: https://i.stack.imgur.com/LtzB5.jpg

  • What do the source images look like? Are they curved? This could be due to the use of a wide angle lens at a tilted view producing barrel distortion. – fmw42 Aug 01 '19 at 17:56
  • I added the source images in a link below. I figured it might have been due to the camera lens but I wasn't sure. Could you confirm that this might be the issue? – Sai Peri Aug 01 '19 at 18:03
  • Curvature as in your first input image would appear to me to be due to barrel distortion from a wide angle lens. You might try correcting the input images for barrel distortion before trying to match and stitch. See for example https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_calibration/py_calibration.html – fmw42 Aug 01 '19 at 20:38
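
For reference, a rough sketch of the correction fmw42 suggests, with placeholder intrinsics (the real camera_matrix and dist_coeffs would come from cv2.calibrateCamera on checkerboard shots, not the illustrative values below):

import cv2
import numpy as np

img = cv2.imread('DJI_0019.jpg')
h, w = img.shape[:2]

#Placeholder intrinsics; real values come from cv2.calibrateCamera
camera_matrix = np.array([[1000.0, 0.0, w / 2],
                          [0.0, 1000.0, h / 2],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.array([-0.25, 0.05, 0.0, 0.0, 0.0])  #illustrative barrel-distortion terms

#Undistort, then crop to the valid region
new_K, roi = cv2.getOptimalNewCameraMatrix(camera_matrix, dist_coeffs, (w, h), 1, (w, h))
undistorted = cv2.undistort(img, camera_matrix, dist_coeffs, None, new_K)
x, y, rw, rh = roi
undistorted = undistorted[y:y + rh, x:x + rw]
cv2.imwrite('DJI_0019_undistorted.jpg', undistorted)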
