
I got the output below after stitching the mosaic of the first 24 images to the 25th image. Before that, the stitching was good.

[image: distorted output after stitching the 25th image]

Is anyone aware of why/when the output of stitching comes out like this? What are the possible reasons for such an output?

The stitching code follows the standard stitching steps: finding keypoints and descriptors, matching points, calculating the homography, and then warping the images. But I do not understand why this output is coming out.

The core part of the stitching is below:

import copy

import cv2
import numpy as np

detector = cv2.SIFT_create(400)
# find the keypoints and descriptors with SIFT (the mask excludes black border pixels)
gray1 = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
ret1, mask1 = cv2.threshold(gray1, 1, 255, cv2.THRESH_BINARY)
kp1, descriptors1 = detector.detectAndCompute(gray1, mask1)

gray2 = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)
ret2, mask2 = cv2.threshold(gray2, 1, 255, cv2.THRESH_BINARY)
kp2, descriptors2 = detector.detectAndCompute(gray2, mask2)

keypoints1Im = cv2.drawKeypoints(image1, kp1, None, color=(0, 0, 255), flags=cv2.DRAW_MATCHES_FLAGS_DEFAULT)
keypoints2Im = cv2.drawKeypoints(image2, kp2, None, color=(0, 0, 255), flags=cv2.DRAW_MATCHES_FLAGS_DEFAULT)

# BFMatcher with default params
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(descriptors2,descriptors1, k=2)

# Apply ratio test
good = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good.append(m)

print(str(len(good)) + " matches were found")

if len(good) <= 10:
    return image1

matches = copy.copy(good)

matchDrawing = util.drawMatches(gray2,kp2,gray1,kp1,matches)

#Aligning the images
src_pts = np.float32([ kp2[m.queryIdx].pt for m in matches ]).reshape(-1,1,2)
dst_pts = np.float32([ kp1[m.trainIdx].pt for m in matches ]).reshape(-1,1,2)


H = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)[0]
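# Suggestion (not part of the original code): findHomography also returns
# a per-match inlier mask; a very low inlier count is an early warning
# that the estimated homography is unreliable:
#   H, inlier_mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
#   if H is None or int(inlier_mask.sum()) < 15:   # threshold is a guess
#       return image1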

h1,w1 = image1.shape[:2]
h2,w2 = image2.shape[:2]
pts1 = np.float32([[0,0],[0,h1],[w1,h1],[w1,0]]).reshape(-1,1,2)
pts2 = np.float32([[0,0],[0,h2],[w2,h2],[w2,0]]).reshape(-1,1,2)
pts2_ = cv2.perspectiveTransform(pts2, H)
pts = np.concatenate((pts1, pts2_), axis=0)
# print("pts:", pts)
[xmin, ymin] = np.int32(pts.min(axis=0).ravel() - 0.5)
[xmax, ymax] = np.int32(pts.max(axis=0).ravel() + 0.5)
t = [-xmin,-ymin]
Ht = np.array([[1,0,t[0]],[0,1,t[1]],[0,0,1]]) # translate

result = cv2.warpPerspective(image2, Ht.dot(H), (xmax-xmin, ymax-ymin))

resizedB = np.zeros((result.shape[0], result.shape[1], 3), np.uint8)

resizedB[t[1]:t[1]+h1,t[0]:w1+t[0]] = image1
# Now create a mask of logo and create its inverse mask also
img2gray = cv2.cvtColor(result,cv2.COLOR_BGR2GRAY)
ret, mask = cv2.threshold(img2gray, 0, 255, cv2.THRESH_BINARY)

# erode the mask slightly to trim warping artifacts at the seam
kernel = np.ones((5, 5), np.uint8)
mask = cv2.erode(mask, kernel, borderType=cv2.BORDER_CONSTANT)

mask_inv = cv2.bitwise_not(mask)

difference = cv2.bitwise_or(resizedB, resizedB, mask=mask_inv)

result2 = cv2.bitwise_and(result, result, mask=mask)

result = cv2.add(result2, difference)

Edit:

This image shows the match drawing while stitching the 25th image to the mosaic of the first 24 images:
[image: match drawing, 25th image against the 24-image mosaic]

And the match drawing for the image before that:
[image: match drawing for the previous image]

I have 97 images in total to stitch. If I stitch images 24 and 25 separately, they stitch properly. If I start stitching from the 23rd image onwards, the stitching is also good, but it gives me this problem when I start stitching from the 1st image. I am not able to understand the problem.

Result after stitching the 23rd image:
[image: mosaic after stitching the 23rd image]

Result after stitching the 24th image:
[image: mosaic after stitching the 24th image]

The result after stitching the 25th image is as shown above, which went wrong.

Strange observation: if I stitch images 23, 24 and 25 separately with the same code, they stitch fine. If I stitch the images from 23 through 97, they also stitch. But somehow, if I stitch the images starting from the 1st, it breaks while stitching the 25th image. I do not understand why this happens.

I have tried different combinations: different keypoint detection and extraction methods, matching methods, homography calculations, and warping code, but none of them worked. Something is missing or wrong in the combination of steps, and I am not able to figure it out.

Sorry for the long question. As I am completely new to this, I am not able to explain things properly. Thanks for your help and guidance.

Stitched result of images 23, 24 and 25 separately, with the SAME code:
[image: result of stitching images 23, 24 and 25 separately]

With different code (which gives black lines in between the stitches), if I stitch all 97 images, the 25th image goes up in the stitching and attaches as shown below (right corner point):
[image: stitched result with the 25th image shifted up at the right corner]

ganesh
  • Possibly the perspective warp of the last image is faulty, perhaps due to bad match points. – fmw42 Jul 27 '21 at 15:29
  • @fmw42 What should I do in that case? Is something wrong in the script, like the ordering of the steps? I have tried different combinations of matching methods, keypoint detection methods, different homography calculations, and different warping code, but nothing worked for me. – ganesh Jul 27 '21 at 15:45
  • Look at the key point matches for the one offending image and also warp it alone and see if you get the same effect. – fmw42 Jul 27 '21 at 15:46
  • Perhaps try a different image in that location and see if the same thing happens. – fmw42 Jul 27 '21 at 15:47
  • @fmw42 Please check my edit. I have tried skipping the 25th image, but that gives me a stitching error, I think because the sequencing breaks or no matches are found. – ganesh Jul 27 '21 at 16:00
  • you are accumulating errors. the warp gets progressively worse. look at the right end of your "stitching" chain: that doesn't look like a proper top-down view, it's heavily distorted. in general, image stitching is a lot more complex than whatever "tutorial" seems to be out there that shows everyone how to do *this*... *this* will always eventually fail, exactly like this. OpenCV has a whole stitching module. you should use it, or consider taking a course on the topic, or read books/other publications. – Christoph Rackwitz Jul 27 '21 at 18:10
  • I would rather stitch pairs of adjacent images and then repeat. But you can still get into trouble with excessive distortion with so many images. – ffsedd Jul 29 '21 at 19:10
  • I would use pairwise stitching. Have a look at these papers from the LFB institute: https://www.lfb.rwth-aachen.de/bibtexupload/pdf/BEH11g.pdf https://www.lfb.rwth-aachen.de/bibtexupload/pdf/BEH11a.pdf https://www.lfb.rwth-aachen.de/bibtexupload/pdf/BEH10g.pdf and https://www.lfb.rwth-aachen.de/files/publications/2010/BEH10a.pdf If you don't need high-speed performance, you could have a look at bundle adjustment, but still compute features on the individual images, not on the mosaic. – Micka Jul 30 '21 at 11:26
  • can you show the previous (last) mosaic image and the input image without the matches? To me it looks like the new image doesn't have enough overlap. You see some T-junction road in the middle of the image, but nothing like that at the border of the previous image? – Micka Jul 30 '21 at 13:48
  • if you just want to detect that the matching failed, I would suggest testing, after computing the homography, how much the image will grow after warping. If it is increased or decreased too much, you can handle that as a failed matching (a sketch of such a check follows this comment thread). – Micka Jul 30 '21 at 13:51
  • thx for adding additional images. Can you tell whether the 24th image was stitched correctly? It looks like nearly 99% overlap with the 23rd image? And the 25th new image has the T-junction in it, so there is some gap between the 24th and 25th images (not so much overlap)? – Micka Jul 30 '21 at 14:35
  • @Micka I have provided the previous results in the question. I am not a pro at this, so I am not able to understand the comments in detail, but I am trying and struggling a lot to achieve perfect stitching. The input images are big, so I am not able to provide them here. If you could provide your contact (email etc.), I will send them to you. Thanks – ganesh Jul 30 '21 at 14:36
  • can you downscale single images 23, 24, 25 and add them? I would like to get an impression of how much overlap each image pair has – Micka Jul 30 '21 at 14:38
  • @Micka Added inputs, please check – ganesh Jul 30 '21 at 14:52
  • You could also try the higher-level OpenCV Stitcher API: https://docs.opencv.org/4.5.2/d8/d19/tutorial_stitcher.html It might generate better results, as it handles most of the checks and adjustments. – matheubv Jul 30 '21 at 14:54
  • @matheubv That will give me a panorama result. I need an orthomosaic result rather than a panorama – ganesh Jul 30 '21 at 14:59
  • I don't understand how images 24 and 25 should be stitched; are they from the same road? The trees and houses look quite different to me. – Micka Jul 30 '21 at 15:03
  • ok, now I see. There is only 10-20% overlap between 24 and 25, and that's not enough for a robust and stable matching. In the case of stitching them to the 1..24 mosaic, there are too many wrongly matched features in the rest of the image. You could use a RANSAC-based matching + transformation estimation and early-cancel ill-posed transformations (e.g. from a size-distortion test), but that's too complex for a Stack Overflow question... – Micka Jul 30 '21 at 16:03
  • @ganesh Can you share all of your initial 25 images, up to where this problem appears, so that we can give it a try? – Rahul Kedia Aug 02 '21 at 14:02
  • @Rahul Kedia Shared, please check – ganesh Aug 02 '21 at 14:34
  • You've already gotten a comment suggesting pair-wise matching. Here's another reason why you **must** use pairwise matching and not match against the current stitched result: the current stitched result grows with every image you add to it, so every subsequent image you want to match becomes a more complex problem. You made an O(n^2) program, whereas pairwise matching is an O(n) problem. – Cris Luengo Aug 02 '21 at 14:56
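A minimal sketch of the size-distortion check Micka suggests above. The function name and thresholds are illustrative assumptions, not code from the question; the idea is to warp the image corners with the estimated homography and reject the result if the warped quadrilateral's area changed implausibly:

import cv2
import numpy as np

def homography_looks_sane(H, w, h, max_area_ratio=4.0):
    # Warp the four image corners with H and compare the area of the
    # warped quadrilateral to the original image area. A quad that grew
    # or shrank beyond max_area_ratio usually means the matching failed.
    corners = np.float32([[0, 0], [0, h], [w, h], [w, 0]]).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
    ratio = cv2.contourArea(warped) / float(w * h)
    return (1.0 / max_area_ratio) < ratio < max_area_ratio

Calling this right after cv2.findHomography (with the width and height of image2) and returning image1 when it fails would skip the bad warp instead of folding a heavily distorted image into the mosaic; the 4.0 ratio is a guess that needs tuning per dataset.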

1 Answer


Firstly, I was not able to recreate your problem and solve it, as the images were too big for my system to process. However, I faced the same problem in my panorama stitching project, so I am sharing the reason behind it and my approach to solving it. I hope this helps you too.

Here's what my problem looked like when I stitched 4 images together just like you did.

My problem

As you can see, the 4th image gets distorted a lot, which should not happen. The same thing happened to you, but to a greater degree.

Now, here's the output when I stitched 8 images after some image pre-processing.

Output after image pre-processing

After some pre-processing on the input images, I was able to stitch 8 images together perfectly without any distortion.

To understand the exact reason behind this kind of distortion, watch this video by Joseph Redmon, between 50:26 and 1:07:23.

As suggested in the video, we first have to project the images onto a cylinder, then unroll them, and then stitch these unrolled images together.

Below is the initial input image (left) and the image after projection onto a cylinder and unrolling (right).

Image before and after pre-processing

For your problem, as you are using satellite images, I guess projection onto a sphere would work better than a cylinder; however, you'll have to give it a try (a rough sketch of a spherical variant is included after the code below).

Sharing below my code for projecting an image onto a cylinder and unrolling it, for reference. The mathematics behind it is the same as given in the video.
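For each pixel (x, y) of the unrolled cylindrical image, the Convert_xy function below finds the corresponding (sub-pixel) source location (xt, yt) in the flat input image, where (xc, yc) is the image center and f the focal length in pixels (this just restates what the code computes):

xt = f * tan((x - xc) / f) + xc
yt = (y - yc) / cos((x - xc) / f) + yc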


import numpy as np


def Convert_xy(x, y):
    global center, f

    # inverse mapping: from a pixel (x, y) of the unrolled cylindrical
    # image to the corresponding sub-pixel location in the flat input image
    xt = ( f * np.tan( (x - center[0]) / f ) ) + center[0]
    yt = ( (y - center[1]) / np.cos( (x - center[0]) / f ) ) + center[1]

    return xt, yt


def ProjectOntoCylinder(InitialImage):
    global w, h, center, f
    h, w = InitialImage.shape[:2]
    center = [w // 2, h // 2]
    f = 1100       # focal length in pixels; tune per scene (1100 field; 1000 Sun; 1500 Rainier; 1050 Helens)
    
    # Creating a blank transformed image
    TransformedImage = np.zeros(InitialImage.shape, dtype=np.uint8)
    
    # Storing all coordinates of the transformed image in 2 arrays (x and y coordinates)
    AllCoordinates_of_ti = np.indices((w, h)).transpose(1, 2, 0).reshape(-1, 2)
    ti_x = AllCoordinates_of_ti[:, 0]
    ti_y = AllCoordinates_of_ti[:, 1]
    
    # Finding corresponding coordinates of the transformed image in the initial image
    ii_x, ii_y = Convert_xy(ti_x, ti_y)

    # Rounding off the coordinate values to get exact pixel values (top-left corner)
    ii_tl_x = ii_x.astype(int)
    ii_tl_y = ii_y.astype(int)

    # Finding transformed image points whose corresponding 
    # initial image points lies inside the initial image
    GoodIndices = (ii_tl_x >= 0) * (ii_tl_x <= (w-2)) * \
                  (ii_tl_y >= 0) * (ii_tl_y <= (h-2))

    # Removing all the outside points from everywhere
    ti_x = ti_x[GoodIndices]
    ti_y = ti_y[GoodIndices]
    
    ii_x = ii_x[GoodIndices]
    ii_y = ii_y[GoodIndices]

    ii_tl_x = ii_tl_x[GoodIndices]
    ii_tl_y = ii_tl_y[GoodIndices]

    # Bilinear interpolation
    dx = ii_x - ii_tl_x
    dy = ii_y - ii_tl_y

    weight_tl = (1.0 - dx) * (1.0 - dy)
    weight_tr = (dx)       * (1.0 - dy)
    weight_bl = (1.0 - dx) * (dy)
    weight_br = (dx)       * (dy)
    
    TransformedImage[ti_y, ti_x, :] = ( weight_tl[:, None] * InitialImage[ii_tl_y,     ii_tl_x,     :] ) + \
                                      ( weight_tr[:, None] * InitialImage[ii_tl_y,     ii_tl_x + 1, :] ) + \
                                      ( weight_bl[:, None] * InitialImage[ii_tl_y + 1, ii_tl_x,     :] ) + \
                                      ( weight_br[:, None] * InitialImage[ii_tl_y + 1, ii_tl_x + 1, :] )


    # Getting the minimum x coordinate to remove the black regions on the left and right of the transformed image
    min_x = min(ti_x)

    # Cropping out the black region from both sides (using symmetry)
    TransformedImage = TransformedImage[:, min_x : -min_x, :]

    return TransformedImage, ti_x-min_x, ti_y

You just have to call the function ProjectOntoCylinder and pass it an image to get the resultant image and the coordinates of white pixels in the mask image. Use the code below to call this function and get the mask image.

# Applying Cylindrical projection on Image
Image_Cyl, mask_x, mask_y = ProjectOntoCylinder(Image)

# Getting Image Mask
Image_Mask = np.zeros(Image_Cyl.shape, dtype=np.uint8)
Image_Mask[mask_y, mask_x, :] = 255
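
As mentioned above, a spherical projection might suit satellite imagery better than a cylindrical one. Below is a rough, untested sketch of what a spherical variant of Convert_xy could look like, following the same inverse-mapping idea; this is my assumption, not verified code:

import numpy as np

def Convert_xy_spherical(x, y):
    global center, f
    # treat the output image as an equirectangular (sphere) unrolling:
    # lon/lat are the viewing angles corresponding to each output pixel
    lon = (x - center[0]) / f
    lat = (y - center[1]) / f
    xt = f * np.tan(lon) + center[0]
    yt = f * np.tan(lat) / np.cos(lon) + center[1]
    return xt, yt

Swapping this in for Convert_xy, with the rest of ProjectOntoCylinder unchanged, would give a spherical unrolling; the per-image field of view must stay well below 180 degrees for the tan terms to behave.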

Here are links to my project and its detailed documentation for reference:

Part 1: Source Code, Documentation

Part 2: Source Code, Documentation

Rahul Kedia
  • Thanks for the answer with the detailed explanation. I need an orthomosaic result rather than a panorama. Still, I tried your code on my inputs (after resizing), but the system hangs somehow. – ganesh Aug 04 '21 at 14:08
  • I can't tell why it hangs, but my code takes a lot of time, so I guess you'll just have to wait. – Rahul Kedia Aug 05 '21 at 09:48