
I have 7 images from GoPro cameras (5 cameras in a rig, plus one for the top and one for the bottom; they are all GoPro cameras). I want to stitch all these images together to create a 3D panorama. I have been able to stitch the 5 images in the rig by using OpenCV's stitching_detailed.cpp. Link to the file:

https://raw.githubusercontent.com/opencv/opencv/master/samples/cpp/stitching_detailed.cpp

But I'm not sure how to stitch the top and bottom images (for me, right now, the bottom is not that important, but I have to do something about the top). Any idea how this can be done? Please let me know if I can use the same stitching_detailed.cpp to stitch the top as well.

The following link contains the images I'm using. It also contains the results I got from stitching the images in the rig.

https://drive.google.com/folderview?id=0B_Bl8s2ePunQcnBaM3A4WDlDcXM&usp=sharing

Ahmad.Masood
  • This stitching code stitches all the images (horizontal and vertical) if there are enough features. Maybe the top and bottom images don't have much overlapping area or enough features to get stitched. Can you share the images you are using? – Garvita Tiwari Sep 01 '16 at 12:34
  • I have added a link to the images which I'm using. – Ahmad.Masood Sep 01 '16 at 13:11
  • Actually I have to do this programmatically. I have to stitch videos together in real time. This is just one frame from the video. – Ahmad.Masood Sep 01 '16 at 23:56

1 Answer


So first you need to understand how stitching_detailed.cpp works.

1. Feature keypoints are detected in each image using SURF/ORB/SIFT or similar. Then, for each image pair, the best feature matches are found, a homography matrix is calculated, and the number of inliers for each pair is obtained (a minimal sketch of these calls is given right after this list):

    (*finder)(img, features[i]);            // detect keypoints/descriptors per image

    BestOf2NearestMatcher matcher(try_cuda, match_conf);
    matcher(features, pairwise_matches);    // pairwise matching + homography/inlier estimation
2. All these pairs are passed into leaveBiggestComponent to obtain the largest set of images which belong to one panorama.

3. Camera parameters for each image are estimated from the set obtained above, and then warping and blending are done.
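
For reference, here is a minimal sketch of step 1, assuming the OpenCV 3.x cv::detail API that stitching_detailed.cpp used at the time; the image file names are placeholders for your GoPro frames:

    // Minimal sketch of the detection/matching stage (OpenCV 3.x detail API).
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/stitching/detail/matchers.hpp>
    #include <string>
    #include <vector>

    using namespace cv;
    using namespace cv::detail;

    int main()
    {
        // Placeholder file names; replace with your seven GoPro frames.
        std::vector<std::string> img_names = {"cam1.jpg", "cam2.jpg", "cam7.jpg"};

        std::vector<ImageFeatures> features(img_names.size());
        Ptr<FeaturesFinder> finder = makePtr<OrbFeaturesFinder>();   // ORB keypoints

        for (size_t i = 0; i < img_names.size(); ++i)
        {
            Mat img = imread(img_names[i]);
            (*finder)(img, features[i]);                  // detect features per image
            features[i].img_idx = static_cast<int>(i);
        }
        finder->collectGarbage();

        // Match every image pair, fit a homography with RANSAC, and record
        // the number of inliers / confidence per pair.
        std::vector<MatchesInfo> pairwise_matches;
        BestOf2NearestMatcher matcher(false /*try_cuda*/, 0.3f /*match_conf*/);
        matcher(features, pairwise_matches);
        matcher.collectGarbage();
        return 0;
    }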

Step 1 will find a homography for each pair and compute the number of inliers. Step 2 will remove all image pairs for which the confidence factor (based on the number of inliers) is below a threshold. Since the cam7 image has so few features and almost no overlapping region with any other image, it will get rejected in the leaveBiggestComponent step.
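
Continuing the sketch above (same assumptions and placeholder names), that rejection and the subsequent camera estimation look roughly like this:

    // Continuation of the sketch above; also needs
    // #include <opencv2/stitching/detail/motion_estimators.hpp> at the top.

    // Step 2: keep only the largest subset of images whose pairwise confidences
    // exceed conf_thresh; cam7 falls out here if it overlaps with nothing.
    float conf_thresh = 1.f;
    std::vector<int> indices =
        leaveBiggestComponent(features, pairwise_matches, conf_thresh);

    // Step 3: estimate initial camera parameters from the surviving images,
    // before warping and blending.
    std::vector<CameraParams> cameras;
    HomographyBasedEstimator estimator;
    estimator(features, pairwise_matches, cameras);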

You can see the features and matching in this link (I have used ORB features):

https://drive.google.com/open?id=0B2wDitsftUG9QnhCWFIybENkbDA

Also, I have not changed the image size, but I guess reducing the image size a bit (by half, maybe) will yield more feature points.
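
If you try that, a downscale before feature detection would look roughly like the sketch below (the 0.5 factor is just a value to experiment with; stitching_detailed.cpp also has a --work_megapix option that scales the working images in a similar way):

    // Needs #include <opencv2/imgproc.hpp>.
    Mat small_img;
    resize(img, small_img, Size(), 0.5, 0.5, INTER_AREA);   // halve each dimension
    (*finder)(small_img, features[i]);                       // detect on the smaller image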

What you can do is reduce the time interval at which you take frames for stitching. To obtain good results, there must be at least a 40% overlapping region between images.

Garvita Tiwari
  • Thanks a lot for this detailed answer. Actually, I used a PRO7 Bullet360 to take these images. Link: http://shop.360rize.com/PRO7-360-VIDEO-VR-p/pro7e.htm. One thing I wanted to ask: if I decrease this threshold value to do some sort of forced stitching so that image 7 is not dropped, is there a way of doing that? – Ahmad.Masood Sep 02 '16 at 08:25
  • One more thing: are you suggesting that this apparatus will not work in this type of closed environment? – Ahmad.Masood Sep 02 '16 at 08:39
  • You can change the threshold values (float match_conf = 0.3f; and float conf_thresh = 1.f;), but it might produce some errors. – Garvita Tiwari Sep 02 '16 at 10:16
  • If you know the orientation of each camera, then you can directly feed the rotation matrix of each camera into cameras.R in the above code. – Garvita Tiwari Sep 02 '16 at 10:17
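
To illustrate that last comment, a speculative sketch under the same assumptions as the earlier code; R_known[i] is a hypothetical 3x3 CV_32F rotation matrix you would measure from the rig geometry for camera i:

    // Speculative: seed the pipeline with known rig orientations instead of
    // relying purely on feature-based estimation.
    for (size_t i = 0; i < cameras.size(); ++i)
        cameras[i].R = R_known[i];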