
I am currently trying to stitch 4 photographs of a document together. I am using OpenCV 2.9 with the SURF detector and descriptor extractor, with a Hessian threshold of 2000 (otherwise there are too many keypoints). I reject any matches with a distance greater than 2*min_dist, and merge the images using findHomography() (RANSAC method) and warpPerspective().
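
For reference, this is roughly what that pipeline looks like in code. It is a minimal C++ sketch against the OpenCV 2.x API: the file names, the oversized output canvas, and the 3.0 px RANSAC reprojection threshold (OpenCV's default) are placeholders/assumptions, not tuned values.

```cpp
#include <algorithm>
#include <vector>
#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/features2d.hpp> // SURF lives in the nonfree module in OpenCV 2.x

int main()
{
    // doc_part1.jpg / doc_part2.jpg are placeholders for two of the four photos
    cv::Mat img1 = cv::imread("doc_part1.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat img2 = cv::imread("doc_part2.jpg", CV_LOAD_IMAGE_GRAYSCALE);

    // SURF keypoints + descriptors, Hessian threshold 2000 as in the question
    cv::SurfFeatureDetector detector(2000);
    cv::SurfDescriptorExtractor extractor;
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    detector.detect(img1, kp1);
    detector.detect(img2, kp2);
    extractor.compute(img1, kp1, desc1);
    extractor.compute(img2, kp2, desc2);

    // Brute-force matching with L2 distance (SURF descriptors are float vectors)
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    // Keep only matches closer than 2 * min_dist
    double minDist = 1e9;
    for (size_t i = 0; i < matches.size(); ++i)
        minDist = std::min(minDist, (double)matches[i].distance);

    std::vector<cv::Point2f> pts1, pts2;
    for (size_t i = 0; i < matches.size(); ++i) {
        if (matches[i].distance < 2 * minDist) {
            pts1.push_back(kp1[matches[i].queryIdx].pt);
            pts2.push_back(kp2[matches[i].trainIdx].pt);
        }
    }

    // Homography mapping img2 into img1's frame; the 3.0 px reprojection
    // threshold is OpenCV's default and may be worth tuning
    cv::Mat H = cv::findHomography(pts2, pts1, CV_RANSAC, 3.0);

    // Warp img2 onto an oversized canvas and paste img1 on top
    cv::Mat result;
    cv::warpPerspective(img2, result, H, cv::Size(img1.cols * 2, img1.rows * 2));
    img1.copyTo(result(cv::Rect(0, 0, img1.cols, img1.rows)));
    cv::imwrite("stitched.jpg", result);
    return 0;
}
```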

This is how good it gets at the moment. https://i.stack.imgur.com/aVWWu.jpg

Why does the transformation matrix that findHomography() calculates not seem to be accurate? The matches look fairly good to me: https://i.stack.imgur.com/F8ZNc.jpg
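
One way to quantify "not accurate" is to look at the RANSAC inlier mask that findHomography() can return and at the reprojection error of the inlier matches. The snippet below is a small sketch continuing from the pipeline above, so pts1/pts2 are assumed to be the filtered match coordinates:

```cpp
#include <cmath>
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>

// pts1/pts2: the filtered match coordinates from the pipeline above
void reportHomographyQuality(const std::vector<cv::Point2f>& pts1,
                             const std::vector<cv::Point2f>& pts2)
{
    // Ask findHomography for the RANSAC inlier mask alongside the matrix
    std::vector<unsigned char> inlierMask;
    cv::Mat H = cv::findHomography(pts2, pts1, CV_RANSAC, 3.0, inlierMask);

    // Re-project the matched points through H and measure the residuals
    std::vector<cv::Point2f> projected;
    cv::perspectiveTransform(pts2, projected, H);

    int inliers = 0;
    double sumErr = 0.0;
    for (size_t i = 0; i < pts1.size(); ++i) {
        if (!inlierMask[i])
            continue;
        double dx = pts1[i].x - projected[i].x;
        double dy = pts1[i].y - projected[i].y;
        sumErr += std::sqrt(dx * dx + dy * dy);
        ++inliers;
    }
    std::cout << "inliers: " << inliers << " / " << pts1.size()
              << ", mean reprojection error of inliers: "
              << (inliers ? sumErr / inliers : 0.0) << " px" << std::endl;
}
```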

Any ideas on how to improve the result? Preprocessing such as binarization, denoising, or autocontrasting had no impact at all, or even made things worse.

J W
  • Use camera calibration to remove lens distortion and make sure that your document is fully planar ("glue" it to the desk); otherwise a computed homography is the wrong assumption. That assumption might not be so wrong for many applications, but if you need the best precision it is a factor. – Micka Dec 17 '14 at 09:29
  • One idea for improvement could be to first compute the homography like you did. Then split your images into smaller overlapping parts and compute the homographies only locally. Then use bundle adjustment to stitch everything together (see the sketch after these comments). Not sure whether this has been done before or whether this is open research :) – Micka Dec 17 '14 at 09:31
  • I agree with Micka: split into cells, then look for homographies in the corresponding cells. Another idea is to increase the number of detected keypoints. – Y.AL Dec 17 '14 at 10:36
  • Can you please explain in more detail what those smaller parts would look like and how to perform the bundle adjustment? – J W Dec 17 '14 at 12:43
  • You can try MLESAC instead of RANSAC. You can also use a registration method, like the one described in "Generalized Dual Bootstrap-ICP Algorithm"; the author also provides binaries (http://www.vision.cs.rpi.edu/gdbicp/exec/). – old-ufo Dec 25 '14 at 09:41
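
Following up on Micka's and Y.AL's suggestion above, here is a rough C++ sketch of the cell-splitting idea: group the matches by grid cell of the second image, estimate a local homography per cell, and warp each cell independently. The grid size, the minimum-match check, and the simple masked copy are illustrative assumptions; this is not a full bundle adjustment.

```cpp
#include <vector>
#include <opencv2/opencv.hpp>

// Rough sketch: one homography per grid cell of the "moving" image instead of
// a single global homography. gridRows/gridCols are illustrative choices.
cv::Mat stitchWithLocalHomographies(const cv::Mat& ref, const cv::Mat& moving,
                                    const std::vector<cv::Point2f>& refPts,
                                    const std::vector<cv::Point2f>& movPts,
                                    int gridRows = 4, int gridCols = 4)
{
    // Oversized canvas with the reference image pasted at the origin
    cv::Mat result = cv::Mat::zeros(ref.rows * 2, ref.cols * 2, ref.type());
    ref.copyTo(result(cv::Rect(0, 0, ref.cols, ref.rows)));

    const int cellW = moving.cols / gridCols;
    const int cellH = moving.rows / gridRows;

    for (int r = 0; r < gridRows; ++r) {
        for (int c = 0; c < gridCols; ++c) {
            cv::Rect cell(c * cellW, r * cellH, cellW, cellH);

            // Collect only the correspondences whose "moving" point lies in this cell
            std::vector<cv::Point2f> src, dst;
            for (size_t i = 0; i < movPts.size(); ++i) {
                if (movPts[i].x >= cell.x && movPts[i].x < cell.x + cell.width &&
                    movPts[i].y >= cell.y && movPts[i].y < cell.y + cell.height) {
                    src.push_back(movPts[i]);
                    dst.push_back(refPts[i]);
                }
            }
            if (src.size() < 4)
                continue; // a homography needs at least 4 point pairs

            // Local homography for this cell only
            cv::Mat H = cv::findHomography(src, dst, CV_RANSAC, 3.0);
            if (H.empty())
                continue;

            // Warp just this cell into the reference frame using a binary mask
            cv::Mat cellMask = cv::Mat::zeros(moving.size(), CV_8U);
            cellMask(cell).setTo(255);
            cv::Mat warped, warpedMask;
            cv::warpPerspective(moving, warped, H, result.size());
            cv::warpPerspective(cellMask, warpedMask, H, result.size());
            warped.copyTo(result, warpedMask);
        }
    }
    return result;
}
```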

0 Answers