
I couldn't find a proper answer to my problem on the Web, so I'll ask it here. Let's say we're given two 2D photos of the same place taken from slightly different angles. I've extracted a set of points (edge detection) and found correspondences between them (i.e., which point on one photo matches which on the other). Now I need to somehow find the 3D world coordinates of these points.

For the last 5 hours I've read a lot about it, but I still can't understand what steps I should follow. I've tried to estimate the camera motion by applying recoverPose to an essential matrix and the two sets of points, one from each frame. But I can't understand what this gives me: now that I know the rotation and translation matrices (which recoverPose returned), what should I do to achieve my goal?

I also know the calibration matrix of my camera (I use the KITTI dataset). I've read the OpenCV documentation but still don't understand. It's monocular vision.

AnatoliySultanov
  • Do you have the calibration of your camera — or your **cameras**? That is, did you get your pair of images from the same camera by slightly moving it, or from a couple of preliminarily calibrated cameras? – kazarey Jul 10 '17 at 17:26
  • @kazarey I have calibration of my camera. It's the same camera. – AnatoliySultanov Jul 10 '17 at 17:27
  • 1
    your opencv documentation link, is from several years ago, the current webpage of opencv is opencv.org ... leaving that aside, You should look to the function [triangulatePoints](http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#triangulatepoints) of opencv – api55 Jul 10 '17 at 18:08

0 Answers