
Given a robust sparse point cloud and a set of views (cameras) of that cloud, how do I determine the world position and orientation of each camera using OpenCV?

Note that I have the intrinsic parameters of each camera (they are identical), and the point-cloud points are defined in 3D world coordinates.

MM.
  • what do you mean by a point cloud to be a 4x4 matrix? shouldn't a point cloud be a set of 3D (or extended 3D) points? Do you have corresponding points in 2D camera images with known 3D points? – Micka Aug 10 '15 at 15:49
  • You're right. My wires were crossed with camera extrinsic matrices. I have updated the body. – MM. Aug 10 '15 at 15:56
  • do you have 2D point correspondences (matchings) of your 3D points between your `set of views` and the camera from which you want to extract the pose? – Micka Aug 10 '15 at 16:03
  • I have both. I have tried using solvePnPRansac, but get results that fluctuate wildly in world position. – MM. Aug 10 '15 at 16:06
  • did you consider lens distortion? can you post sample images (from both, your set of views and from the query images) – Micka Aug 10 '15 at 16:35
  • I'm using the [MPI-Sintel](http://sintel.is.tue.mpg.de/) dataset, which contains no lens distortion (it is synthesized from Blender). My distortion coefficients are set to 0. – MM. Aug 10 '15 at 16:53
  • It seems somebody else has the same problem: http://stackoverflow.com/questions/31898698/pose-estimation-solvepnp-and-epipolar-geometry-do-not-agree – aledalgrande Aug 10 '15 at 19:48

0 Answers