
I am trying to compute the relative pose between two cameras from their captured images in the usual way, through feature correspondences. I use these feature matches to compute the essential matrix, which I then decompose to obtain the rotation and translation between the two views. I am currently using the findEssentialMat and recoverPose functions in OpenCV to achieve this (a minimal sketch of the pipeline is included after the questions below). Once I have this relative pose:

  1. How can I estimate the uncertainty of this measurement? Should I refine the essential matrix itself (by minimizing the epipolar error), obtain the essential matrix's covariance from that refinement, and then propagate it to a pose covariance? Or is there another way to estimate the uncertainty of the pose directly?

  2. There is another issue at play here: while I compute the relative pose of camera C2 (call it P2) with respect to camera C1, the pose of camera C1 (say P1) has its own covariance. How does this affect P2?
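
For reference, a minimal sketch of the pipeline described above (C++ / OpenCV). The names `pts1`, `pts2` and `K` are placeholders for the matched pixel coordinates and the shared intrinsic matrix:

```cpp
// Sketch of relative-pose estimation from feature matches (OpenCV >= 3.x).
// pts1/pts2: matched feature locations in pixels, K: 3x3 intrinsic matrix.
#include <opencv2/calib3d.hpp>
#include <vector>

void estimateRelativePose(const std::vector<cv::Point2f>& pts1,
                          const std::vector<cv::Point2f>& pts2,
                          const cv::Mat& K,
                          cv::Mat& R, cv::Mat& t)
{
    cv::Mat inlierMask;
    // Essential matrix from the matches, with RANSAC rejecting outliers.
    cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, inlierMask);
    // Decompose E; recoverPose resolves the fourfold ambiguity via a cheirality check.
    cv::recoverPose(E, pts1, pts2, K, R, t, inlierMask);
    // Note: t is recovered only up to scale (unit norm).
}
```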

HighVoltage

1 Answer


1) You should refine your pose estimate directly through bundle adjustment, then compute the Hessian of the cost function at the optimum; its inverse yields the covariance you seek. Some BA packages (e.g. Ceres) have APIs to facilitate this; a sketch follows.
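
A minimal sketch of that covariance query in Ceres, under assumptions rather than as a definitive implementation: `problem` is an already-built bundle-adjustment problem and `pose` is the 6-parameter (angle-axis + translation) block of camera C2; both names are placeholders.

```cpp
// Sketch of querying the pose covariance from Ceres after bundle adjustment.
// `problem` is the BA problem, `pose` the 6-parameter (angle-axis + translation)
// block of camera C2; both are placeholder names.
#include <ceres/ceres.h>
#include <ceres/covariance.h>
#include <utility>
#include <vector>

bool poseCovariance(ceres::Problem& problem, const double* pose, double cov[6 * 6])
{
    ceres::Covariance::Options options;
    ceres::Covariance covariance(options);

    // Request the 6x6 covariance block of the pose parameters with themselves.
    std::vector<std::pair<const double*, const double*>> blocks;
    blocks.emplace_back(pose, pose);

    if (!covariance.Compute(blocks, &problem)) return false;

    // cov is the inverse of the (Gauss-Newton approximation of the) Hessian
    // at the optimum, i.e. the uncertainty of the refined pose.
    covariance.GetCovarianceBlock(pose, pose, cov);
    return true;
}
```

Note that this covariance is only well defined if the gauge is fixed, e.g. by holding the parameter block of camera C1 constant (problem.SetParameterBlockConstant), which matches the convention described in point 2 below.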

2) Irrelevant. Lacking an absolute reference, all you can hope for is an estimate of the uncertainty of the relative pose. Put another way, if all you have are measurements of the relative motion between two cameras, you may as well assume that one pose is certain and attribute the motion uncertainty entirely to the pose of the other.

Francesco Callari
  • Thank you! Just a quick clarification regarding point 1: is the cost function here related to minimizing the reprojection error of the measurements or minimizing the epipolar error? I see the former being used frequently in Ceres-like implementations, usually when computing poses through 3D-2D correspondences. – HighVoltage Jan 05 '18 at 17:35
  • Reprojection error - the "bundle" you adjust is the set of rays projecting 3D points into the cameras. – Francesco Callari Jan 06 '18 at 00:27
  • So these are the 3D points triangulated using that particular relative pose, which means I need to compute a sparse reconstruction after the pose computation? (Even though I already have a 3D map of sorts: what my application is particularly concerned with is fusing individual pose measurements (from feature tracking and PnP) with relative pose measurements.) – HighVoltage Jan 06 '18 at 02:09
  • Correct. You'll need to solve jointly for structure and motion. – Francesco Callari Jan 06 '18 at 05:29
  • Just a bit of a follow-up: I was able to implement covariance estimation using Ceres, which encodes the quality of the solution while optimizing the reprojection errors. I am assuming this gives the covariance of the optimal pose w.r.t. the reprojection errors. I have seen some cases where the estimate is somewhat far from the ground truth, yet the optimization is still 'confident' enough to give me a low covariance estimate. How do I relate the solution's uncertainty I obtained to the real-world uncertainty? – HighVoltage Jan 24 '18 at 18:02
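
For illustration, a minimal Ceres sketch of the reprojection-error residual discussed in the comments above. It assumes an angle-axis + translation pose block, a simple pinhole camera with known intrinsics, and a triangulated 3D point as the second parameter block; all names are placeholders:

```cpp
// Sketch of the reprojection-error residual used as the BA cost, assuming an
// angle-axis + translation pose block and a pinhole camera (fx, fy, cx, cy).
#include <ceres/ceres.h>
#include <ceres/rotation.h>

struct ReprojectionError {
    ReprojectionError(double u, double v, double fx, double fy, double cx, double cy)
        : u(u), v(v), fx(fx), fy(fy), cx(cx), cy(cy) {}

    template <typename T>
    bool operator()(const T* const pose,   // [0..2] angle-axis, [3..5] translation
                    const T* const point,  // triangulated 3D point
                    T* residual) const
    {
        // Transform the point into the camera frame.
        T p[3];
        ceres::AngleAxisRotatePoint(pose, point, p);
        p[0] += pose[3]; p[1] += pose[4]; p[2] += pose[5];

        // Pinhole projection, compared against the observed feature (u, v).
        residual[0] = T(fx) * p[0] / p[2] + T(cx) - T(u);
        residual[1] = T(fy) * p[1] / p[2] + T(cy) - T(v);
        return true;
    }

    double u, v, fx, fy, cx, cy;
};

// One residual block per observation; solving for all pose and point blocks
// together is the joint structure-and-motion problem referred to above.
// problem.AddResidualBlock(
//     new ceres::AutoDiffCostFunction<ReprojectionError, 2, 6, 3>(
//         new ReprojectionError(u, v, fx, fy, cx, cy)),
//     nullptr /* squared loss */, pose, point);
```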