I am trying to compute the relative pose between two cameras from their captured images in the usual way: I establish feature correspondences, use them to compute the essential matrix, and decompose it to obtain the rotation and translation between the two views. I am currently using OpenCV's findEssentialMat and recoverPose functions for this. Once I have this relative pose:
How can I estimate the uncertainty of this measurement? Should I refine the essential matrix itself (e.g. by minimizing the epipolar error), which would give me a covariance for the essential matrix, and if so, is it possible to propagate that to a covariance on the pose? Or is there a way to estimate the uncertainty of the pose directly?
There is also another issue in play here: while I am computing the relative pose of camera C2 (call it P2) with respect to camera C1, the pose of camera C1 (say P1) has its own covariance. How does the covariance of P1 affect P2?