
I am using the five-point algorithm through the findEssentialMat() function in OpenCV 2.4.11 to compute the relative pose of one camera with respect to another. For initial tests, I kept two cameras separated along X and took multiple pictures. As the pictures show, the detected (and matched) features between the images remain mostly the same since there is no movement; yet the essential matrix, and thereby the epipolar lines, vary a lot, which in turn affects my pose estimate. To try to improve the accuracy of E, I also tried running the estimation twice: first with RANSAC to filter out the outliers, and then with LMEDS on the surviving inliers, but I cannot see much improvement. Especially between pictures 1 and 2, there is a massive change.
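Since the pose comes from decomposing E, any jitter in E maps directly into R and t. A minimal numpy sketch of that decomposition step (the standard SVD-based method from Hartley & Zisserman, essentially what recoverPose does internally; the E below is the ideal essential matrix for the pure-X-translation setup described, not a value from the actual images):

```python
import numpy as np

# Ideal essential matrix E = [t]_x R for a pure translation t = (1, 0, 0)
# and R = I, matching the "two cameras separated in X" test setup.
E = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])

U, S, Vt = np.linalg.svd(E)
W = np.array([[0., -1., 0.],
              [1., 0., 0.],
              [0., 0., 1.]])

# Two rotation candidates; the correct one is picked by cheirality
# (triangulated points must lie in front of both cameras).
R1 = U @ W @ Vt
R2 = U @ W.T @ Vt
if np.linalg.det(R1) < 0: R1 = -R1   # force proper rotations
if np.linalg.det(R2) < 0: R2 = -R2

# Translation direction: left null vector of E, known only up to sign/scale.
t = U[:, 2]
```

Because t is only recovered up to scale and sign, even small perturbations in E can flip or tilt the estimated baseline, which would explain why a noisy E throws the downstream triangulation out of whack.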

[Image 1] [Image 2] [Image 3]

Any pointers on what might be changing/going wrong? I am aware that RANSAC can pick a different sample set on every run, but since all the features should obey the same "transformation" so to speak, shouldn't the epipolar lines still be similar no matter which matches end up as the final inliers? Additionally, the five-point algorithm paper states that it does not degenerate when all points are coplanar. Is there any way I can improve the essential matrix computation?

Thanks for your time!

HighVoltage
  • I am assuming here that your cameras are calibrated since you are talking about essential matrix. As you said RANSAC selection is the source of your randomness. Ideally, if all your feature matches were correct, you would be right to assume that no matter which N matches are picked, the final transformation would remain the same. However, feature matching is not perfect. In your images too there would be some mismatches (I can see some in the top right). You can try to modify RANSAC iterations by changing the `prob` parameter to a value close to 1 in `findEssentialMat` and see if that helps. – rs_ Aug 21 '15 at 20:26
  • Hi, yes, my cameras are calibrated. I have tried setting the probability to 0.99 and threshold to 1; but nothing really seems to help. But, strangely, it works much better if I use the same camera to take pictures from multiple views. My cameras are the same model but they are mono and color (so I am using mono images), and have slightly different f and principal point. Would that be an issue? These slight errors in the pose are throwing the triangulation out of whack. – HighVoltage Aug 22 '15 at 18:04
  • To determine whether or not calibration errors are causing the problem do `findFundamentalMat` with similar RANSAC parameters and plot epipolar lines. Do they also change drastically? It would also help if you can provide your code snippet and images. – rs_ Aug 24 '15 at 20:05
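On the calibration point raised in the comments: E and F are related by F = K2⁻ᵀ E K1⁻¹, so if the two cameras have slightly different intrinsics but a single K is used for both, the recovered E is biased. A small numpy sketch with hypothetical intrinsics (the specific focal lengths and principal points below are illustrative, not taken from the question):

```python
import numpy as np

# Hypothetical intrinsics: two same-model cameras with slightly
# different focal length and principal point (values are made up).
K1 = np.array([[700., 0., 320.],
               [0., 700., 240.],
               [0., 0., 1.]])
K2 = np.array([[710., 0., 325.],
               [0., 710., 238.],
               [0., 0., 1.]])

# True essential matrix for a pure translation along X.
E = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])

# Fundamental matrix relating *pixel* coordinates: F = K2^-T E K1^-1.
F = np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)

# Wrongly assuming both cameras share K1 recovers a different E.
E_wrong = K1.T @ F @ K1
same = np.allclose(E_wrong / np.linalg.norm(E_wrong),
                   E / np.linalg.norm(E), atol=1e-6)
```

This is why comparing against findFundamentalMat (which needs no intrinsics) is a good diagnostic: if the F-based epipolar lines are stable while the E-based ones jump around, the per-camera calibration is a likely culprit.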

0 Answers