
In the hope of reaching a broader audience, I am reposting here a question that I also asked on answers.opencv.org.

TL;DR: What relation should hold between the arguments passed to undistortPoints, findEssentialMat and recoverPose?

I have code like the following in my program, with K and dist_coefficients being the camera intrinsics and imgpts1, imgpts2 being matching feature points from two images.

     Mat mask; // inlier mask
     undistortPoints(imgpts1, imgpts1, K, dist_coefficients, noArray(), K); // P = K: only distortion removed, points stay in pixel coordinates
     undistortPoints(imgpts2, imgpts2, K, dist_coefficients, noArray(), K);

     Mat E = findEssentialMat(imgpts1, imgpts2, 1, Point2d(0,0), RANSAC, 0.999, 3, mask); // focal = 1, pp = (0,0): as if the points were normalized
     correctMatches(E, imgpts1, imgpts2, imgpts1, imgpts2);
     recoverPose(E, imgpts1, imgpts2, R, t, 1.0, Point2d(0,0), mask);

I undistort the points before finding the essential matrix. The documentation states that one can pass the new camera matrix as the last argument; when it is omitted, the points are returned in normalized coordinates (with the camera matrix divided out, so roughly between -1 and 1). In that case, I would expect to pass 1 for the focal length and (0,0) for the principal point to findEssentialMat, since the points are normalized.
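To make sure I understand what "normalized" means here, this is a minimal check of what undistortPoints does when the new camera matrix is omitted (made-up intrinsics and zero distortion, so that only the normalization is visible; the expected output is my own calculation, not something copied from the docs):

     Mat K = (Mat_<double>(3,3) << 800,   0, 320,
                                     0, 800, 240,
                                     0,   0,   1); // made-up intrinsics
     Mat dist = Mat::zeros(1, 5, CV_64F);          // zero distortion isolates the normalization
     std::vector<Point2f> px = { Point2f(480, 360) }, norm;
     undistortPoints(px, norm, K, dist);           // P omitted -> normalized coordinates
     // Expected: ((480 - 320)/800, (360 - 240)/800) = (0.2, 0.15)

Given that, I would expect one of the following two variants to be the correct usage: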

  1. Possibility 1 (normalize coordinates)

     Mat mask; // inlier mask
     undistortPoints(imgpts1, imgpts1, K, dist_coefficients); // P omitted -> normalized coordinates
     undistortPoints(imgpts2, imgpts2, K, dist_coefficients);
     Mat E = findEssentialMat(imgpts1, imgpts2, 1.0, Point2d(0,0), RANSAC, 0.999, 3, mask);
     correctMatches(E, imgpts1, imgpts2, imgpts1, imgpts2);
     recoverPose(E, imgpts1, imgpts2, R, t, 1.0, Point2d(0,0), mask);
    
  2. Possibility 2 (do not normalize coordinates)

     Mat mask; // inlier mask
     undistortPoints(imgpts1, imgpts1, K, dist_coefficients, noArray(), K); // P = K -> pixel coordinates
     undistortPoints(imgpts2, imgpts2, K, dist_coefficients, noArray(), K);
     double focal = K.at<double>(0,0);
     Point2d principalPoint(K.at<double>(0,2), K.at<double>(1,2));
     Mat E = findEssentialMat(imgpts1, imgpts2, focal, principalPoint, RANSAC, 0.999, 3, mask);
     correctMatches(E, imgpts1, imgpts2, imgpts1, imgpts2);
     recoverPose(E, imgpts1, imgpts2, R, t, focal, principalPoint, mask);  
    

However, I have found that I only get reasonable results when I tell undistortPoints that the old camera matrix is still valid (I guess that in this case only the distortion is removed) and pass arguments to findEssentialMat as if the points were normalized, which they are not.

Is this a bug, insufficient documentation or user error?

Update

It might be that correctMatches should be called with (non-normalized) image/pixel coordinates and the fundamental matrix instead of E; this may be another mistake in my computation. The fundamental matrix can be obtained as F = K^(-T) * E * K^(-1).
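In code, that correction would look roughly like this (a sketch on my part, assuming imgpts1 and imgpts2 are still in pixel coordinates at this point; untested):

     Mat K_inv = K.inv();
     Mat F = K_inv.t() * E * K_inv; // F = K^(-T) * E * K^(-1)
     correctMatches(F, imgpts1, imgpts2, imgpts1, imgpts2); // pixel coordinates with F, not E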

  • Hi @oarfish, were you able to solve this? I'm having a lot of trouble getting consistent ego-motion estimation, no matter which combination of things I try. – Employee Jun 26 '20 at 12:14
  • Well, I did answer below. If it doesn't work for you, it's most likely the quality of your data, or coordinate-system differences if the data comes from outside OpenCV. – oarfish Jun 27 '20 at 11:10
  • Thanks. Just a few questions: i) you use `undistortPoints` while I don't, since I undistort the image directly (do you also undistort the image and then additionally undistort the points?); ii) you use the camera matrix `K` (don't you use `getOptimalNewCameraMatrix` to refine it?); iii) I don't use `correctMatches` (maybe this is very important?) – Employee Jun 28 '20 at 04:07
  • You should undistort only once, but be aware that with default arguments, `undistortPoints()` returns points in normalized image coordinates, not pixel coordinates. I think `getOptimalNewCameraMatrix()` is for when you crop the image to the valid area after full-image undistortion. `correctMatches`, I believe, just uses the feature matches and the camera geometry to nudge the feature points onto epipolar lines; that perhaps improves quality, but the principle should work without it if you trust your matches. – oarfish Jun 30 '20 at 10:18

1 Answer


As it turns out, my data is seemingly off. Using manually labelled correspondences, I determined that Possibility 1 and Possibility 2 are indeed correct, as one would expect.
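For reference, this is Possibility 1 written out end to end (a sketch; one caveat that is my own assumption rather than something I verified here: the RANSAC threshold of findEssentialMat is expressed in the same units as the points, so with normalized coordinates a pixel threshold like 3 should be divided by the focal length):

     Mat mask; // inlier mask
     undistortPoints(imgpts1, imgpts1, K, dist_coefficients); // -> normalized coordinates
     undistortPoints(imgpts2, imgpts2, K, dist_coefficients);
     double thresh = 3.0 / K.at<double>(0,0); // pixel threshold scaled to normalized units
     Mat E = findEssentialMat(imgpts1, imgpts2, 1.0, Point2d(0,0), RANSAC, 0.999, thresh, mask);
     recoverPose(E, imgpts1, imgpts2, R, t, 1.0, Point2d(0,0), mask);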

  • Your example above uses point (0, 0) as the pp parameter. Shouldn't that be the center of your image instead, like (320, 240) for a 640x480 image? – Derek Simkowiak Sep 23 '15 at 19:30
  • Which one do you mean? When calling `undistortPoints` like in the 1st example, the image points are normalised and then you need to use `(0,0)` as the principal point for further computations. – oarfish Sep 24 '15 at 12:39