
Detail:
I use solvePnP() from OpenCV 3 under ROS Kinetic to estimate the pose of my robot from LED markers.
The camera resolution is 128x128.
When I run my code, OpenCV always returns unreasonable results, while MATLAB gives much more plausible ones.
It seems I have misunderstood something about PnP. I am still confused after reading the OpenCV and MATLAB documentation, and I do not know where I went wrong.

Here is the C++ code:

#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

vector<Point2f> p2d;
p2d.push_back(Point2f(65, 55));
p2d.push_back(Point2f(91, 57));
p2d.push_back(Point2f(63, 83));
p2d.push_back(Point2f(90, 85));

vector<Point3f> p3d;
p3d.push_back(Point3f(0.687076330185, 0.966806054115, 1.80424296856));   // top left
p3d.push_back(Point3f(-0.18462985754, 0.96837836504, 1.79057383537));    // top right
p3d.push_back(Point3f(0.691743910313, 0.056151561439, 1.73733282089));   // bottom left
p3d.push_back(Point3f(-0.179948031902, 0.0538983009756, 1.74941062927)); // bottom right

Mat cameraMatrix = Mat::zeros(3, 3, CV_64FC1); // [fx, 0, u0; 0, fy, v0; 0, 0, 1]
cameraMatrix.at<double>(0,0) = 1.138594e+02;
//cameraMatrix.at<double>(0,1) = -0.0797; // skew, currently ignored
cameraMatrix.at<double>(1,1) = 1.139064e+02;
cameraMatrix.at<double>(0,2) = 7.63093e+01;
cameraMatrix.at<double>(1,2) = 6.68583e+01;
cameraMatrix.at<double>(2,2) = 1;

Mat distortion = Mat::zeros(5, 1, CV_64FC1); // k1, k2, p1, p2, k3
distortion.at<double>(0,0) = -2.837e-01;
distortion.at<double>(1,0) = 0.0882e+00;
distortion.at<double>(2,0) = -0.001774514973451;
distortion.at<double>(3,0) = 6.2359e-05;
distortion.at<double>(4,0) = 7.973409664780424e-04;

Mat rvec = Mat::zeros(3, 1, CV_64FC1);
Mat tvec = Mat::zeros(3, 1, CV_64FC1);
solvePnP(Mat(p3d), Mat(p2d), cameraMatrix, distortion, rvec, tvec, false, CV_ITERATIVE);

The results of the CV_ITERATIVE method are shown below:

pose: 
position: 
x: 0.498402438763
y: 0.409237140455
z: 1.87770535106
orientation: 
x: -0.0660728111319
y: 0.0615960086814
z: 0.995333090477
w: -0.0339463450942

The results of the CV_EPNP method are shown below:

pose: 
position: 
x: 0.0447529023645
y: 0.640655332601
z: 1.92705476321
orientation: 
x: 0.0605125209219
y: -0.00392024438163
z: 0.997734267547
w: -0.0291409996095

The results of the CV_P3P method are shown below:

pose: 
position: 
x: 0.609777855777
y: 0.962804613723
z: 2.20123033932
orientation: 
x: -0.114062563556
y: -0.0956252432628
z: 0.988152306053
w: -0.0374241163864

Here is the MATLAB code:

tangentialDistortion = [-0.001774514973451,6.230176254781805e-05];
radialDistortion = [-0.283675926885368,0.088238919987596,7.973409664780424e-04]; 

IntrinsicMatrix = [113.859398285567 0   0
        -0.0797237890933342 113.906402839770    0
        76.3090526736352    66.8576711008518    1];
cameraParams = cameraParameters('IntrinsicMatrix',IntrinsicMatrix,'RadialDistortion',radialDistortion,'TangentialDistortion',tangentialDistortion);
imagePoints = [65,55;91,57;63,83;90,85];
[undistortedPoints, reprojectionErrors] = undistortPoints(imagePoints,cameraParams);
worldPoints = [0.687080681324,0.966781198978,1.80424845219;
           -0.18462985754,0.96837836504,1.79057383537;
           0.691743910313,0.056151561439,1.73733282089;
           -0.179948031902,0.0538983009756,1.74941062927
          ];
[worldOrientation,worldLocation] = estimateWorldCameraPose(undistortedPoints,worldPoints,cameraParams);

The results of the MATLAB (P3P) method are shown below:

orientation:
-0.990479259493302  0.0577122306046552  -0.124980538293912
-0.0690583764694848 -0.993684823835936  0.0884387444367233
-0.119087266964189  0.0962276951650630  0.988209721430740
location:
0.734846969941507   0.244533065842483   -1.87846167697158

Groundtruth position of the camera holder:

//A, B, C are the three LEDs on the camera holder
0.513234317303 ,0.13388569653   ,-1.93677461147  //A
0.199585437775 ,0.0487480945885 ,-1.94162213802  //B
0.0308324769139,0.00147355603985,-1.9436986446   //C
0.2478840773   ,0.0613691157,    -1.9406984647   //average

Can someone help me?

This problem has troubled me for a long time and I could not find any related answer.
Thank you for your time!

  • `0.2478840773 ,0.0613691157, -1.9406984647 //average` If it is `(X,Y,Z)` that should correspond to `tvec`, how can Z be negative? `rvec` and `tvec` returned by `solvePnP()` correspond to the transformation from the object frame to the camera frame. The camera frame is right-handed, as shown [here](https://docs.opencv.org/3.4.1/d9/d0c/group__calib3d.html#details). Maybe negative Z is not handled in OpenCV? It also looks like Matlab uses column-major order, judging by the Matlab intrinsics? – Catree Mar 29 '18 at 09:40
  • Also, `rvec` is a Rodrigues rotation vector (`3x1`). It looks like you convert it to a quaternion? Anyway, you can use `Rodrigues()` to convert from `rvec` to `R`. If you want to debug more easily, you should use simple data to test whether the OpenCV and Matlab code return correct results: simple 3D points, known intrinsics, `rvec`, `tvec`; project the 3D points to get 2D points and feed that data to the two methods. – Catree Mar 29 '18 at 09:40
  • Yep, the groundtruth is (X, Y, Z) and all these values come from a mocap system whose Y axis points up. I will test your suggestion in the coming days and report back here. Thank you! – Brian Chan Mar 29 '18 at 11:58

0 Answers