
I'm currently trying to get the relative position of two Kinect v2s by having both cameras detect the position of a shared tracking pattern. Unfortunately, I can't seem to get the correct position of the pattern's origin.

This is my current code to get the position of the pattern relative to the camera:

// detect the asymmetric circles grid in the registered color image
std::vector<cv::Point2f> centers;
cv::findCirclesGrid( registeredColor, m_patternSize, centers, cv::CALIB_CB_ASYMMETRIC_GRID );

// estimate the pattern pose (m_corners holds the pattern's 3D object points)
cv::solvePnPRansac( m_corners, centers, m_camMat, m_distCoeffs, m_rvec, m_tvec, true );

// calculate the rotation matrix
cv::Matx33d rotMat;
cv::Rodrigues( m_rvec, rotMat );

// and put it in the 4x4 transformation matrix
transformMat = matx3ToMatx4(rotMat);

for( int i = 0; i < 3; ++i )
    transformMat(i,3) = m_tvec.at<double>(i);

// invert the pose
transformMat = transformMat.inv();

// take the translation column of the inverted transform as the origin position
cv::Vec3f originPosition( transformMat(0,3), transformMat(1,3), transformMat(2,3) );
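
The helper matx3ToMatx4 isn't shown above; presumably it just embeds the 3×3 rotation in the top-left block of an identity 4×4. A hypothetical equivalent, sketched with plain arrays instead of OpenCV types so it stands alone:

```cpp
#include <array>

using Mat3 = std::array<std::array<double, 3>, 3>;
using Mat4 = std::array<std::array<double, 4>, 4>;

// Hypothetical stand-in for matx3ToMatx4: copy the 3x3 rotation into the
// top-left block of a 4x4 homogeneous transform, identity elsewhere.
Mat4 matx3ToMatx4(const Mat3& r) {
    Mat4 m{};                                   // zero-initialized
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0;  // identity diagonal
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            m[i][j] = r[i][j];                  // rotation block
    return m;
}
```

The translation is then written into column 3 afterwards, exactly as in the loop above.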

Unfortunately, when I compare originPosition to the point in the point cloud that corresponds to the origin found in screen space (saved in centers.at(0) above), I get a very different result. The screenshot below shows the point cloud from the Kinect: the point at the screen-space position of the pattern's origin is drawn in red inside the red circle, and the point at originPosition in light blue inside the light blue circle. The screenshot was taken from directly in front of the pattern; originPosition also lies somewhat in front of it.

[Image: pattern origin found in screen space as a red dot in the red circle, and its solvePnP 3D position as a light blue dot in the light blue circle]

As you can see, the red dot sits perfectly in the first circle of the pattern, while the blue dot corresponding to originPosition is not even close. In particular, it is definitely not just a scaling issue along the vector from the camera to the origin. Also, findCirclesGrid is run on the registered color image, and the intrinsic parameters are taken from the camera itself, to ensure they match between the image and the computation of the point cloud.

Jay Tea

1 Answer


You have the transformation P -> P' given by R|T. To get the inverse transformation P' -> P, given by R'|T', just do:

R' = R.t();
T' = -R'* T;

And then

P = R' * P' + T'
Kamil Szelag
  • Isn't that the same as inverting R|T (padded to a 4x4 matrix)? – Jay Tea Oct 04 '17 at 08:23
  • I just implemented it like you suggested, but it still gives me the same wrong result as before. Do I understand correctly that P would be a point in the camera coordinate system and P' a point in the pattern's coordinate system? So if I just want to get the center of the pattern's system according to the camera's coordinate system, that would just be T', correct? – Jay Tea Oct 04 '17 at 09:41
  • 1
    This was actually correct. I picked this up after a long while of not working on it and discovered that the problem was earlier in the pipeline in the transformation of the points. – Jay Tea May 28 '18 at 15:43