
I'm working on a small project to align a camera to a fixed target whose dimensions in millimeters are known. The target has to appear horizontally and vertically centered in the camera image, with no rotation. To achieve this, the camera is mounted on a device with 6 DoF, so the camera can translate in x, y, z and rotate in pitch, roll, yaw. The target is detected in the image via four markers at its corners. I tried the following, but the alignment is not correct.

// Focal length in pixels for x and y, needed for the camera matrix
// (the 35mm-format frame is 24mm tall, hence the division by 24)
double fx = img.getHeight() * img.getMetaDouble(Exif::FOCAL_LENGTH_35MM) / 24.;   

// Principal point of the camera
double center_x = 1931.3; 
double center_y = 1336.9;

// Creation of the camera matrix
cv::Mat camMatrix = (cv::Mat_<double>(3, 3) <<
                     fx,  0,  center_x,
                     0,  fx,  center_y,
                     0,  0,     1);

// worldPoints are in an arbitrary coordinate system
// (here: centered on the target, units in mm)
std::vector<cv::Point3d> worldPoints;
worldPoints.push_back(cv::Point3d(-target.width_mm / 2., -target.height_mm / 2., 0));
worldPoints.push_back(cv::Point3d(target.width_mm / 2., -target.height_mm / 2., 0));
worldPoints.push_back(cv::Point3d(target.width_mm / 2., target.height_mm / 2., 0));
worldPoints.push_back(cv::Point3d(-target.width_mm / 2., target.height_mm / 2., 0));

// imagePoints are the detected markers in the image
std::vector<cv::Point2d> imagePoints;
imagePoints.push_back(cv::Point2d(targetInImage.top_left.getX(), targetInImage.top_left.getY()));
imagePoints.push_back(cv::Point2d(targetInImage.top_right.getX(), targetInImage.top_right.getY()));
imagePoints.push_back(cv::Point2d(targetInImage.bottom_right.getX(), targetInImage.bottom_right.getY()));
imagePoints.push_back(cv::Point2d(targetInImage.bottom_left.getX(), targetInImage.bottom_left.getY()));

cv::Mat rVec, tVec;

// Solve for pose of object in image
cv::solvePnP(worldPoints, imagePoints, camMatrix, cv::noArray(), rVec, tVec);

cv::Mat rotationMatrix;
cv::Rodrigues(rVec, rotationMatrix);
// Invert the pose: camera pose in the target's coordinate system
rotationMatrix = rotationMatrix.t();
tVec = -rotationMatrix * tVec;

double* _r = rotationMatrix.ptr<double>();
double projMatrix[12] = {_r[0], _r[1], _r[2], 0,
                         _r[3], _r[4], _r[5], 0,
                         _r[6], _r[7], _r[8], 0
                        };

cv::Mat cameraMatrix, rotMatrix, transVect;
cv::Mat rotMatrixX, rotMatrixY, rotMatrixZ;
cv::Vec3d eulerAngles;  // in degrees: rotations about x, y, z
cv::decomposeProjectionMatrix(cv::Mat(3, 4, CV_64FC1, projMatrix),
                              cameraMatrix,
                              rotMatrix,
                              transVect,
                              rotMatrixX,
                              rotMatrixY,
                              rotMatrixZ,
                              eulerAngles);

// Extract single results (axes remapped into the device's coordinate system)
double x = tVec.at<double>(2, 0);
double y = tVec.at<double>(0, 0);
double z = tVec.at<double>(1, 0);

double roll = eulerAngles[2];
double pitch = eulerAngles[0];
double yaw = eulerAngles[1];

The resulting values do not move the camera so that the target is aligned as in the picture. What am I doing wrong? Did I mix up the different coordinate systems?


  • Can you please tell us, what exactly is not working? Horizontal/Vertical alignment or rotations or something else? – xMutzelx Apr 05 '18 at 12:07
  • Neither translation nor rotation is correct. The target is still tilted and off-center. But did I understand correctly that solvePnP solves for rotation/translation of the target with respect to the camera, or vice versa? – RobRobRob Apr 05 '18 at 12:14

0 Answers