Multi-camera calibration with overlapping views is straightforward. However, I want to calibrate a multi-camera rig without overlapping views, which is back-to-back. I looked at some resources and found that opencv's hand-eye calibration seems to solve my problem.
To verify my ideas, I assume a simple case where I use hand-eye calibration to calibrate a stereo camera setting. In my setup, two cameras construct a rig and face the same aruco board. My goal is to find the extrinsic between the two cameras.
Based on OpenCV documentation, I created the input as follows:
pripper -> left camera; base -> aruco board; target -> same board; cam -> right camera; board2leftCam or board2rightCam can be calculated using aruco::estimatePoseBoard()
Procedure:
R_gripper2base -> board2leftCam.inv();
R_target2cam -> board2rightCam.
R_cam2gripper -> right2left.
the output should be cam2gripper -> right camera to left camera. but the result is very different from stereo calibration.
Why is my calibration result wrong? Thank you in advance!
My Code:
/*
* R_lefCam2Board => R_gripper2base
*/
std::vector<cv::Mat> rvecMat_lefCam2Board, tvecMat_lefCam2Board;
for (size_t i = 0; i < lefCamfilenames.size(); ++i) {
cv::Mat matLef = cv::imread(lefCamfilenames[i]);
vector<int> ids;
vector<vector<Point2f>> corners, rejected;
aruco::detectMarkers(matLef, dictionary, corners, ids, params, rejected);
cv::Vec3d rvec, tvec; // from floor 2 bot camera
aruco::estimatePoseBoard(corners, ids, board, botCam.mtx, botCam.dist, rvec, tvec);
cv::Mat pose = vec3d2Mat44(rvec, tvec); // pose is double
cv::Mat pose_inv = pose.inv();
pose_inv.convertTo(pose_inv, CV_32F);
cv::Mat R_lefCam2Board, T_lefCam2Board;
mat44ToRT_(pose_inv, R_lefCam2Board, T_lefCam2Board);
rvecMat_lefCam2Board.push_back(R_lefCam2Board);
tvecMat_lefCam2Board.push_back(T_lefCam2Board);
}
/*
* R_Board2botCam => R_target2cam
*/
std::vector<cv::Mat> rvecMat_board2botCam, tvecMat_board2botCam;
for (size_t i = 0; i < botCamfilenames.size(); ++i) {
cv::Mat matBot = cv::imread(botCamfilenames[i]);
vector<int> ids;
vector<vector<Point2f>> corners, rejected;
aruco::detectMarkers(matBot, dictionary, corners, ids, params, rejected);
cv::Vec3d rvec, tvec; // from floor 2 bot camera
aruco::estimatePoseBoard(corners, ids, board, botCam.mtx, botCam.dist, rvec, tvec);
cv::Mat pose = vec3d2Mat44(rvec, tvec); // pose is double
pose.convertTo(pose, CV_32F);
cv::Mat R_bot2floor, T_bot2floor;
mat44ToRT_(pose_inv, R_bot2floor, T_bot2floor);
rvecMat_board2botCam.push_back(R_bot2floor);
tvecMat_board2botCam.push_back(T_bot2floor);
}
/*
* Calculate result R_Bot2Left => R_cam2gripper
*/
cv::Mat R_bot2lef, T_bot2lef;
cv::calibrateHandEye(rvecMat_lefCam2Board, tvecMat_lefCam2Board,
rvecMat_board2botCam, tvecMat_board2botCam,
R_bot2lef, T_bot2lef);