
I'm using OpenCV 2.4.6 and the Kinect SDK to calibrate multiple Kinects against each other. After I grab image data from the Kinects, I convert it to OpenCV images and, following some tutorials (e.g. RGBDemo), use the following pipeline:

//convert to grayscale first, then detect and refine the corners
cvtColor(*image, gray_image, CV_BGR2GRAY);
bool found = cv::findChessboardCorners(gray_image, patternSize, corners, CV_CALIB_CB_NORMALIZE_IMAGE|CV_CALIB_CB_ADAPTIVE_THRESH);
if (found) //only refine if the full board was actually detected
    cornerSubPix(gray_image, corners, cv::Size(5,5), cv::Size(-1,-1), cvTermCriteria( CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 30, 0.1 ));
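
For reference, the 3D patternPoints fed to the calibration below are just the board's corner grid, and the unit chosen for the square size fixes the unit of the translation vector later. A minimal sketch, assuming a hypothetical squareSize in metres:

//build the board's 3D corners once and reuse them for every view
std::vector<cv::Point3f> boardCorners;
const float squareSize = 0.025f; //assumed edge length of one square, in metres
for (int i = 0; i < patternSize.height; ++i)
    for (int j = 0; j < patternSize.width; ++j)
        boardCorners.push_back(cv::Point3f(j * squareSize, i * squareSize, 0.0f));
std::vector<std::vector<cv::Point3f> > patternPoints(20, boardCorners); //one copy per view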

//after I collect 20 sets of corners from each Kinect, run the calibration
CalcCameraIntrinsic(corner_src, rgb_intr_src, coeff_src);    //intrinsics of the first Kinect
CalcCameraIntrinsic(corner_dist, rgb_intr_dist, coeff_dist); //intrinsics of the second Kinect

//patternPoints holds the board's 3D corners per view; T comes out in the
//same units as those points (e.g. metres if the square size is in metres)
cv::stereoCalibrate(patternPoints, corner_src, corner_dist, rgb_intr_src, coeff_src, rgb_intr_dist, coeff_dist,
    cv::Size(width, height), R, T, E, F, 
    cv::TermCriteria(cv::TermCriteria::COUNT+cv::TermCriteria::EPS, 50, 1e-6), cv::CALIB_FIX_INTRINSIC);
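
With CALIB_FIX_INTRINSIC, stereoCalibrate only refines the extrinsics: R and T map a point from the first camera's frame into the second camera's frame (p2 = R*p1 + T). A minimal sketch of applying them to one cloud, assuming a hypothetical cloudSrc with 3D points in metres in the first Kinect's camera coordinates:

//build the 3x4 affine matrix [R|T] and apply p_dst = R*p_src + T to every point
cv::Mat RT;
cv::hconcat(R, T, RT);            //R, T from stereoCalibrate are CV_64F
RT.convertTo(RT, CV_32F);         //match the float channels of Point3f
std::vector<cv::Point3f> cloudDst;
cv::transform(cloudSrc, cloudDst, RT); //cloudDst is now in the second Kinect's frame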

I believe my corner positions are correct, because drawChessboardCorners shows no errors. After all these steps I get a rotation matrix and a translation vector, but when I apply that transform to the point clouds from the Kinects, they are not aligned.

I have no idea why. I don't think it's the order of the images: no matter which point cloud I apply the transform to, I can't get a correct alignment. My only remaining guess is that I'm passing the wrong parameters to the OpenCV functions.

Thanks for your attention!

8-20 Edit: Although no one has answered yet, I found one possible reason: my point cloud is in pixel units, while the matrices I get from OpenCV are in metres. I converted the point cloud to metres, but the result is still wrong. Since the matrices now look plausible, I suspect the problem is in my display function instead. I will post the conclusion once I find the answer.
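
For completeness, turning a depth map into a metric cloud only needs the pinhole intrinsics. A minimal back-projection sketch, with assumed names: a hypothetical depth map in millimetres (CV_16U) and rgb_intr_src holding fx, fy, cx, cy:

//back-project each pixel (u,v) with depth z into camera coordinates, in metres
float fx = rgb_intr_src.at<double>(0,0), fy = rgb_intr_src.at<double>(1,1);
float cx = rgb_intr_src.at<double>(0,2), cy = rgb_intr_src.at<double>(1,2);
std::vector<cv::Point3f> cloud;
for (int v = 0; v < depth.rows; ++v)
    for (int u = 0; u < depth.cols; ++u) {
        float z = depth.at<unsigned short>(v, u) * 0.001f; //mm -> m
        if (z > 0.0f)
            cloud.push_back(cv::Point3f((u - cx) * z / fx, (v - cy) * z / fy, z));
    }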

8-21 Edit: I have found the reason: I mixed up the coordinate conventions of OpenCV and OpenGL. With that fixed, the matrix aligns the two point clouds, though not yet perfectly.
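
For anyone hitting the same wall: OpenCV's camera frame is x right, y down, z forward, while OpenGL's eye space is x right, y up, z backward, so a cloud computed with OpenCV conventions must be flipped before rendering. A minimal sketch:

//flip from OpenCV camera coordinates to OpenGL eye coordinates
for (size_t k = 0; k < cloud.size(); ++k) {
    cloud[k].y = -cloud[k].y; //y down -> y up
    cloud[k].z = -cloud[k].z; //z forward -> z backward
}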
