It's hard to find the exact answer to my question, which is why I want to ask it here, even though parts of it are poorly explained elsewhere. I'll add the solution as code once it's solved, so that everyone can reproduce my results. But first it has to be solved.
It's about triangulation with OpenCV's cv::triangulatePoints(). I know it's poorly documented, but it can still be found on the willowgarage site (search for the 2.4.9 documentation).
1. For the chessboard calibration I use 22 stereo images (two cams) taken from different angles etc., to get stable intrinsic parameters.
2. Get the chessboard corners with cv::findChessboardCorners().
3. Optionally refine them with cv::cornerSubPix().
4. Create the objectPoints as cv::Point3f(k*m_squareSize, j*m_squareSize, 0.0f) over all chessboard points (k runs over the chessboard width, j over the chessboard height, or vice versa; m_squareSize is the real-world square size). Steps 2-4 are done for every image.
5. Put everything in here (a compact sketch of steps 2-5 follows this list):
    cv::calibrateCamera(object_points1, image_points1, image.size(), intrinsic1, distCoeffs1, rvecs1, tvecs1);
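To make this reproducible, here is a compact sketch of steps 2-5 for one camera. The board size of 8x6 matches my 48 corners per view, but the square size, image loading, and the helper name are placeholders, not my exact code:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Sketch of steps 2-5 for one camera; boardSize/m_squareSize are placeholders.
    void calibrateOneCamera(const std::vector<cv::Mat>& images,
                            cv::Mat& intrinsic, cv::Mat& distCoeffs,
                            std::vector<cv::Mat>& rvecs, std::vector<cv::Mat>& tvecs)
    {
        const cv::Size boardSize(8, 6);        // inner corners per row/column (48 per view)
        const float m_squareSize = 0.025f;     // real-world square size

        std::vector<std::vector<cv::Point2f>> image_points;
        std::vector<std::vector<cv::Point3f>> object_points;

        for (size_t i = 0; i < images.size(); ++i)
        {
            std::vector<cv::Point2f> corners;
            if (!cv::findChessboardCorners(images[i], boardSize, corners))
                continue;                      // skip views where the board is not found

            cv::Mat gray;
            cv::cvtColor(images[i], gray, cv::COLOR_BGR2GRAY);
            cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                             cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));

            std::vector<cv::Point3f> objs;     // planar board points, z = 0
            for (int j = 0; j < boardSize.height; ++j)
                for (int k = 0; k < boardSize.width; ++k)
                    objs.push_back(cv::Point3f(k * m_squareSize, j * m_squareSize, 0.0f));

            image_points.push_back(corners);
            object_points.push_back(objs);
        }

        cv::calibrateCamera(object_points, image_points, images[0].size(),
                            intrinsic, distCoeffs, rvecs, tvecs);
    }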
This function works perfectly, just like I wanted, and I do the same with the second camera's images. After long research I found this:
    cv::Mat R12 = R2*R1.t();
    cv::Mat T12 = T2 - R12*T1;
This is the relationship (R|T) between cam1 and cam2. It works pretty well; I've tested it against the results of cv::stereoCalibrate().
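For reference, this is how I compute that relationship from the per-view extrinsics returned by cv::calibrateCamera(); the helper name and the assumption that both rvec/tvec pairs belong to the same chessboard view are mine:

    #include <opencv2/opencv.hpp>

    // Pose of cam2 relative to cam1, computed from the extrinsics that both
    // cameras produced for the same chessboard view.
    void relativePose(const cv::Mat& rvec1, const cv::Mat& tvec1,
                      const cv::Mat& rvec2, const cv::Mat& tvec2,
                      cv::Mat& R12, cv::Mat& T12)
    {
        cv::Mat R1, R2;
        cv::Rodrigues(rvec1, R1);      // 3x1 rotation vector -> 3x3 rotation matrix
        cv::Rodrigues(rvec2, R2);

        R12 = R2 * R1.t();             // rotation part of the cam1 -> cam2 transform
        T12 = tvec2 - R12 * tvec1;     // translation part of the cam1 -> cam2 transform
    }

    // usage (view 0 of both cameras):
    // cv::Mat R12, T12;
    // relativePose(rvecs1[0], tvecs1[0], rvecs2[0], tvecs2[0], R12, T12);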
At this step I wanted to use my tvecs and rvecs for a re-projection, 2D to 3D and 3D to 2D. The last thing I got working is
    cv::projectPoints(object_points1[i], rvecs1[i], tvecs1[i], intrinsic1, distCoeffs1, imagePoints1);
This works fine: across 48*2*22 points the maximum difference was 2 pixels, and that at only one single point.
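The per-view check I do looks roughly like this (the helper name is just for illustration):

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Re-projection check for one view: project the board points with the
    // calibrated parameters and compare them to the detected corners.
    double maxReprojectionError(const std::vector<cv::Point3f>& objectPts,
                                const std::vector<cv::Point2f>& detectedPts,
                                const cv::Mat& rvec, const cv::Mat& tvec,
                                const cv::Mat& intrinsic, const cv::Mat& distCoeffs)
    {
        std::vector<cv::Point2f> reprojected;
        cv::projectPoints(objectPts, rvec, tvec, intrinsic, distCoeffs, reprojected);

        double maxErr = 0.0;
        for (size_t p = 0; p < detectedPts.size(); ++p)
        {
            const cv::Point2f d = detectedPts[p] - reprojected[p];
            maxErr = std::max(maxErr, std::sqrt((double)(d.x * d.x + d.y * d.y)));
        }
        return maxErr;    // stays at about 2 pixels at worst for my 48*2*22 points
    }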
But now I can't get any further with the triangulation from 2D to 3D. I have to use cv::triangulatePoints(). How do I get this to work?
What I have tried so far, without a good result: You first need the projection matrices P1 and P2.
    cv::Matx34d P1 = cv::Matx34d(R1.at<double>(0,0), R1.at<double>(0,1), R1.at<double>(0,2), tvecs1[0].at<double>(0),
                                 R1.at<double>(1,0), R1.at<double>(1,1), R1.at<double>(1,2), tvecs1[0].at<double>(1),
                                 R1.at<double>(2,0), R1.at<double>(2,1), R1.at<double>(2,2), tvecs1[0].at<double>(2));
Sorry, this looks heavy, but it's only the rotation matrix (cv::Rodrigues(rvecs1[0], R1)) and the translation vector tvecs1. Is this wrong? Do I have to invert the rotation matrix R1?
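For completeness, here is the same construction written with cv::Rodrigues() and cv::hconcat() for both cameras (the helper name is mine; whether [R|t] alone is the right projection matrix here is exactly what I am unsure about):

    #include <opencv2/opencv.hpp>

    // Builds a 3x4 matrix [R|t] from one view's extrinsics; same construction
    // as the cv::Matx34d above, just written more compactly.
    cv::Mat buildProjectionMatrix(const cv::Mat& rvec, const cv::Mat& tvec)
    {
        cv::Mat R;
        cv::Rodrigues(rvec, R);      // 3x1 rotation vector -> 3x3 rotation matrix

        cv::Mat P;
        cv::hconcat(R, tvec, P);     // P = [R | t], 3x4
        return P;
    }

    // usage (view 0 of each camera):
    // cv::Mat P1 = buildProjectionMatrix(rvecs1[0], tvecs1[0]);
    // cv::Mat P2 = buildProjectionMatrix(rvecs2[0], tvecs2[0]);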
Next step: you need the imagePoints (corners) in the left and right image. These imagePoints are the undistorted corners, which I originally got from cv::findChessboardCorners().
Then I use
    cv::triangulatePoints(P1, P2, cv::Mat(undisPt_set1).reshape(1,2), cv::Mat(undisPt_set2).reshape(1,2), point3d);
point3d is a "4D" point in homogeneous coordinates, where the fourth component has to be eliminated by
    cv::convertPointsHomogeneous(point3d.reshape(4, 1), point3dCam1);
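Spelled out as a self-contained function, this is exactly the call sequence I am using right now (the function name is mine; the reshape layout is one of the things I am not sure about):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Exactly the triangulation calls described above, wrapped so others can
    // reproduce them; undisPt_set1/undisPt_set2 are the undistorted corners
    // of one view in the left/right image.
    cv::Mat triangulateOneView(const cv::Matx34d& P1, const cv::Matx34d& P2,
                               const std::vector<cv::Point2f>& undisPt_set1,
                               const std::vector<cv::Point2f>& undisPt_set2)
    {
        cv::Mat point3d;             // homogeneous ("4D") output of the triangulation
        cv::triangulatePoints(P1, P2,
                              cv::Mat(undisPt_set1).reshape(1, 2),
                              cv::Mat(undisPt_set2).reshape(1, 2),
                              point3d);

        cv::Mat point3dCam1;         // fourth coordinate eliminated
        cv::convertPointsHomogeneous(point3d.reshape(4, 1), point3dCam1);
        return point3dCam1;
    }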
This is what I have done so far, but it's not working. Does somebody know what I am doing wrong? Are any of my thoughts about the last steps wrong? I've tried the math, but I am not sure about the projection matrices P1 and P2. I know they are built like [R|t], but is that my rvecs and tvecs? Transposed or inverted? Any help would be great, but please help me out with some code or clear steps, not with more links where I should read and think. I really did my research; I have the Learning OpenCV book, the OpenCV 2 cookbook, and Hartley and Zisserman here in front of me. But I can't get to it.