
This is my first time doing image processing, so I have a lot of questions. I have two pictures taken from different positions, one from the left and the other from the right, as in the picture below. [![enter image description here][1]][1]

Step 1: Read the images using the `imread` function

  I1 = imread('DSC01063.jpg');

  I2 = imread('DSC01064.jpg');

Step 2: Use the Camera Calibrator app in MATLAB to get the `cameraParameters`

  load cameraParams.mat 

Step 3: Remove lens distortion using the `undistortImage` function

  [I1, newOrigin1] = undistortImage(I1, cameraParams, 'OutputView', 'same');

  [I2, newOrigin2] = undistortImage(I2, cameraParams, 'OutputView', 'same');

Step 4: Detect feature points using the `detectSURFFeatures` function

  imagePoints1 = detectSURFFeatures(rgb2gray(I1), 'MetricThreshold', 600);

  imagePoints2 = detectSURFFeatures(rgb2gray(I2), 'MetricThreshold', 600);

Step 5: Extract feature descriptors using the `extractFeatures` function

  [features1, validPoints1] = extractFeatures(rgb2gray(I1), imagePoints1);

  [features2, validPoints2] = extractFeatures(rgb2gray(I2), imagePoints2);

Step 6: Match features using the `matchFeatures` function

  indexPairs = matchFeatures(features1, features2, 'MaxRatio', 1);

  matchedPoints1 = validPoints1(indexPairs(:, 1));

  matchedPoints2 = validPoints2(indexPairs(:, 2));

From there, how can I construct the 3D point cloud? In Step 2, I used the checkerboard shown in the attached picture to calibrate the camera. [![enter image description here][2]][2]

The square size is 23 mm, and from cameraParams.mat I know the intrinsic matrix (or camera calibration matrix K), which has the form `K = [alphax 0 x0; 0 alphay y0; 0 0 1]`.
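
If I understand correctly, K can be read out of cameraParams like this (a one-line sketch; note that MATLAB stores the intrinsic matrix transposed relative to the textbook form):

    % cameraParams from the Camera Calibrator app (Step 2).
    % MATLAB stores IntrinsicMatrix transposed, so transpose it to get
    % K = [alphax 0 x0; 0 alphay y0; 0 0 1].
    K = cameraParams.IntrinsicMatrix';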

I need to compute the fundamental matrix F and the essential matrix E in order to calculate the camera matrices P1 and P2, right?

After that, when I have the camera matrices P1 and P2, I will use the linear triangulation method to estimate the 3D point cloud. Is that the correct way?
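
For the linear triangulation step, this is the kind of DLT routine I have in mind (a sketch following the textbook convention x = P*X with column vectors; P1 and P2 are the 3x4 camera matrices):

    % Linear (DLT) triangulation of one matched pair (sketch, textbook
    % convention). x1 = [u1; v1] and x2 = [u2; v2] are pixel coordinates.
    function X = linearTriangulation(P1, P2, x1, x2)
        % Each image point contributes two linear equations in X: A*X = 0.
        A = [x1(1)*P1(3,:) - P1(1,:);
             x1(2)*P1(3,:) - P1(2,:);
             x2(1)*P2(3,:) - P2(1,:);
             x2(2)*P2(3,:) - P2(2,:)];
        [~, ~, V] = svd(A);        % least-squares null vector of A
        X = V(:, end);
        X = X(1:3) / X(4);         % dehomogenize to a 3D point
    end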

I would appreciate any suggestions.

Thanks!

TRI TRAN
  • Sorry! I can not post the pictures. – TRI TRAN Aug 10 '15 at 23:16
  • If you set the `OutputView` parameter of `undistortImage` to `same`, then you do not have to care about the `newOrigin`, because it is [0 0]. – Dima Aug 12 '15 at 13:43
  • @TRITRAN, did you get your code to work with 2 images? If so, can you show me the full code please? I need it for my project; it's the last part needed to complete it. Thanks – Zame Dec 19 '15 at 13:21

2 Answers


To triangulate the points you need the so-called "camera matrices" and the 2D points in each of the images (which you already have).

In MATLAB you have the function `triangulate`, which does the job for you.

If you have calibrated the cameras, you should have this information already. Anyway, here is an example of how to create the `stereoParams` object needed for the triangulation.
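
For example, a minimal sketch, assuming the rotation R (3x3) and translation t (1x3) of camera 2 relative to camera 1 are already known (e.g. from a stereo calibration), and reusing the cameraParams from the question for both cameras:

    % Build a stereoParameters object and triangulate the matched points.
    % R and t are assumed known here (e.g. from a stereo calibration).
    stereoParams = stereoParameters(cameraParams, cameraParams, R, t);
    [worldPoints, reprojectionErrors] = triangulate(matchedPoints1, ...
        matchedPoints2, stereoParams);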

Ander Biguri
  • @TRITRAN if it was helpful, consider accepting it as an answer – Ander Biguri Aug 11 '15 at 14:23
  • I'm still confused. If I use the function triangulate like this: [worldPoints,reprojectionErrors] = triangulate(matchedPoints1,matchedPoints2,cameraMatrix1,cameraMatrix2), I need cameraMatrix1 and cameraMatrix2, but I don't have these camera matrices. For the reasons of my project, I can't have the checkerboard in the two 2D images. – TRI TRAN Aug 11 '15 at 14:28
  • @TRITRAN If you don't have those values, I don't know if you can get the world coordinates... You need to know where the cameras are and their relationship to be able to get the 3D coords. You can probably find some state-of-the-art algorithm in a research paper that does what you want, but you probably can't with "standard" ways. – Ander Biguri Aug 11 '15 at 14:31
  • I have the cameraParams.mat in Step 2 and all the pictures, but I can't post them here. I used 16 pictures of checkerboard to calibrate the camera and in the file cameraParams.mat I had some parameters like RotationMatrices (3x3x16 double), TranslationVectors (16x3 double), IntrinsicMatrix (3x3 double) and so on.... – TRI TRAN Aug 11 '15 at 14:41

Yes, that is the correct way. Now that you have matched points, you can use `estimateFundamentalMatrix` to compute the fundamental matrix F. Then you get the essential matrix E by multiplying F by the intrinsic matrix on both sides (E = K' * F * K). Be careful about the order of multiplication, because the intrinsic matrix in `cameraParameters` is transposed relative to what you see in most textbooks.

Now you have to decompose E into a rotation and a translation, from which you can construct the camera matrix for the second camera using `cameraMatrix`. You also need the camera matrix for the first camera, for which the rotation is the 3x3 identity matrix and the translation is a 3-element zero vector.

Edit: there is now a `cameraPose` function in MATLAB, which computes an up-to-scale relative pose ('R' and 't') given the fundamental matrix and the camera parameters.
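
Putting the steps together, a minimal sketch (the RANSAC parameters and the manual pose-to-extrinsics conversion below are my assumptions, not something from the question):

    % 1) Fundamental matrix with RANSAC to reject bad matches.
    [F, inliers] = estimateFundamentalMatrix(matchedPoints1, matchedPoints2, ...
        'Method', 'RANSAC', 'NumTrials', 10000, 'DistanceThreshold', 0.1);
    inlierPoints1 = matchedPoints1(inliers);
    inlierPoints2 = matchedPoints2(inliers);

    % 2) Up-to-scale pose of camera 2 relative to camera 1.
    [orient, loc] = cameraPose(F, cameraParams, inlierPoints1, inlierPoints2);

    % 3) Convert pose (orientation/location) to extrinsics (R/t);
    %    MATLAB uses row-vector conventions, hence the transposes.
    R = orient';
    t = -loc * orient';

    % 4) 4-by-3 camera matrices in MATLAB's (transposed) convention.
    camMatrix1 = cameraMatrix(cameraParams, eye(3), [0 0 0]);
    camMatrix2 = cameraMatrix(cameraParams, R, t);

    % 5) Triangulate; the point cloud is up to scale (unknown baseline).
    [worldPoints, reprojErrors] = triangulate(inlierPoints1, inlierPoints2, ...
        camMatrix1, camMatrix2);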

Dima
  • Hi Dima! Thank you so much. As the next step, I compute the fundamental matrix F, and from the camera calibration matrix K (which has the form K=[alphax 0 x0; 0 alphay y0; 0 0 1]), I find the essential matrix E = K' * F * K – TRI TRAN Aug 14 '15 at 12:51
  • And I assume camera matrix P1 = K * eye(3,4). Then decompose E: [U,S,V] = svd(E) and I get 4 solutions for camera matrix P2 – TRI TRAN Aug 14 '15 at 12:54
  • P2(:,:,1) = [U*W*V', u3]; P2(:,:,2) = [U*W*V', -u3]; P2(:,:,3) = [U*W'*V', u3]; P2(:,:,4) = [U*W'*V', -u3]; – TRI TRAN Aug 14 '15 at 12:55
  • After that, I find the correct one, P2final. By using the linear triangulation method, I can estimate the 3D point cloud. Do I need to compute the reprojection error in order to eliminate the noisy points? And is that the correct way? – TRI TRAN Aug 14 '15 at 13:05
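
Regarding the last comment, a small sketch of the pruning step (the 1-pixel threshold is an arbitrary assumption; tune it for your data):

    % Prune noisy triangulated points (camMatrix1/camMatrix2 as above).
    [worldPoints, reprojErrors] = triangulate(inlierPoints1, inlierPoints2, ...
        camMatrix1, camMatrix2);
    inFront  = worldPoints(:, 3) > 0;    % positive depth w.r.t. camera 1
    accurate = reprojErrors < 1;         % reprojection error in pixels
    cloud = worldPoints(inFront & accurate, :);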