
I am trying to compute 3D coordinates from several pairs of two-view point correspondences.
First, I used the MATLAB function estimateFundamentalMatrix() to get the fundamental matrix F of the matched points (more than 8 of them), which is:

F1 =[-0.000000221102386   0.000000127212463  -0.003908602702784
     -0.000000703461004  -0.000000008125894  -0.010618266198273
      0.003811584026121   0.012887141181108   0.999845683961494]

And my camera, which took these two pictures, was pre-calibrated with the intrinsic matrix:

K = [12636.6659110566, 0, 2541.60550098958
     0, 12643.3249022486, 1952.06628069233
     0, 0, 1]

From this information I then computed the essential matrix using:

E = K'*F*K
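As a sanity check, this step can be sketched in Python/NumPy (the MATLAB version works the same way). One detail worth noting: an E built from a noisy F will not be a valid essential matrix, so it is common to project it onto the closest valid one before decomposing it.

```python
import numpy as np

# F and K copied from the question
F = np.array([[-0.000000221102386,  0.000000127212463, -0.003908602702784],
              [-0.000000703461004, -0.000000008125894, -0.010618266198273],
              [ 0.003811584026121,  0.012887141181108,  0.999845683961494]])
K = np.array([[12636.6659110566, 0.0, 2541.60550098958],
              [0.0, 12643.3249022486, 1952.06628069233],
              [0.0, 0.0, 1.0]])

E = K.T @ F @ K

# A true essential matrix has singular values (s, s, 0). Replace the
# computed singular values with their average (and zero) to project E
# onto the closest valid essential matrix.
U, s, Vt = np.linalg.svd(E)
sigma = (s[0] + s[1]) / 2.0
E_clean = np.asarray(U @ np.diag([sigma, sigma, 0.0]) @ Vt)
```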

Decomposing E with SVD, I finally got the camera projection matrices:

P1 = K*[ I | 0 ] 

and

P2 = K*[ R | t ]

Where R and t are:

R = [ 0.657061402787646 -0.419110137500056  -0.626591577992727
     -0.352566614260743 -0.905543541110692   0.235982367268031
     -0.666308558758964  0.0658603659069099 -0.742761951588233]

t = [-0.940150699101422
      0.320030970080146
      0.117033504470591]
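For reference, the SVD step that produces the four candidate (R, t) pairs can be sketched in Python/NumPy; this is a version of the standard recipe from Hartley and Zisserman, and the function name here is illustrative (OpenCV's cv2.decomposeEssentialMat implements the same idea):

```python
import numpy as np

def decompose_essential(E):
    """Return the four candidate (R, t) pairs encoded in an essential matrix.

    t is recovered only up to sign and scale, which is why there are
    four solutions: (R1, t), (R1, -t), (R2, t), (R2, -t).
    """
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations (determinant +1)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

The physically correct pair is the one for which triangulated points end up in front of both cameras (positive depth); OpenCV's cv2.recoverPose performs that cheirality test automatically.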

I know there should be 4 possible solutions; however, my computed 3D coordinates seem to be incorrect.
I used the camera to take pictures of a FLAT object with marked points. I matched the points by hand (which means there should be no obvious mistakes in the raw data). But the result turned out to be a surface with a slight bending.
I guess this might be because the pictures were not corrected for lens distortion (but actually I remember I did correct them).

I just want to know whether this method of solving the 3D reconstruction problem is right, especially when we already know the camera intrinsic matrix.
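For completeness, the triangulation step that follows from P1 and P2 is usually a linear (DLT) solve per correspondence; a minimal Python/NumPy sketch, with an illustrative function name:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence.

    x1, x2 are pixel coordinates (2-vectors) in the two images;
    P1, P2 are the 3x4 camera projection matrices.
    """
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]           # null vector of A, up to scale
    return X[:3] / X[3]  # dehomogenize
```

To pick the right one of the 4 pose solutions, triangulate a few points with each candidate (R, t) and keep the pose for which the points have positive depth in both cameras. Also make sure the pixel coordinates fed in here are the undistorted ones, in the same convention as K.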

Edit by JCraft at Aug. 4: I have redone the process and got some pictures showing the problem; I will write another question with details and then post the link.

Edit by JCraft at Aug. 4: I have posted a new question: Calibrated camera get matched points for 3D reconstruction, ideal test failed. And @Schorsch, I really appreciate your help formatting my question. I will try to learn how to format inputs on SO and also try to improve my grammar. Thanks!

JCraft
  • Welcome to SO, JCraft! Could you please clarify, how your result differed from the expected output? Maybe you can upload example figures (e.g. on http://tinypic.com/) and link to them? Currently an answer to your question may simply be *yes* - which may not be what you are after. However, it is difficult to understand *a little bit banding* without seeing the original and the processed picture. – Schorsch Jul 31 '14 at 11:14
  • Hello Schorsch! Thanks for your reply. Yes, you are right, it is not easy to imagine what the problem looks like. I will try to redo the process and get some pictures to upload. – JCraft Aug 04 '14 at 05:20

2 Answers


If you only have the fundamental matrix and the intrinsics, you can only get a reconstruction up to scale. That is, your translation vector t is in some unknown units. You can get the 3D points in real units in several ways:

  • You need to have some reference points in the world with known distances between them. This way you can compute their coordinates in your unknown units and calculate the scale factor to convert your unknown units into real units.
  • You need to know the extrinsics of each camera relative to a common coordinate system. For example, you can have a checkerboard calibration pattern somewhere in your scene that you can detect and compute extrinsics from. See this example. By the way, if you know the extrinsics, you can compute the Fundamental matrix and the camera projection matrices directly, without having to match points.
  • You can do stereo calibration to estimate the R and the t between the cameras, which would also give you the Fundamental and the Essential matrices. See this example.
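The first option above (recovering metric scale from a known distance) amounts to a one-line correction once two reference points have been reconstructed; the numbers below are purely hypothetical:

```python
import numpy as np

def metric_scale(X_a, X_b, true_distance):
    """Scale factor converting reconstruction units to real units,
    given two reconstructed points whose real separation is known."""
    return true_distance / np.linalg.norm(np.asarray(X_a) - np.asarray(X_b))

# Hypothetical example: two markers reconstructed 0.25 units apart
# are known to be 50 mm apart on the object.
s = metric_scale([0.10, 0.0, 1.0], [0.35, 0.0, 1.0], 50.0)
# Multiply every reconstructed point (and t) by s to get coordinates in mm.
```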
Dima
  • Hi Dima, really appreciate your answer! Yes I know what you mean about calibrating the camera using checkerboard. And actually I used the checkerboard and calibration toolbox from caltech in matlab to get the intrinsic parameters (also did in OpenCV, similar result). – JCraft Aug 04 '14 at 04:08
  • Sry last comment was unable to be edited. What I am trying to test is such a condition: we want to use our camera to take pictures of a very large thing (for instance a tube 10m*10m*10m) and find out some points' 3D coordinates. In this situation, it is not easy to include the checkerboard in every picture. So the extrinsic parameters are not able to be computed directly. That is why I want to use known intrinsic matrix with matched points to get the extrinsic matrix, then do the reconstruction. – JCraft Aug 04 '14 at 05:18
  • In that case, you can try having a checkerboard in the field for each pair of cameras. So that cam1 and cam2 would have a common reference coordinate system, then cam2 and cam3, then cam3 and cam4, ... etc. If you have MATLAB version R2013b or later with the Computer Vision System Toolbox, try the Camera Calibrator app and the associated functions. http://www.mathworks.com/help/vision/ug/find-camera-parameters-with-the-camera-calibrator.html – Dima Aug 04 '14 at 12:34
  • That's really a good idea! I am trying to use some kind of calibration tool (it may contain 8 or 10 points, not as many as a checkerboard) to get a reference for each pair of views. I am doing the experiment now. The function I would like to try first is cvFindExtrinsicParams2() in OpenCV. – JCraft Aug 05 '14 at 03:39
  • Check out the `extrinsics` function in the Computer Vision System Toolbox for MATLAB. – Dima Aug 05 '14 at 13:16

Flat objects are critical surfaces; it is not possible to achieve your goal from them alone. Try adding two (or more) points off the plane (see Hartley and Zisserman, or another text on the matter, if still interested).

victor