
I have two images of the same object from different views. I want to perform a camera calibration, but from what I have read so far I need 3D world points to get the camera matrix. I am stuck at this step; can someone explain it to me?

sasa

1 Answer


Popular camera calibration methods use 2D-3D point correspondences to determine the projective properties of a camera (intrinsic parameters) and its pose (extrinsic parameters). The simplest approach is the Direct Linear Transformation (DLT).
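
For illustration, here is a minimal NumPy sketch of the DLT (the function name dlt_projection_matrix is just an example): each 2D-3D correspondence contributes two linear equations in the twelve entries of the 3x4 projection matrix P, and with at least six points in general position P can be recovered from the null space of the stacked system via SVD.

    import numpy as np

    def dlt_projection_matrix(points_3d, points_2d):
        """Estimate the 3x4 projection matrix P from n >= 6 point
        correspondences with the Direct Linear Transformation.

        points_3d: (n, 3) array of world coordinates
        points_2d: (n, 2) array of pixel coordinates
        """
        n = len(points_3d)
        A = np.zeros((2 * n, 12))
        for i, ((X, Y, Z), (u, v)) in enumerate(zip(points_3d, points_2d)):
            Xh = np.array([X, Y, Z, 1.0])
            # Two equations per point, derived from
            # u = (p1 . X) / (p3 . X) and v = (p2 . X) / (p3 . X):
            A[2 * i, 0:4] = Xh
            A[2 * i, 8:12] = -u * Xh
            A[2 * i + 1, 4:8] = Xh
            A[2 * i + 1, 8:12] = -v * Xh
        # The solution is the right singular vector belonging to the
        # smallest singular value; P is only defined up to scale.
        _, _, Vt = np.linalg.svd(A)
        return Vt[-1].reshape(3, 4)

Note that this basic form is sensitive to noise; in practice the point coordinates are usually normalized first (Hartley's normalized DLT).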

You might have seen that planar chessboards are often used for camera calibration. The 3D coordinates of the board's corners can be chosen freely by the user; many people place the chessboard in the x-y plane, so every corner has the form [x, y, 0]'. However, the 3D coordinates need to be consistent with each other.
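
If you do have a chessboard at hand, the usual OpenCV workflow looks roughly like this (a sketch, assuming a board with 9x6 inner corners and image files named calib_*.png; one chessboard square serves as the unit of length):

    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)  # inner corners per row/column (assumed board size)

    # 3D corner coordinates chosen by the user: the board lies in the
    # x-y plane, so every corner has the form [x, y, 0].
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    objpoints, imgpoints = [], []
    for fname in glob.glob('calib_*.png'):  # hypothetical file names
        gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            objpoints.append(objp)
            imgpoints.append(corners)

    # Intrinsic matrix K, distortion coefficients, and one pose per image.
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        objpoints, imgpoints, gray.shape[::-1], None, None)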

Coming back to your object: define your own 3D coordinate system over the object and find at least six spots whose 3D positions you can determine easily. Once you have those, you have to find their corresponding 2D (pixel) positions in your two images.
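
As a toy end-to-end example (entirely synthetic numbers, reusing the dlt_projection_matrix sketch from above): project six spots of an imagined object with a known camera, recover P with the DLT, and split it back into intrinsic and extrinsic parameters with OpenCV's decomposeProjectionMatrix.

    import cv2
    import numpy as np

    # Synthetic ground truth, purely for illustration.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    R, _ = cv2.Rodrigues(np.array([0.1, -0.2, 0.05]))
    t = np.array([[1.0], [0.5], [30.0]])
    P_true = K @ np.hstack([R, t])

    # Six non-coplanar spots on an imagined 10x10x10 object.
    object_points = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0],
                              [0, 0, 10], [10, 10, 0], [10, 0, 10]], float)

    # Their projections -- in a real setup these are the pixel
    # coordinates you measure by hand in each image.
    Xh = np.hstack([object_points, np.ones((6, 1))])
    proj = (P_true @ Xh.T).T
    pixels = proj[:, :2] / proj[:, 2:]

    P = dlt_projection_matrix(object_points, pixels)

    # Decompose P into intrinsics and pose; K is recovered up to scale.
    K_est, R_est, C_hom = cv2.decomposeProjectionMatrix(P)[:3]
    print(K_est / K_est[2, 2])  # close to K

Running this once per view, with the same 3D points but each image's own pixel measurements, gives you the pose of each camera.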

There are complete calibration examples in OpenCV. Maybe you will get a better picture by reading the code.

Adrian Schneider
  • Hi, thank you for your response. I did see the chessboard code in OpenCV, but I don't want to use a reference object. What I understand from you is that I have to select arbitrary 3D coordinates for a point, or something approximate, so the process is manual and not 100% accurate. Did I get that right? I wanted to be able to calculate the 3D coordinates of a few points from the two images. – sasa Feb 08 '15 at 20:08