
stereoParameters takes two extrinsic parameters: RotationOfCamera2 and TranslationOfCamera2.

The problem is that the documentation is not very detailed about what RotationOfCamera2 really means; it only says: "Rotation of camera 2 relative to camera 1, specified as a 3-by-3 matrix."

What is the coordinate system in this case?

A rotation matrix can be specified in any coordinate system.

What exactly does "the coordinate system of camera 1" mean? What are its x, y, z axes?

In other words, if I calculate the Essential Matrix, how can I get the corresponding RotationOfCamera2 and TranslationOfCamera2 from it?


2 Answers


RotationOfCamera2 and TranslationOfCamera2 describe the transformation from camera 1's coordinates into camera 2's coordinates. A camera's coordinate system has its origin at the camera's optical center. Its X- and Y-axes are in the image plane, and its Z-axis points out along the optical axis.

[Figure: a camera's coordinate system, with the origin at the optical center, the X- and Y-axes in the image plane, and the Z-axis pointing out along the optical axis]

Equivalently, the extrinsics of camera 1 are identity rotation and zero translation, while the extrinsics of camera 2 are RotationOfCamera2 and TranslationOfCamera2.
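As a concrete illustration, here is a minimal sketch in MATLAB of what that transformation looks like. The numbers are placeholders, and it assumes the toolbox's row-vector-times-matrix convention discussed below; only the form of the expression matters:

    % A point expressed in camera 1's coordinate system (1-by-3 row vector).
    pCam1 = [0.1 -0.2 5.0];

    % Extrinsics of camera 2 relative to camera 1; with a real calibration
    % these would come from a stereoParameters object:
    %   R = stereoParams.RotationOfCamera2;      % 3-by-3
    %   t = stereoParams.TranslationOfCamera2;   % 1-by-3, in world units
    R = eye(3);       % placeholder values for illustration only
    t = [-60 0 0];    % e.g. camera 2 offset 60 units along camera 1's X-axis

    % MATLAB's toolboxes use the row-vector-times-matrix convention, so the
    % same point in camera 2's coordinate system is:
    pCam2 = pCam1 * R + t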

If you have the Essential matrix, you can decompose it into a rotation and a translation. Two things to keep in mind: first, the translation is recovered only up to scale, so t will be a unit vector; second, the rotation matrix will be the transpose of what you get from estimateCameraParameters, because of the difference in vector-matrix multiplication conventions.
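For completeness, here is a hedged sketch of the standard SVD-based decomposition (the textbook recipe from Hartley & Zisserman), built around a made-up ground-truth motion so it runs on its own. It yields four candidate poses; a cheirality test is still needed to pick the right one:

    % Build a ground-truth essential matrix E = [t]_x * R from known motion.
    Rz    = @(a) [cos(a) -sin(a) 0; sin(a) cos(a) 0; 0 0 1];
    skew  = @(v) [0 -v(3) v(2); v(3) 0 -v(1); -v(2) v(1) 0];
    Rtrue = Rz(0.1);
    ttrue = [1; 0.2; 0] / norm([1; 0.2; 0]);
    E = skew(ttrue) * Rtrue;

    % Standard SVD decomposition of the essential matrix.
    [U, ~, V] = svd(E);
    W = [0 -1 0; 1 0 0; 0 0 1];

    % Two candidate rotations; force det = +1 so they are proper rotations.
    Ra = U * W  * V';  if det(Ra) < 0, Ra = -Ra; end
    Rb = U * W' * V';  if det(Rb) < 0, Rb = -Rb; end

    % Translation direction, recovered only up to scale and sign.
    t = U(:, 3);

    % Four candidates: (Ra, t), (Ra, -t), (Rb, t), (Rb, -t). In practice,
    % triangulate a point with each pair and keep the one that puts the
    % point in front of both cameras (positive depth). Here, one of Ra/Rb
    % matches Rtrue, and t matches ttrue up to sign.

    % To compare against estimateCameraParameters output, transpose the
    % recovered rotation (row-vector vs. column-vector convention).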

Out of curiosity, what is it that you are trying to accomplish? Are you working with a single moving camera? Otherwise, why not use the Stereo Camera Calibrator app to calibrate your cameras, and get rotation and translation for free?

Dima
  • Thank you Dima for the answer. I am following Zisserman's approach to reconstruct a scene from two pictures. The problem with the Stereo Camera Calibrator app is that it only works for stereo cameras; it needs several image pairs for calibration. But I only have one image pair, because the relative camera positions are not fixed. The main goal would be to take two pictures with the same camera from different angles and then get a dense point cloud out of it. – jhegedus Mar 17 '15 at 09:14
  • @Dima, "Second, the rotation matrix will be a transpose of what you get from estimateCameraParameters, because of the difference in the vector-matrix multiplication conventions." Can you clarify this statement? – Pedro77 Jun 07 '16 at 19:25
  • It is just a matter of convention. In most textbooks, it is matrix times column vector. In Matlab, in the Image Processing Toolbox and the Computer Vision System Toolbox, the convention is row vector times matrix. That means the matrix must be transposed, as per the rules of matrix multiplication. – Dima Jun 09 '16 at 14:59
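To make that convention difference concrete, here is a tiny numeric check (a sketch, no toolbox required): rotating a point with the textbook column-vector convention and with the row-vector convention gives the same result once the matrix is transposed.

    % Textbook convention: column vector, premultiplied by R.
    R = [0 -1 0; 1 0 0; 0 0 1];   % 90-degree rotation about Z
    p = [1; 2; 3];                % column vector
    q_textbook = R * p;

    % MATLAB toolbox convention: row vector, postmultiplied by the matrix.
    % The same rotation must therefore be stored transposed.
    q_matlab = p' * R';

    % The two agree: q_matlab equals q_textbook'.
    isequal(q_matlab, q_textbook')   % true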

Suppose that, relative to the first checkerboard (or any common world reference), the left camera's rotation is R1 and its translation is T1, and the right camera's rotation is R2 and its translation is T2. Then you can calculate the stereo extrinsics as follows:

RotationOfCamera2 = R2*R1'; TranslationOfCamera2 = T2 - RotationOfCamera2*T1;

But please note that this calculation uses just one common checkerboard reference. Inside MATLAB, these two parameters are computed from all given pairs of checkerboard images, and the median values are taken as an initial guess. Later these parameters are refined by nonlinear optimization, so after the median calculation they might differ slightly. But if you have just one reference transformation for the two cameras, you should use the formula above. Note that, as Dima said, MATLAB's rotation matrix is the transpose of the usual convention, so I wrote the formula the way the literature does, not in MATLAB's style.
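Here is a small sketch that checks this formula numerically, using made-up extrinsics in the textbook (column-vector) convention the answer refers to, where a world point maps into a camera as xCam = R*xWorld + T:

    % Simple proper rotations built by hand (about Z and about X).
    Rz = @(a) [cos(a) -sin(a) 0; sin(a) cos(a) 0; 0 0 1];
    Rx = @(a) [1 0 0; 0 cos(a) -sin(a); 0 sin(a) cos(a)];

    % Made-up pose of each camera relative to the same checkerboard.
    R1 = Rz(0.1);            T1 = [0; 0; 500];    % left camera
    R2 = Rz(-0.05)*Rx(0.2);  T2 = [-60; 0; 510];  % right camera

    % Relative pose of camera 2 with respect to camera 1 (the formula above).
    R12 = R2 * R1';
    T12 = T2 - R12 * T1;

    % Check: map a world point into each camera and compare.
    xW    = [10; 20; 30];
    xCam1 = R1 * xW + T1;
    xCam2 = R2 * xW + T2;
    norm((R12 * xCam1 + T12) - xCam2)   % ~0 up to floating-point error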

M. Balcilar