I'm trying to compute the extrinsic matrix from the pose (position and orientation) of the camera given in world coordinates. I used the following formula for the extrinsic matrix:

T = [R, -R*t; 0 0 0 1]   (4x4 in homogeneous form; the top 3x4 block [R | -R*t] is the extrinsic matrix)
The rotation (theta2) of the camera is about the camera's Y-axis, i.e. yaw about the camera axis. The translation vector is [x, y, z] in meters.
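Concretely, this is how I read that formula in MATLAB, with made-up example values. I'm assuming R is the world-to-camera rotation and t is the camera position in world coordinates; whether that reading is correct is part of what I'm asking.

yaw = deg2rad(30);                    % example yaw about the Y-axis (placeholder value)
R   = [ cos(yaw), 0, sin(yaw);
        0,        1, 0;
       -sin(yaw), 0, cos(yaw)];
t   = [1; 0; 2];                      % example camera position in world coordinates (meters)
T   = [R, -R*t; 0, 0, 0, 1];          % 4x4 homogeneous form
E   = T(1:3, :);                      % 3x4 extrinsic block [R | -R*t]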
% Camera 1: rotation about the Y-axis by theta1, no translation
theta1 = deg2rad(theta1);
RW1 = [ cos(theta1), 0, sin(theta1);
        0,           1, 0;
       -sin(theta1), 0, cos(theta1)];
tW1 = [0; 0; 0];
TW1 = [RW1 tW1; 0 0 0 1];        % 4x4 homogeneous transform

% Camera 2: rotation about the Y-axis by theta2, translation [x; y; z] in meters
theta2 = deg2rad(theta2);
R12 = [ cos(theta2), 0, sin(theta2);
        0,           1, 0;
       -sin(theta2), 0, cos(theta2)];
t12 = [x; y; z];
T12 = [R12 t12; 0 0 0 1];        % 4x4 homogeneous transform
P1 = K * TW1(1:3, :);            % 3x4 projection matrix for camera 1 (K is 3x3, so I use the top 3x4 block)
P2 = K * T12(1:3, :);            % 3x4 projection matrix for camera 2
K is the 3x3 camera intrinsic matrix.
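For reference, the numbers below are placeholders, just to make the shapes concrete and show how I intend to use P1 and P2:

% Placeholder intrinsics (made-up values, in pixels)
fx = 800; fy = 800; cx = 320; cy = 240;
K  = [fx,  0, cx;
       0, fy, cy;
       0,  0,  1];

% Quick shape check: a 3x4 P projects a homogeneous world point to pixels.
Xw = [0.5; 0.2; 3.0; 1.0];       % example 3D point in homogeneous world coordinates
x1 = P1 * Xw;                    % 3x1 homogeneous image point
x1 = x1(1:2) / x1(3);            % pixel coordinates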
Is this the right way to calculate the extrinsic matrix? Am I missing any transformations between the world and the camera frame?
I'm trying to implement triangulation from these slides: https://www.cs.cmu.edu/~16385/s17/Slides/11.4_Triangulation.pdf, and for the camera matrix I followed https://www.cs.cmu.edu/~16385/s17/Slides/11.1_Camera_matrix.pdf.
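For context, this is roughly the linear (DLT) triangulation step from the first set of slides that I'm working towards; the function name is just a placeholder, and x1, x2 are corresponding pixel coordinates in the two images:

% Sketch of linear (DLT) triangulation, per my reading of the slides.
% P1, P2: 3x4 projection matrices; x1, x2: 2x1 pixel coordinates.
function Xw = triangulate_dlt(P1, P2, x1, x2)
    A = [ x1(1) * P1(3,:) - P1(1,:);
          x1(2) * P1(3,:) - P1(2,:);
          x2(1) * P2(3,:) - P2(1,:);
          x2(2) * P2(3,:) - P2(2,:) ];
    [~, ~, V] = svd(A);          % smallest singular vector approximates the null space of A
    Xh = V(:, end);              % homogeneous 3D point (4x1)
    Xw = Xh(1:3) / Xh(4);        % inhomogeneous 3D point
end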