I have two Intel RealSense cameras: CameraLeft and CameraRight. I'm trying to find the rotation matrix and translation vector between these two cameras, so that I can transform coordinates from one camera's frame to the other's.
Here is what I have tried:
1. I use cv2.calibrateCamera() to get the intrinsic matrices Mleft and Mright.
2. I use cv2.stereoCalibrate() to get the rotation matrix R and translation vector T (a sketch of this call follows after this list).
3. I take two pictures at the same time: Pleft and Pright.
4. (uleft, vleft) and (uright, vright) are the pixel coordinates of the same object in the two pictures. Using the intrinsic matrices Mleft and Mright, the depths Dleft and Dright, and the pixel coordinates (uleft, vleft) and (uright, vright), I back-project to the camera coordinates (Xleft, Yleft, Zleft) and (Xright, Yright, Zright).
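For step 2, this is roughly how I obtain R and T. This is only a sketch: objectPoints, imgPointsLeft, imgPointsRight, distLeft, distRight and imageSize are placeholder names for my chessboard correspondences, distortion coefficients and image size, with the intrinsics fixed to the values from step 1:

# objectPoints: list of (N, 3) chessboard corner positions in board coordinates
# imgPointsLeft / imgPointsRight: matching (N, 2) detections in each image
retval, _, _, _, _, R, T, E, F = cv2.stereoCalibrate(
    objectPoints, imgPointsLeft, imgPointsRight,
    cameraMatLeft, distLeft, cameraMatRight, distRight,
    imageSize, flags=cv2.CALIB_FIX_INTRINSIC)
# with the left camera passed first, OpenCV documents the convention
# X_right = R @ X_left + T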
Here is the problem:
My assumption is that R * (Xleft, Yleft, Zleft)^T + T = (Xright, Yright, Zright)^T, but the numbers I get do not satisfy this. The relation I expect is sketched below.
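For clarity, here is the relation as a minimal runnable sketch with made-up numbers (the R, T and point values are placeholders, not my calibration output):

import numpy as np

# placeholder extrinsics: identity rotation, 100 mm baseline along x
R = np.eye(3)
T = np.array([[100.0], [0.0], [0.0]])
# a made-up point in the left camera's frame (same units as T)
X_left = np.array([[50.0], [20.0], [800.0]])
# expected position of the same point in the right camera's frame
X_right = np.matmul(R, X_left) + T
print(X_right)   # -> [[150.], [20.], [800.]]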
Here is my code:
import numpy as np
import h5py
import cv2
# intrinsic matrices from cv2.calibrateCamera()
cameraMatLeft = np.array([
    [1286.540148375528, 0.0, 929.5596785987797],
    [0.0, 1272.8889372475517, 586.0340979684613],
    [0.0, 0.0, 1.0]
])
cameraMatRight = np.array([
    [1294.8590822926074, 0.0, 976.7466553094133],
    [0.0, 1283.5006893318534, 520.6437123281569],
    [0.0, 0.0, 1.0]
])
# extrinsics from cv2.stereoCalibrate()
R = np.array([
    [0.736828762715704, 0.1290536263233139, 0.6636479005976342],
    [-0.09833992557804119, 0.9916309806151367, -0.08364961040894248],
    [-0.6688891040166153, -0.0036276462155228617, 0.7433533525254223]
])
T = np.array([
    [-190.9527690494799],
    [11.868938400892926],
    [71.571368261639625]
])
# pixel coordinates (u, v) of the same object in each image
rightPoint = (1107, 568)
leftPoint = (1840, 697)
fLeft = h5py.File('C:\\SMIIP\\camera\\depth_pics\\leftDepth0.h5', 'r')
fRight = h5py.File('C:\\SMIIP\\camera\\depth_pics\\rightDepth0.h5', 'r')
# depth at each pixel (row index first, then column)
d_left = fLeft['depth'][leftPoint[1], leftPoint[0]]
d_right = fRight['depth'][rightPoint[1], rightPoint[0]]
#print(d_left)
#print(d_right)
leftInv = np.linalg.inv(cameraMatLeft)
rightInv = np.linalg.inv(cameraMatRight)
# homogeneous pixel coordinates scaled by depth: d * (u, v, 1)
leftPixPoint = np.array([d_left * leftPoint[0], d_left * leftPoint[1], d_left])
rightPixPoint = np.array([d_right * rightPoint[0], d_right * rightPoint[1], d_right])
# back-project to camera coordinates: X = d * K^{-1} @ (u, v, 1)
leftResult = np.matmul(leftInv, leftPixPoint).reshape(3, 1)
rightResult = np.matmul(rightInv, rightPixPoint).reshape(3, 1)
# apply the stereo extrinsics to each camera-frame point
leftRotated = np.matmul(R, leftResult) + T
rightRotated = np.matmul(R, rightResult) + T
print(leftResult, rightResult)
#print(leftRotated, rightRotated)
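As a direct check of the assumption (continuing from the code above), the residual below should be near zero, up to calibration and depth noise, if the convention holds, but it is not:

# expected: R @ X_left + T should reproduce the right camera's coordinates
residual = np.matmul(R, leftResult) + T - rightResult
print(np.linalg.norm(residual))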
Am I doing something wrong? Please help me out here. Any help would be much appreciated.