I have an RGB image and 8-bit depth data of the same scene from two different cameras, and I need to generate a point cloud in 3D space.
I used this paper to understand how to do it.
As shown in Eq. 1, we can find the non-homogeneous coordinates of the world point corresponding to a pixel in the image as follows,
Eq. 5 shows how to find lambda_1 as follows,
Then, to convert the 8-bit depth values in the depth image to real depth values, I used the following Eq. 4 from the same paper,
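For reference, here is a minimal sketch of the pipeline I described, back-projecting each pixel of the depth map into world coordinates. The intrinsics `K`, extrinsics `R`, `t`, and the linear 8-bit-to-metric depth mapping (`z_near`, `z_far`) are all placeholder assumptions, not values from the paper; your Eq. 4 may define a different mapping.

```python
import numpy as np

# Hypothetical calibration -- replace with your cameras' actual values.
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])   # intrinsic matrix
R = np.eye(3)                           # rotation, world -> camera
t = np.zeros(3)                         # translation, world -> camera

# Assumed linear 8-bit -> metric mapping (your Eq. 4 may differ).
z_near, z_far = 0.5, 5.0

def depth_from_8bit(d8):
    """Map 8-bit depth codes [0, 255] to metric depth [z_near, z_far]."""
    return z_near + (d8.astype(np.float64) / 255.0) * (z_far - z_near)

def backproject(depth8):
    """Back-project every pixel of an 8-bit depth map to world coordinates."""
    h, w = depth8.shape
    z = depth_from_8bit(depth8)                          # metric depth per pixel
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).T  # homogeneous pixels
    # Camera-frame points: X_c = z * K^-1 [u v 1]^T
    X_cam = np.linalg.inv(K) @ pix * z.reshape(1, -1)
    # Camera -> world: X_w = R^T (X_c - t)
    X_world = R.T @ (X_cam - t.reshape(3, 1))
    return X_world.T                                     # (H*W, 3) point cloud

pts = backproject(np.full((4, 4), 128, dtype=np.uint8))
```

Running the same function on each camera's depth map with that camera's own `K`, `R`, `t` should, in principle, yield point clouds expressed in one shared world frame.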
So, I expected the two point clouds generated from the two cameras to align with each other, since I used the projection matrices, but the point clouds do not coincide and the result looks wrong. Could someone kindly point out where the problem is?