Similar questions have been answered many times. However, they generally map depth coordinates to RGB coordinates by following these steps:
- apply the inverse depth intrinsic matrix to the depth coordinates.
- rotate and translate the resulting 3D coordinates using the rotation matrix R and translation vector T that map 3D depth coordinates to 3D RGB coordinates.
- apply the RGB intrinsic matrix to obtain the image coordinates (a sketch of these steps is shown after this list).
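For reference, here is a minimal numpy sketch of those three steps. The intrinsics `K_depth`, `K_rgb` and the extrinsics `R`, `T` are illustrative placeholders, not real calibration values:

```python
import numpy as np

# Illustrative calibration values -- replace with your own camera parameters.
K_depth = np.array([[580.0, 0.0, 320.0],
                    [0.0, 580.0, 240.0],
                    [0.0, 0.0, 1.0]])
K_rgb = np.array([[525.0, 0.0, 320.0],
                  [0.0, 525.0, 240.0],
                  [0.0, 0.0, 1.0]])
R = np.eye(3)                    # rotation: depth frame -> RGB frame
T = np.array([0.025, 0.0, 0.0])  # translation (e.g. a ~2.5 cm baseline)

def depth_to_rgb(u, v, z):
    """Map a depth pixel (u, v) with metric depth z to RGB pixel coordinates."""
    # Back-project: inverse depth intrinsics, scaled by the depth value z.
    p_depth = z * (np.linalg.inv(K_depth) @ np.array([u, v, 1.0]))
    # Rigid transform into the RGB camera frame.
    p_rgb = R @ p_depth + T
    # Project with the RGB intrinsics and dehomogenize.
    uvw = K_rgb @ p_rgb
    return uvw[:2] / uvw[2]

print(depth_to_rgb(320, 240, 1.5))
```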
However, I want to do the reverse: from RGB coordinates, obtain the depth-image coordinates, so that I can then sample an interpolated value from the depth map at those coordinates.
The problem is that I don't know how to define the z coordinate in the RGB image to make everything work.
The process should be:
- obtain 3D RGB coordinates by applying the inverse of the RGB camera's intrinsic matrix. How should I set the z coordinate? Should I use an estimated value? Set all the z coordinates to one?
- rotate and translate the 3D RGB coordinates into 3D depth coordinates (the inverse of the transform above).
- apply the depth intrinsic matrix (see the sketch after this list).
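Here is the same kind of sketch for this reverse direction, where `z_rgb` is exactly the unknown I am asking about: the back-projected point scales linearly with `z_rgb`, so setting it to 1 only gives a point somewhere on the pixel's viewing ray, and the final depth-image coordinates change with the value chosen:

```python
import numpy as np

def rgb_to_depth(u, v, z_rgb, K_rgb, K_depth, R, T):
    """Map an RGB pixel (u, v) to depth-image coordinates, *given* its depth.
    z_rgb is the unknown: with z_rgb = 1 this only yields a point on the
    viewing ray, not the actual 3D point."""
    # Back-project with the inverse RGB intrinsics, scaled by z_rgb.
    p_rgb = z_rgb * (np.linalg.inv(K_rgb) @ np.array([u, v, 1.0]))
    # Inverse rigid transform: RGB frame -> depth frame (R, T as above).
    p_depth = R.T @ (p_rgb - T)
    # Project with the depth intrinsics and dehomogenize.
    uvw = K_depth @ p_depth
    return uvw[:2] / uvw[2]

# e.g. rgb_to_depth(320, 240, 1.0, K_rgb, K_depth, R, T)
# using the placeholder matrices defined above
```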
If this process cannot be done, how else can I map RGB coordinates to depth coordinates instead of the other way around?
Thank you!