
Similar questions have been answered many times. However, they generally map depth coordinates to RGB coordinates by following these steps (a minimal sketch follows the list):

  1. apply the inverse depth intrinsic matrix to the depth coordinates.
  2. rotate and translate the resulting 3D coordinates using the rotation matrix R and translation vector T that map 3D depth coordinates to 3D RGB coordinates.
  3. apply the RGB intrinsic matrix to obtain the image coordinates.
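
Here is a minimal NumPy sketch of that forward mapping as I understand it; K_depth, K_rgb, R and T below are placeholder calibration values, not my real ones:

```python
import numpy as np

# Placeholder calibration values -- in practice these come from calibration.
K_depth = np.array([[580.0,   0.0, 320.0],
                    [  0.0, 580.0, 240.0],
                    [  0.0,   0.0,   1.0]])   # depth camera intrinsics
K_rgb   = np.array([[520.0,   0.0, 320.0],
                    [  0.0, 520.0, 240.0],
                    [  0.0,   0.0,   1.0]])   # RGB camera intrinsics
R = np.eye(3)                      # rotation: depth frame -> RGB frame
T = np.array([0.025, 0.0, 0.0])    # translation: depth frame -> RGB frame

def depth_to_rgb(u_d, v_d, z):
    """Map a depth pixel (u_d, v_d) with depth value z to RGB pixel coordinates."""
    # 1. back-project with the inverse depth intrinsics, scaled by the depth z
    p_depth = z * (np.linalg.inv(K_depth) @ np.array([u_d, v_d, 1.0]))
    # 2. rotate and translate the 3D point into the RGB camera frame
    p_rgb = R @ p_depth + T
    # 3. project with the RGB intrinsics and dehomogenise
    uvw = K_rgb @ p_rgb
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```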

However, I want to do the reverse: from RGB coordinates, obtain the depth coordinates, so that I can then sample an interpolated value from the depth map at those coordinates.

The problem is that I don't know how to define the z coordinate in the RGB image to make everything work.

The process should be (see the sketch after this list):

  1. obtain 3D RGB coordinates by applying the RGB camera's inverse intrinsic matrix. How can I set the z coordinate? Should I define an estimated value, or set all z coordinates to one?
  2. rotate and translate the 3D RGB coordinates into the depth camera's 3D coordinates.
  3. apply the depth intrinsic matrix.
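
For reference, this is a sketch of the reverse process I have in mind, reusing the placeholder K_depth, K_rgb, R and T defined in the sketch above; z_rgb is exactly the value I don't know how to choose:

```python
def rgb_to_depth(u_rgb, v_rgb, z_rgb):
    """Map an RGB pixel (u_rgb, v_rgb) to depth pixel coordinates.

    z_rgb is the (unknown) depth of the pixel in the RGB camera frame;
    setting it to 1 only fixes a ray, not a unique 3D point.
    Uses K_depth, K_rgb, R, T from the forward sketch above.
    """
    # 1. back-project with the inverse RGB intrinsics, scaled by the assumed z
    p_rgb = z_rgb * (np.linalg.inv(K_rgb) @ np.array([u_rgb, v_rgb, 1.0]))
    # 2. apply the inverse rigid transform to reach the depth camera frame
    p_depth = R.T @ (p_rgb - T)
    # 3. project with the depth intrinsics and dehomogenise
    uvw = K_depth @ p_depth
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```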

If this process cannot be done, how can I map RGB coordinates to depth coordinates instead of the other way around?

Thank you!
