
I am looking for the transformation matrix to convert color space to camera space.

I know that the point conversion can be done using CoordinateMapper, but I am not using the official Kinect v2 APIs.

I would really appreciate it if someone could share the transformation matrix that converts color space to camera space.

As always, thank you very much.

ravi

1 Answer


Important: the raw Kinect RGB image is distorted. Remove the distortion first.
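If you do not want to depend on OpenCV for the undistortion step, the usual plumb_bob (Brown-Conrady) model can be applied by hand. This is only a sketch: the coefficients k1, k2, p1, p2, k3 are not given in this answer and must come from your own calibration.

```python
def distort_point(x, y, k1, k2, p1, p2, k3):
    """Apply the plumb_bob (Brown-Conrady) distortion model to a
    normalized image point, where x = (u - cx)/fx and y = (v - cy)/fy.
    The coefficients come from camera calibration (hypothetical values
    here; they are not part of this answer)."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d
```

Note that this maps an ideal point to its distorted position; undistorting an image means inverting this mapping, which is usually done iteratively or delegated to a library such as OpenCV.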

Short answer

The "transformation matrix" you are searching is called projection matrix.

rgb.cx:959.5
rgb.cy:539.5
rgb.fx:1081.37
rgb.fy:1081.37

Long answer

First, understand how the color image is generated by the Kinect.

X, Y, Z : coordinates of the given point in a coordinate space where the Kinect sensor is considered the origin, a.k.a. camera space. Note that camera space is 3D.

u, v : Coordinates of the corresponding color pixel in color space. Note that color space is 2D.

fx , fy : Focal length

cx, cy : principal points (you can consider the principal points of the kinect RGB camera as the center of image)

(R|t) : Extrinsic camera matrix. For the Kinect you can take this as (I|0), where I is the identity matrix.

s : scalar value; you can set it to 1.
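Put together, the symbols above form the standard pinhole projection equation (written out here for clarity):

$$
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \underbrace{\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}}_{K}
\,(R \mid t)\,
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}
$$

With (R|t) = (I|0) this reduces to u = f_x·X/Z + c_x and v = f_y·Y/Z + c_y, so going from color space back to camera space requires knowing Z (the depth) for that pixel.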

To get the most accurate values for fx, fy, cx, and cy, you need to calibrate the RGB camera of your Kinect using a chessboard.

The fx, fy, cx, cy values above are from my own calibration of my Kinect. They differ from one Kinect to another by a very small margin.
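As a sketch, assuming you already know the depth Z of a color pixel (e.g. from a registered depth frame), the projection and its inverse look like this in Python. The intrinsics are the calibration values from this answer; yours will differ slightly.

```python
# Intrinsics from this answer's Kinect calibration; calibrate your
# own device for the most accurate values.
FX, FY = 1081.37, 1081.37
CX, CY = 959.5, 539.5

def camera_to_color(X, Y, Z):
    """Project a camera-space point (meters) to a color-space pixel."""
    u = FX * X / Z + CX
    v = FY * Y / Z + CY
    return u, v

def color_to_camera(u, v, Z):
    """Back-project a color pixel plus its known depth Z to camera space."""
    X = (u - CX) * Z / FX
    Y = (v - CY) * Z / FY
    return X, Y, Z
```

The two functions are exact inverses of each other for a fixed Z, so a round trip returns the original pixel.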

More info and implementation

All Kinect camera matrix

Distort

Registration

I implemented the registration process in CUDA, since a CPU is not fast enough to process that much data (1920 x 1080 matrix calculations, 30 times per second) in real time.
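For reference, here is a minimal NumPy sketch of the per-frame workload: back-projecting every color pixel of a registered depth map to camera space in one vectorized pass. The function name and layout are my own, not from the linked code.

```python
import numpy as np

# Calibration values from this answer; yours will differ slightly.
FX, FY, CX, CY = 1081.37, 1081.37, 959.5, 539.5

def frame_to_camera(depth):
    """Back-project a registered H x W depth frame (meters) into an
    H x W x 3 camera-space point cloud in one vectorized pass."""
    v, u = np.indices(depth.shape)      # per-pixel row (v) and column (u)
    X = (u - CX) * depth / FX
    Y = (v - CY) * depth / FY
    return np.dstack((X, Y, depth))
```

At 1920 x 1080 pixels and 30 fps this is roughly 62 million pixels per second, each needing several multiply-adds, which is why moving the work to the GPU pays off.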

Shanil Fernando
  • Thanks and sorry for delayed response. I have one more question. The [link](https://github.com/shanilfernando/VRInteraction/tree/master/calibration) contains 4 calibration files i.e., _camera_param.yaml_, _depth_calibration.yaml_, _pose_calibration.yaml_ and _rgb_calibration.yaml_. At present, I am using AR marker library in ROS, which requires [sensor_msgs/CameraInfo](http://docs.ros.org/api/sensor_msgs/html/msg/CameraInfo.html) along with RGB image acquired from Kinect in order to calculate 3D position of AR marker. – ravi Jan 26 '18 at 09:58
  • In `sensor_msgs/CameraInfo` the distortion model is `plumb_bob`. Intrinsic camera matrix `K` consists of `fx`, `fy`, `cx` and `cy`, which you have already shared. So no worries! But how to get other parameters such as distortion parameters `D` consisting of 5 parameters are `k1`, `k2`, `t1`, `t2`, `k3` and others? – ravi Jan 26 '18 at 10:11
  • Thanks for the answer Shanil. Can you explain please how to take inverse of 3x4 matrix? I just read about ["extrinsic matrices"](http://ksimek.github.io/2012/08/22/extrinsic/) but I'm not sure how to deal with it programmatically (python) – Alaa M. Aug 21 '19 at 19:27