
I am new to Kinect. I have read online that the joint calibration of a Kinect's depth camera and color camera is already done at the factory. So what is the point of calibrating it a second time? On the other hand, as we all know, there are already many applications, such as motion-sensing games, that use the Kinect. How do these applications handle calibration? It seems unlikely that game players run all kinds of calibration algorithms themselves to get it done. Thanks!

supernova

2 Answers


The Kinect does indeed have an internal calibration. In your software you can therefore use the coordinateMapper functions to go from xyz world coordinates to uv color-image coordinates.

Now imagine the following: you send your xyz data and a color image to someone else (or to a different computer for later processing). Since you no longer have access to the Kinect, you cannot ask the coordinateMapper how the two datasets relate.

This is why some people do their own Kinect calibration: the calibration parameters are then available and can be shipped along with the xyz and uv data.
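To make this concrete, here is a minimal sketch of what "shipping the parameters" buys you: with the color camera's intrinsics in hand, you can project a 3D point into pixel coordinates yourself, offline, without the coordinateMapper. The function name and the intrinsic values below are made up for illustration (real values would come from your own calibration), and the sketch assumes the point is already expressed in the color camera's coordinate frame (a full pipeline would first apply the depth-to-color extrinsic transform).

```python
# Sketch: pinhole projection from camera-space meters to pixel coordinates.
# All names and numbers are hypothetical; real intrinsics come from your
# own calibration, not from this example.

def project_to_color(x, y, z, fx, fy, cx, cy):
    """Project a camera-space point (meters) to (u, v) pixel coordinates."""
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

# Made-up intrinsics, roughly plausible for a 1920x1080 color camera.
FX, FY = 1050.0, 1050.0   # focal lengths in pixels
CX, CY = 960.0, 540.0     # principal point in pixels

u, v = project_to_color(0.1, 0.0, 1.0, FX, FY, CX, CY)
print(u, v)  # 1065.0 540.0
```

Because this uses only the stored numbers, the same mapping can be reproduced on any machine that receives the data plus the calibration file.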

That said, if you don't need this, stick to the coordinateMapper! Calibrating the Kinect yourself is not that easy to get right.

Deepfreeze
  • Thanks! But I am still a little confused. From what you say, it seems the methods for calibrating the depth and color cameras are already mature. So that is my question: why are there still so many papers researching calibration? Thanks again! – supernova Jul 15 '15 at 11:00

Supernova, user @Deepfreeze gave a really good explanation. You should stick to the coordinateMapper for calibration.

One reason this matters on the Kinect v2 is that the depth and color images do not have the same resolution! That's why you really need to save this information to a file. Luckily, all these features are now integrated in the Kinect SDK and can be easily accessed. You can also find more information on the Kinect MSDN page.
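As a small illustration of "saving this information to a file", the snippet below writes the two cameras' resolutions and intrinsics to a JSON file and reads them back. The Kinect v2 resolutions (512x424 depth, 1920x1080 color) are real; the focal-length and principal-point numbers and the file name are hypothetical placeholders.

```python
import json

# Hypothetical calibration record; the resolutions match the Kinect v2,
# the intrinsic values are illustrative placeholders only.
calibration = {
    "color": {"width": 1920, "height": 1080,
              "fx": 1050.0, "fy": 1050.0, "cx": 960.0, "cy": 540.0},
    "depth": {"width": 512, "height": 424,
              "fx": 365.0, "fy": 365.0, "cx": 256.0, "cy": 212.0},
}

# Write the parameters next to the recorded data...
with open("kinect_calibration.json", "w") as f:
    json.dump(calibration, f, indent=2)

# ...so any other machine can load them later, without the Kinect attached.
with open("kinect_calibration.json") as f:
    loaded = json.load(f)

print(loaded["depth"]["width"], loaded["color"]["width"])  # 512 1920
```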

16per9