
I'm trying to do RGB-D SLAM with a Kinect sensor.

The Kinect sensor has an RGB camera and a depth camera.

Where should I consider the center of my camera to be when recording a rosbag?

The center of the RGB camera? The center of the depth camera? Somewhere in between the two?

And since the two cameras of the Kinect are slightly apart, do they see different images?

이다훈

1 Answer


The Kinect 1 contains two cameras and a projector. The IR projector emits an IR pattern, which is sensed by the depth/IR camera and turned into an 11-bit raw depth output. There is also an RGB camera. Because the depth and color images come from two separate cameras at different positions, they are not aligned and do not perfectly overlap. However, it is possible to register the images of the two cameras using the calibration parameters of both.

Using the raw depth value at each pixel of the depth image, we can calculate its position in 3D space and reproject it into the image plane of the RGB camera. In this way we build up a registered depth image, in which each pixel is aligned with its counterpart in the RGB image.
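The reprojection step above can be sketched as follows. This is a minimal illustration, not the driver's actual implementation: the intrinsics, rotation, and baseline below are made-up placeholder values, and in practice they come from the factory or user calibration of both cameras.

```python
import numpy as np

# Hypothetical calibration values -- real ones come from camera calibration.
K_depth = np.array([[580.0, 0.0, 320.0],   # depth camera intrinsics
                    [0.0, 580.0, 240.0],
                    [0.0, 0.0, 1.0]])
K_rgb = np.array([[525.0, 0.0, 320.0],     # RGB camera intrinsics
                  [0.0, 525.0, 240.0],
                  [0.0, 0.0, 1.0]])
R = np.eye(3)                    # assumed rotation between the two cameras
t = np.array([0.025, 0.0, 0.0])  # assumed ~2.5 cm baseline along x

def register_depth(depth, K_depth, K_rgb, R, t):
    """Reproject a metric depth image into the RGB camera's image plane."""
    h, w = depth.shape
    registered = np.zeros_like(depth)
    v, u = np.indices((h, w))
    valid = depth > 0
    # Back-project each depth pixel to a 3D point in the depth camera frame.
    z = depth
    x = (u - K_depth[0, 2]) * z / K_depth[0, 0]
    y = (v - K_depth[1, 2]) * z / K_depth[1, 1]
    pts = np.stack([x, y, z], axis=-1)[valid]
    # Transform the points into the RGB camera frame and project them.
    pts_rgb = pts @ R.T + t
    u_rgb = np.round(K_rgb[0, 0] * pts_rgb[:, 0] / pts_rgb[:, 2]
                     + K_rgb[0, 2]).astype(int)
    v_rgb = np.round(K_rgb[1, 1] * pts_rgb[:, 1] / pts_rgb[:, 2]
                     + K_rgb[1, 2]).astype(int)
    # Keep only points that land inside the RGB image bounds.
    inside = (u_rgb >= 0) & (u_rgb < w) & (v_rgb >= 0) & (v_rgb < h)
    registered[v_rgb[inside], u_rgb[inside]] = pts_rgb[inside, 2]
    return registered

depth = np.zeros((480, 640))
depth[240, 320] = 1.0  # one point 1 m in front of the depth camera
reg = register_depth(depth, K_depth, K_rgb, R, t)
```

Note how the single test point shifts horizontally in the registered image: the 2.5 cm baseline moves it from column 320 to column 333 at 1 m depth, which is exactly the parallax that makes registration necessary.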

OpenNI, an open-source library, has a built-in registration capability that uses the Kinect 1 factory calibration. You can use dynamic_reconfigure to change the ROS OpenNI driver's settings at runtime.
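As a sketch, toggling registration at runtime might look like the following; the `/camera/driver` node name and the `depth_registration` parameter name depend on how your OpenNI driver is launched, so adjust them to your setup:

```shell
# Enable hardware depth registration at runtime (node and parameter
# names assumed; check "rosrun dynamic_reconfigure dynparam list" first).
rosrun dynamic_reconfigure dynparam set /camera/driver depth_registration True
```

With registration enabled, the depth topic is republished already aligned to the RGB frame, so downstream SLAM nodes can treat the two images as coming from one camera center.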

MIRMIX