I am trying to use the OSD dataset (link: http://www.acin.tuwien.ac.at/forschung/v4r/software-tools/osd/), which uses a Kinect v1 to generate the depth map images. I have read that the values in these depth maps should range from 0 to 2048 and that their unit is mm. However, when I read the depth map images from this dataset, I get values above 2048 (roughly in the range 0-5200; I am using OpenCV to read the images).
(attached: original image and depth image)
minimum disparity value: 0
maximum disparity value: 3475
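For reference, the minimum/maximum values above were obtained with a quick check along these lines (a minimal sketch; depth_map.png stands in for one of the dataset's depth map files):

import cv2
import numpy as np

# Read the depth map unchanged (flag -1 keeps the original 16-bit values)
depth = cv2.imread("depth_map.png", -1)
print(depth.dtype, depth.shape)
print("minimum value:", depth.min())
print("maximum value:", depth.max())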
Here is the code I used to normalize the image so that I can see a meaningful grayscale depth image:
import cv2
import numpy as np

# Read the 16-bit depth map unchanged and convert to float for scaling
img_depth = cv2.imread("depth_map.png", -1)
depth_array = np.array(img_depth, dtype=np.float32)
# Normalize to [0, 1], then write out an 8-bit grayscale visualization
frame = cv2.normalize(depth_array, None, 0, 1, cv2.NORM_MINMAX)
cv2.imwrite('capture_depth.png', (frame * 255).astype(np.uint8))
I have a set of questions:
Why do we have values above 2048?
Why do we have a black region at the sides of the image? (My guess was that the RGB camera and the laser/IR depth sensor are at different positions and angles, so a translation is required to map the two images onto each other properly. However, I am not sure, because I have tried different RGB-D datasets and these black regions at the edges appear in different places.)
What are the best ways to fill the black holes in the center of the image? (My understanding is that these holes occur where the depth could not be measured.) See the inpainting sketch after these questions for what I have tried so far.
I want to generate stereo pairs from the RGB and depth images; what is the best way to do this? (Currently I am using Triaxes StereoTracer to generate the stereo images.) A simple warping sketch of the idea I have in mind is included below.
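For the hole-filling question, this is the kind of approach I have been experimenting with: a minimal sketch using OpenCV's inpainting on the 8-bit visualization. The file names, the assumption that value 0 means "no measurement", and the inpaint radius are all my own assumptions:

import cv2
import numpy as np

depth = cv2.imread("depth_map.png", -1)

# Pixels with value 0 are treated as "no depth measured" (the black holes)
mask = (depth == 0).astype(np.uint8)

# cv2.inpaint needs an 8-bit image, so fill the normalized visualization
vis = cv2.normalize(depth.astype(np.float32), None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
filled = cv2.inpaint(vis, mask, inpaintRadius=5, flags=cv2.INPAINT_NS)
cv2.imwrite("capture_depth_filled.png", filled)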
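For the stereo-pair question, the idea I have in mind (as an alternative to StereoTracer) is simple depth-image-based rendering: shift each RGB pixel horizontally by an amount proportional to its inverse depth to synthesize a second view. A rough sketch, where image.png, the maximum shift of 20 pixels, and the naive handling of occlusions and holes are all assumptions on my part:

import cv2
import numpy as np

rgb = cv2.imread("image.png")            # the RGB image of the scene
depth = cv2.imread("depth_map.png", -1).astype(np.float32)

h, w = depth.shape
max_shift = 20                           # assumed maximum pixel shift

# Nearer pixels should shift more, so use inverse depth as a disparity proxy
disp = np.zeros_like(depth)
valid = depth > 0
disp[valid] = 1.0 / depth[valid]
shift = (disp / disp.max() * max_shift).astype(np.int32)

# Naive forward warp of the RGB image to a second (right-eye) view;
# holes and occlusions are not handled in this sketch
right = np.zeros_like(rgb)
xs = np.arange(w)
for y in range(h):
    new_x = np.clip(xs - shift[y], 0, w - 1)
    right[y, new_x] = rgb[y, xs]

cv2.imwrite("right_view.png", right)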