
I am going to set up an RGBD camera rig with a Panasonic LUMIX GH5 and an Azure Kinect, similar to a Depthkit Cinema setup.

Depthkit does not provide raw depth data, only OBJ sequence files. What I need is a depth buffer that is aligned with an RGB image.

So I started writing my own software for this. (I have mounted the Panasonic LUMIX GH5 and the Azure Kinect together with a SmallRig fixture.)

After obtaining the extrinsic parameters of the GH5 relative to the Azure Kinect RGB sensor with OpenCV's solvePnP function, how can I use them to align the GH5 colour image with the Azure Kinect depth image?
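
One possible approach (a minimal sketch, not Depthkit's or the Azure Kinect SDK's method, and with all function and variable names made up for illustration): treat this as a classic depth reprojection. Back-project each Kinect depth pixel to a 3D point using the depth camera intrinsics, transform the points into the GH5 frame with the composed extrinsic, and project them with the GH5 intrinsics, z-buffering collisions:

```python
import numpy as np

def reproject_depth_to_external(depth, K_depth, K_ext, T_depth_to_ext, ext_size):
    """Reproject a Kinect depth image into an external (GH5) camera view.

    depth: HxW depth in meters (0 = invalid); K_depth, K_ext: 3x3 intrinsics;
    T_depth_to_ext: 4x4 rigid transform (depth camera frame -> external frame);
    ext_size: (H_ext, W_ext) of the external image.
    Returns an (H_ext, W_ext) depth buffer aligned with the external image.
    """
    h, w = depth.shape
    # Back-project every depth pixel to a 3D point in the depth camera frame.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - K_depth[0, 2]) * z / K_depth[0, 0]
    y = (v - K_depth[1, 2]) * z / K_depth[1, 1]
    pts = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)

    # Transform into the external camera frame and project with its intrinsics.
    pts_ext = (T_depth_to_ext @ pts.T).T
    z_ext = pts_ext[:, 2]
    valid = z_ext > 1e-6
    u_ext = np.zeros_like(z_ext, dtype=int)
    v_ext = np.zeros_like(z_ext, dtype=int)
    u_ext[valid] = np.round(K_ext[0, 0] * pts_ext[valid, 0] / z_ext[valid]
                            + K_ext[0, 2]).astype(int)
    v_ext[valid] = np.round(K_ext[1, 1] * pts_ext[valid, 1] / z_ext[valid]
                            + K_ext[1, 2]).astype(int)

    # Z-buffer the points into the external image plane (keep nearest).
    H_ext, W_ext = ext_size
    out = np.full((H_ext, W_ext), np.inf)
    inside = valid & (u_ext >= 0) & (u_ext < W_ext) & (v_ext >= 0) & (v_ext < H_ext)
    for ue, ve, ze in zip(u_ext[inside], v_ext[inside], z_ext[inside]):
        if ze < out[ve, ue]:
            out[ve, ue] = ze
    out[np.isinf(out)] = 0.0
    return out
```

The resulting buffer is sparse where the GH5 resolution exceeds the depth resolution, so in practice you would follow this with hole filling or splatting; the SDK's transformation functions handle that internally for the Kinect's own color camera.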

Or should I take a different approach to accomplish this?

I can't find any ideas or resources on this issue.

In the Azure Kinect documentation, I found the k4a_transformation_depth_image_to_color_camera_custom function in the Azure Kinect SDK.

Is this method useful for my case? If so, how can I get the k4a_transformation_t value for its parameter?

horristic
  • Depth to Color mapping is the recommended method. There is a sample for .NET (C#) here: https://github.com/microsoft/Azure-Kinect-Samples/tree/master/build2019/csharp/2%20-%20TransformDepthToColor I am not familiar with Depthkit, but I am working with the Azure Kinect and am using similar functionality to "align" the images. I am not sure about alignment with an *external source*, though: this function, to my best understanding, aligns the Kinect's depth image to the Kinect's RGB image. Perhaps you can then use OpenCV to reconcile the Kinect's RGB image with your external camera's. – CoolBots Oct 30 '20 at 23:46
  • Thank you for the comment. That would be the C# version of the depth-to-color transformation. To use the function, we need the transformation information, which I think I can obtain from OpenCV camera pose estimation. I suppose I can get my camera pose with respect to the Kinect RGB camera, but I think what I need is the pose with respect to the Kinect depth camera, and I am not sure how to get that. – horristic Nov 03 '20 at 18:13
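
On the open question in the comments (pose with respect to the depth camera rather than the RGB camera): since the Kinect's factory calibration provides the rigid transform between its depth and RGB cameras, the missing depth-to-GH5 pose is just a composition of rigid transforms. A sketch, with all names assumed for illustration; T_board_to_rgb and T_board_to_gh5 would come from solvePnP on the same calibration target seen by each camera, and T_depth_to_rgb from the Kinect's calibration data:

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a 3x3 rotation and a translation 3-vector into a 4x4 rigid transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t, dtype=float).ravel()
    return T

def compose_depth_to_gh5(T_depth_to_rgb, T_board_to_rgb, T_board_to_gh5):
    """Pose of the Kinect depth camera in the GH5 camera frame.

    RGB-to-GH5 is recovered from the two solvePnP poses of a shared target:
      T_rgb_to_gh5 = T_board_to_gh5 @ inv(T_board_to_rgb)
    and then chained with the factory depth-to-RGB extrinsic.
    """
    T_rgb_to_gh5 = T_board_to_gh5 @ np.linalg.inv(T_board_to_rgb)
    return T_rgb_to_gh5 @ T_depth_to_rgb
```

The resulting 4x4 matrix is what the reprojection needs; if the GH5 and Kinect RGB poses are measured against the same chessboard in the same shot, no extra calibration step is required beyond the factory extrinsics.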

0 Answers