I am going to build an RGBD camera rig from a Panasonic LUMIX GH5 and an Azure Kinect, similar to a Depthkit Cinema setup.
Depthkit does not provide raw depth data, only OBJ sequence files, and I need a depth buffer that is aligned with the RGB image.
So I started to write the software for this myself. (The GH5 and the Azure Kinect are rigidly mounted together with a SmallRig fixture.)
After obtaining the extrinsic parameters between the GH5 and the Azure Kinect RGB sensor with OpenCV's solvePnP function, how can I use them to align the GH5 colour image with the Azure Kinect depth image?
Or should I take a different approach entirely?
I haven't been able to find any resources on this problem.
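For reference, this is roughly how I compute the relative pose from two solvePnP results on a shared checkerboard view. It is only a sketch: the board points, corner detections, and the intrinsics/distortion of both cameras come from my own calibration and are placeholders here.

```cpp
// Sketch: relative pose of camera B in camera A's frame from one shared
// checkerboard view: T_A_B = T_A_board * inverse(T_B_board).
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

cv::Matx44d relativePose(const std::vector<cv::Point3f>& boardPoints,
                         const std::vector<cv::Point2f>& cornersA,
                         const std::vector<cv::Point2f>& cornersB,
                         const cv::Mat& Ka, const cv::Mat& Da,
                         const cv::Mat& Kb, const cv::Mat& Db)
{
    cv::Mat rvecA, tvecA, rvecB, tvecB;
    cv::solvePnP(boardPoints, cornersA, Ka, Da, rvecA, tvecA);
    cv::solvePnP(boardPoints, cornersB, Kb, Db, rvecB, tvecB);

    cv::Mat Ra, Rb;
    cv::Rodrigues(rvecA, Ra);  // rotation vector -> 3x3 rotation matrix
    cv::Rodrigues(rvecB, Rb);

    // Each solvePnP result maps board coordinates into that camera's frame,
    // so compose A <- board <- B.
    cv::Mat R = Ra * Rb.t();
    cv::Mat t = tvecA - R * tvecB;

    cv::Matx44d T = cv::Matx44d::eye();
    for (int r = 0; r < 3; ++r) {
        for (int c = 0; c < 3; ++c) T(r, c) = R.at<double>(r, c);
        T(r, 3) = t.at<double>(r);
    }
    return T;
}
```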
In the Azure Kinect documentation, I found the "k4a_transformation_depth_image_to_color_camera_custom" function in the Azure Kinect SDK.
Is this function useful for my case? If so, how can I obtain the k4a_transformation_t value for its first parameter?
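From reading the SDK headers, a k4a_transformation_t appears to be created with k4a_transformation_create() from a k4a_calibration_t. My current, untested guess is that I could fetch the device calibration with k4a_device_get_calibration(), overwrite the colour-camera intrinsics and the depth-to-colour extrinsics with the GH5 values from solvePnP, and create the transformation from that patched struct. A sketch of what I mean (the field patching is my assumption, not something the documentation confirms; error handling omitted):

```cpp
#include <k4a/k4a.h>

// Assumption: patching the calibration makes the transformation engine warp
// depth into the GH5's geometry. R is the row-major depth->GH5 rotation and
// t_mm the depth->GH5 translation in millimetres, both from my solvePnP step.
k4a_transformation_t makeGh5Transformation(k4a_device_t device,
                                           const double R[9],
                                           const double t_mm[3],
                                           float fx, float fy, float cx, float cy,
                                           int gh5_width, int gh5_height)
{
    k4a_calibration_t calib;
    k4a_device_get_calibration(device,
                               K4A_DEPTH_MODE_NFOV_UNBINNED,
                               K4A_COLOR_RESOLUTION_1080P,
                               &calib);

    // Replace the Kinect colour camera with the GH5: resolution and pinhole
    // intrinsics. (The Brown-Conrady distortion terms k1..k6, p1, p2 would
    // also need to be filled in from the GH5 calibration.)
    calib.color_camera_calibration.resolution_width  = gh5_width;
    calib.color_camera_calibration.resolution_height = gh5_height;
    calib.color_camera_calibration.intrinsics.parameters.param.fx = fx;
    calib.color_camera_calibration.intrinsics.parameters.param.fy = fy;
    calib.color_camera_calibration.intrinsics.parameters.param.cx = cx;
    calib.color_camera_calibration.intrinsics.parameters.param.cy = cy;

    // Replace the depth->colour extrinsics with the solvePnP result.
    k4a_calibration_extrinsics_t* ex =
        &calib.extrinsics[K4A_CALIBRATION_TYPE_DEPTH][K4A_CALIBRATION_TYPE_COLOR];
    for (int i = 0; i < 9; ++i) ex->rotation[i]    = (float)R[i];
    for (int i = 0; i < 3; ++i) ex->translation[i] = (float)t_mm[i];

    // I'm not sure whether the engine reads the array above or the per-camera
    // field, so patch both.
    calib.color_camera_calibration.extrinsics = *ex;

    // Handle for k4a_transformation_depth_image_to_color_camera(_custom).
    return k4a_transformation_create(&calib);
}
```

Is patching the calibration like this the intended way to use the SDK with an external colour camera, or is there a supported mechanism I'm missing?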