
Currently I'm working on a C# project to merge the color and depth information retrieved from a Kinect v2 device via the Kinect SDK with information gathered from 3D object files (.obj, .stl, and so on). As a result I'd like to create a point cloud that contains the information of both: the coordinates and color information from the Kinect device as well as the information from the object file.

What I've done so far

  • I managed to set up two Pixel[] arrays that hold the needed data: the X/Y/Z coordinates as well as the corresponding RGB color information.
  • One array contains the Kinect data while the other one contains the data of the object file.

    public class Pixel
    {
        // Position (for the Kinect cloud these are camera-space coordinates in meters)
        public double X { get; set; }
        public double Y { get; set; }
        public double Z { get; set; }
        // Color information
        public byte R { get; set; }
        public byte G { get; set; }
        public byte B { get; set; }
    }
    

To get color and depth information and to set up the Kinect array, I first used the Kinect SDK's MultiSourceFrameReader class to acquire a MultiSourceFrame ..

MultiSourceFrame multiSourceFrame = e.FrameReference.AcquireFrame();

.. so that I can acquire the color and depth frames right after that:

DepthFrame depthFrame = multiSourceFrame.DepthFrameReference.AcquireFrame();
ColorFrame colorFrame = multiSourceFrame.ColorFrameReference.AcquireFrame();
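
Note that both AcquireFrame() calls can return null when a frame is dropped, and both frame types are IDisposable, so a null check plus using blocks is the usual pattern:

    using (DepthFrame depthFrame = multiSourceFrame.DepthFrameReference.AcquireFrame())
    using (ColorFrame colorFrame = multiSourceFrame.ColorFrameReference.AcquireFrame())
    {
        if (depthFrame == null || colorFrame == null)
            return; // incomplete frame pair, skip this event

        // ... copy and map the data as described below ...
    }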

With the methods .CopyConvertedFrameDataToArray() and .CopyFrameDataToArray() I get the frame data for the RGB and depth information. The depth data is then mapped to a CameraSpacePoint[] array with the CoordinateMapper's .MapColorFrameToCameraSpace() method, which yields the X/Y/Z coordinates in camera space for every color pixel. With this array and the frame data I can fill the Pixel[] array that holds all the information of the "Kinect point cloud".
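
For reference, here is a condensed sketch of that step. coordinateMapper stands for kinectSensor.CoordinateMapper, and the buffer sizes assume the Kinect v2's default 512×424 depth and 1920×1080 color streams:

    ushort[] depthData = new ushort[512 * 424];
    byte[] colorData = new byte[1920 * 1080 * 4]; // BGRA, 4 bytes per pixel
    CameraSpacePoint[] cameraPoints = new CameraSpacePoint[1920 * 1080];

    depthFrame.CopyFrameDataToArray(depthData);
    colorFrame.CopyConvertedFrameDataToArray(colorData, ColorImageFormat.Bgra);

    // One camera-space point (X/Y/Z in meters) per color pixel:
    coordinateMapper.MapColorFrameToCameraSpace(depthData, cameraPoints);

    Pixel[] kinectCloud = new Pixel[cameraPoints.Length];
    for (int i = 0; i < cameraPoints.Length; i++)
    {
        kinectCloud[i] = new Pixel
        {
            X = cameraPoints[i].X,
            Y = cameraPoints[i].Y,
            Z = cameraPoints[i].Z,
            B = colorData[i * 4],     // converted frame is in BGRA order
            G = colorData[i * 4 + 1],
            R = colorData[i * 4 + 2]
        };
    }

Color pixels that have no depth reading come back from the mapper as negative infinity, so those entries have to be filtered out later.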

To set up the "point cloud" for the 3D object, I used a third-party library to load the object and to extract its X/Y/Z coordinates and color information.
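
The library isn't named here, but for a plain ASCII .obj file the vertex extraction can also be sketched by hand. model.obj is a placeholder name, and the per-vertex r g b values after the coordinates are a non-standard .obj extension that only some exporters write:

    // Requires: System, System.Collections.Generic, System.Globalization, System.IO
    List<Pixel> objectCloud = new List<Pixel>();
    foreach (string line in File.ReadLines("model.obj"))
    {
        string[] parts = line.Split(new[] { ' ', '\t' }, StringSplitOptions.RemoveEmptyEntries);
        if (parts.Length < 4 || parts[0] != "v")
            continue; // only vertex lines: "v x y z [r g b]"

        Pixel p = new Pixel
        {
            X = double.Parse(parts[1], CultureInfo.InvariantCulture),
            Y = double.Parse(parts[2], CultureInfo.InvariantCulture),
            Z = double.Parse(parts[3], CultureInfo.InvariantCulture)
        };
        if (parts.Length >= 7) // optional per-vertex colors as 0..1 floats
        {
            p.R = (byte)(double.Parse(parts[4], CultureInfo.InvariantCulture) * 255);
            p.G = (byte)(double.Parse(parts[5], CultureInfo.InvariantCulture) * 255);
            p.B = (byte)(double.Parse(parts[6], CultureInfo.InvariantCulture) * 255);
        }
        objectCloud.Add(p);
    }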

Next step
My next step would be to merge both of these point clouds, but that's where I'm stuck. I would somehow have to map or resize the 3D information (X/Y/Z) of one point cloud so that it fits the 3D information of the other, as they are scaled differently (see the bounding-box sketch at the end of the Additional Information section for the kind of normalization I have in mind). This leads to my questions:

My Questions

  • How can I scale the object point cloud to make it fit into the Kinect point cloud?
  • How do I find out which pixel of the object overwrites which pixel within the Kinect point cloud? (The resulting point cloud should keep its size of 1920×1080 pixels.)
  • Are there any libraries capable of doing that?

Additional Information
A coordinate within the Kinect point cloud looks like this:
- X: 1.4294..
- Y: 0.9721..
- Z: 2.1650..


A coordinate within the 3D object looks like this:
- X: 0.8331..
- Y: -16.0591..
- Z: 26.8001..
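
To make the scale mismatch concrete, the following is the kind of bounding-box normalization I have in mind for the first question. It is only a sketch of an idea, not a verified solution:

    // Requires: System, System.Linq
    // Uniformly scale and translate the object cloud so that its axis-aligned
    // bounding box fits inside the Kinect cloud's bounding box.
    // Assumes nonzero extents on all three axes.
    static void FitObjectIntoKinectCloud(Pixel[] objectCloud, Pixel[] kinectCloud)
    {
        // Drop invalid Kinect points (NaN / negative infinity from the mapper).
        Pixel[] valid = kinectCloud
            .Where(p => !double.IsNaN(p.X) && !double.IsInfinity(p.X))
            .ToArray();

        double oMinX = objectCloud.Min(p => p.X), oMaxX = objectCloud.Max(p => p.X);
        double oMinY = objectCloud.Min(p => p.Y), oMaxY = objectCloud.Max(p => p.Y);
        double oMinZ = objectCloud.Min(p => p.Z), oMaxZ = objectCloud.Max(p => p.Z);
        double kMinX = valid.Min(p => p.X), kMaxX = valid.Max(p => p.X);
        double kMinY = valid.Min(p => p.Y), kMaxY = valid.Max(p => p.Y);
        double kMinZ = valid.Min(p => p.Z), kMaxZ = valid.Max(p => p.Z);

        // Uniform scale: the largest factor that still fits on every axis.
        double scale = Math.Min((kMaxX - kMinX) / (oMaxX - oMinX),
                       Math.Min((kMaxY - kMinY) / (oMaxY - oMinY),
                                (kMaxZ - kMinZ) / (oMaxZ - oMinZ)));

        // Map the object's center onto the Kinect cloud's center.
        double oCx = (oMinX + oMaxX) / 2, oCy = (oMinY + oMaxY) / 2, oCz = (oMinZ + oMaxZ) / 2;
        double kCx = (kMinX + kMaxX) / 2, kCy = (kMinY + kMaxY) / 2, kCz = (kMinZ + kMaxZ) / 2;

        foreach (Pixel p in objectCloud)
        {
            p.X = (p.X - oCx) * scale + kCx;
            p.Y = (p.Y - oCy) * scale + kCy;
            p.Z = (p.Z - oCz) * scale + kCz;
        }
    }

For the second question I suspect CoordinateMapper.MapCameraPointsToColorSpace could project the transformed object points back onto the 1920×1080 color grid, but I'm not sure how to combine it with the scaling.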

