0

The Kinect v2 color stream only supports 1920x1080, while the depth stream only supports 512x424. When I start a live stream from both sensors, the frames have different sizes because of the different resolutions. I can't simply resize them, because I need the coordinates: when I resize with imresize(), the coordinates no longer match. I have already read the MATLAB documentation, which says the hardware only supports these two formats. How can I make both streams effectively the same resolution in code? I have tried for two days without success. Alternatively, I would like a process that takes the depth image first and then acquires the RGB (color) image based on the depth resolution.

My project is to extract a line from the depth image and map it onto the RGB image of the Kinect v2. Since the resolutions are not the same, the [x, y] coordinates change, so when I map the line onto the RGB image it does not match the coordinates from the depth image. How can I solve this? I thought of changing the resolution, but on the Kinect v2 the resolution cannot be changed, so how can I do it in code?

Here is a link to someone who did something similar. I want to do it in MATLAB or C#.

  • Can you post some examples of what you have tried and why it failed? – RemedialBear Jul 01 '17 at 16:55
  • Added some info, but I don't understand how to write the code for it – Mohammad Yakub Jul 01 '17 at 17:17
  • You cannot resize the RGB image to match the size of the depth image. The only way is to use coordinate mapping. For a MATLAB implementation, you can check the [ToF-Calibration](https://github.com/kapibara/ToF-Calibration) toolbox – Atif Anwer Jul 02 '17 at 04:06

3 Answers

2

For a working example, you can check VRInteraction. It maps the depth image onto the RGB image to build a 3D point cloud.

What you want to achieve is called registration.

  1. Calibrate the Depth camera to find the Depth camera projection matrix (using OpenCV)
  2. Calibrate the RGB camera to find the RGB camera projection matrix (using OpenCV)

    - You can register the Depth image to the RGB image:

This maps each pixel of the Depth image to its corresponding RGB pixel, and ends up with a 1920x1080 RGB-D image. Not all the RGB pixels will have a depth value, since there are fewer depth pixels. For this you need to:

  • calculate the real-world coordinates of each depth pixel using the Depth camera projection matrix
  • project those real-world coordinates into the RGB image using the RGB camera projection matrix
  • look up the matching pixel in the RGB image at the projected coordinates

    - You can register the RGB image to the Depth image:

This maps the corresponding RGB value onto each Depth pixel, and ends up with a 512x424 RGB-D image. The RGB image alone carries no depth, so you still start from the depth pixels. For this you need to:

  • calculate the real-world coordinates of each depth pixel using the Depth camera projection matrix
  • project those real-world coordinates into the RGB image using the RGB camera projection matrix
  • sample the RGB pixel at the projected coordinates and store its value at the depth pixel's location
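Both registrations rest on the same pinhole-camera math: back-project a depth pixel to a 3D point, then project that point into the RGB camera. Here is a minimal Python sketch of that math; the intrinsics, rotation, and translation below are made-up illustrative values, not real calibration results (in practice they come from the OpenCV calibrations in steps 1 and 2).

```python
import numpy as np

def depth_pixel_to_world(u, v, z, fx, fy, cx, cy):
    """Back-project depth pixel (u, v) with depth z (metres)
    into a 3D point in the depth camera's coordinate frame."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def world_to_rgb_pixel(point, K_rgb, R, t):
    """Project a 3D point into the RGB image using the RGB
    intrinsics K_rgb and the depth->RGB extrinsics (R, t)."""
    p = R @ point + t          # move into the RGB camera frame
    uvw = K_rgb @ p            # pinhole projection
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# Illustrative (NOT calibrated) parameters:
fx = fy = 365.0
cx, cy = 256.0, 212.0                          # depth intrinsics
K_rgb = np.array([[1050.0, 0.0, 960.0],
                  [0.0, 1050.0, 540.0],
                  [0.0, 0.0, 1.0]])            # RGB intrinsics
R, t = np.eye(3), np.array([0.052, 0.0, 0.0])  # ~5 cm baseline

# Map one depth pixel at 1.2 m into the RGB image:
P = depth_pixel_to_world(250, 250, 1.2, fx, fy, cx, cy)
u_rgb, v_rgb = world_to_rgb_pixel(P, K_rgb, R, t)
```

As a sanity check: with identical intrinsics and identity extrinsics, projecting a back-projected pixel returns the original pixel coordinates.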

If you want to achieve this in real time, you will need to consider GPU acceleration, especially if your depth image contains more than 30000 depth points.

I wrote my master's thesis on this subject. If you have more questions, I'm more than happy to help.

Shanil Fernando
  • Thanks for your suggestion. I have already done this somehow, but now I face a problem: sometimes I get a zero value for a depth image pixel. How can I get rid of this problem? Do you have any idea about it? – Mohammad Yakub Jul 04 '17 at 15:57
  • The depth range of the Kinect v2 is between 0.5m and 4m, and the FOV is 70.6 x 60 degrees. A zero depth value means the sensor can't read that specific point: it is out of the sensor's range or the sensor can't read it clearly. So you can safely ignore those points or assign a default value as your requirements dictate. – Shanil Fernando Jul 10 '17 at 19:34
  • Thanks, sir. If I map depth to RGB using VRInteraction, will the coordinates be exactly the same for both images at a specific point? How can I map it, just by running VRInteraction? I failed to run it. – Mohammad Yakub Jul 11 '17 at 14:26
1

In C# you can use the CoordinateMapper to map points from one space to another. To map from depth space to color space, subscribe to the MultiSourceFrameArrived event for the color and depth sources and create a handler like this:

  private void MultiFrameReader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
  {
        MultiSourceFrame multiSourceFrame = e.FrameReference.AcquireFrame();
        if (multiSourceFrame == null)
        {
            return;
        }


        using (ColorFrame colorFrame = multiSourceFrame.ColorFrameReference.AcquireFrame())
        {
            if (colorFrame == null) return;

            using (DepthFrame depthFrame = multiSourceFrame.DepthFrameReference.AcquireFrame())
            {
            if (depthFrame == null) return;

                using (KinectBuffer buffer = depthFrame.LockImageBuffer())
                {
                    ColorSpacePoint[] colorspacePoints = new ColorSpacePoint[depthFrame.FrameDescription.Width * depthFrame.FrameDescription.Height];
                    kinectSensor.CoordinateMapper.MapDepthFrameToColorSpaceUsingIntPtr(buffer.UnderlyingBuffer, buffer.Size, colorspacePoints);
                    //A depth point that we want the corresponding color point
                    DepthSpacePoint depthPoint = new DepthSpacePoint() { X=250, Y=250};

                    //The corresponding color point (row-major index: y * width + x)
                    ColorSpacePoint targetPoint = colorspacePoints[(int)(depthPoint.Y * depthFrame.FrameDescription.Width + depthPoint.X)];

                }
            }
        }  
    }

For each pixel in the depthFrame, the colorspacePoints array contains the corresponding point in the colorFrame. You should also check whether targetPoint has an X or Y of infinity, which means there is no corresponding pixel in the target space.
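The lookup into that array is plain row-major indexing: the entry for depth pixel (x, y) lives at index y * width + x (note: width, i.e. 512, not height). A small standalone Python sketch of that indexing, with a hypothetical colorspace_points list standing in for the SDK's array:

```python
import math

def flat_index(x, y, width):
    """Row-major index of pixel (x, y): y full rows of `width`
    entries, plus x entries into the current row."""
    return y * width + x

DEPTH_W, DEPTH_H = 512, 424   # Kinect v2 depth resolution

# One (x, y) color coordinate per depth pixel; the SDK marks
# unmapped pixels with negative infinity.
colorspace_points = [(-math.inf, -math.inf)] * (DEPTH_W * DEPTH_H)
# Hypothetical mapping for depth pixel (250, 250):
colorspace_points[flat_index(250, 250, DEPTH_W)] = (812.4, 610.7)

cx, cy = colorspace_points[flat_index(250, 250, DEPTH_W)]
# Always check for infinity before using the mapped point:
has_match = not (math.isinf(cx) or math.isinf(cy))
```

Indexing with the frame height instead of the width silently reads the wrong pixel for everything off the diagonal, which is an easy bug to miss with the Kinect's non-square frames.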

buluba89
0

You will need to resample (imresize in MATLAB) if you want to overlay both arrays (e.g. to create an RGBD image). Note that the fields of view of the depth and color cameras differ: the far right and left of the color image are not part of the depth image, and the top and bottom of the depth image are not part of the color image.

Consequently, you should

  1. crop the color image in width to match the depth image
  2. crop the depth image in height to match the color image
  3. resample either the color or the depth image using imresize
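If you take this crop-and-resize route, you must apply the same crop offsets and scale factors to your line coordinates, otherwise they drift exactly as described in the question. A Python sketch of that coordinate bookkeeping; the crop values below are made up for illustration (the real overlap region has to be measured or derived from calibration):

```python
def make_mapper(crop_x0, crop_y0, scale_x, scale_y):
    """Return a function mapping (x, y) in the original image to
    (x, y) in the cropped-then-resized image."""
    def mapper(x, y):
        return (x - crop_x0) * scale_x, (y - crop_y0) * scale_y
    return mapper

# Hypothetical numbers: crop the 1920x1080 color image to a
# 1700-pixel-wide region starting at x=110, then resize the
# result to the 512x424 depth resolution.
crop_x0, crop_w = 110, 1700
scale_x, scale_y = 512 / crop_w, 424 / 1080

color_to_depth = make_mapper(crop_x0, 0, scale_x, scale_y)
x_d, y_d = color_to_depth(960, 540)  # color centre in depth coords
```

Keeping the forward mapping as a function makes it easy to build the inverse (divide by the scale, add the offset back), so a line detected on one image can be transferred to the other and back.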
Jonas