
I'm trying to stream Kinect video data (just the image, not depth/infrared), but I find the default buffer size for an image frame is very large (1,228,800 bytes) and impractical to send over a network. I was wondering if there is any way to get access to a smaller array without going down the route of codec compression. Here is how I initialize the Kinect, which I took from a Microsoft sample:

// Turn on the color stream to receive color frames
this.sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);

// Allocate space to put the pixels we'll receive
this.colorPixels = new byte[this.sensor.ColorStream.FramePixelDataLength];

// This is the bitmap we'll display on-screen
this.colorBitmap = new WriteableBitmap(this.sensor.ColorStream.FrameWidth, 
this.sensor.ColorStream.FrameHeight, 96.0, 96.0, PixelFormats.Bgr32, null);

// Set the image we display to point to the bitmap where we'll put the image data
this.kinectVideo.Source = this.colorBitmap;

// Add an event handler to be called whenever there is new color frame data
this.sensor.ColorFrameReady += this.SensorColorFrameReady;

// Start the sensor!
this.sensor.Start();

And here is the frame-ready event handler, from which I then try to send each frame:

    private void SensorColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
    {
        using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
        {
            if (colorFrame != null)
            {
                // Copy the pixel data from the image to a temporary array
                colorFrame.CopyPixelDataTo(this.colorPixels);

                // Write the pixel data into our bitmap
                this.colorBitmap.WritePixels(
                    new Int32Rect(0, 0, this.colorBitmap.PixelWidth, this.colorBitmap.PixelHeight),
                    this.colorPixels,
                    this.colorBitmap.PixelWidth * sizeof(int),
                    0);

                if (NetworkStreamEnabled)
                {
                    networkStream.Write(this.colorPixels, 0, this.colorPixels.Length);
                }
            }
        }
    }
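As an aside on the send itself: the raw write above pushes 1,228,800 bytes per frame into the stream with no delimiters, so the receiver has no way to tell where one frame ends and the next begins, and a blocking `Write` on the UI thread will hang the application exactly as described in the comments below. A minimal sketch of length-prefixed framing (the `SendFrame` helper and the 4-byte header format are my own assumptions, not part of the original code; `networkStream` is assumed to be a `System.Net.Sockets.NetworkStream` as above):

```csharp
// Hypothetical helper: prefix each frame with its byte length so the
// receiver can read exactly one complete frame at a time. Call this
// from a background thread/Task so a slow network never blocks the
// Kinect frame-ready handler or the UI.
private void SendFrame(byte[] pixels)
{
    // 4-byte little-endian length header, then the frame payload
    byte[] header = BitConverter.GetBytes(pixels.Length);
    networkStream.Write(header, 0, header.Length);
    networkStream.Write(pixels, 0, pixels.Length);
}
```

The receiver then reads 4 bytes, decodes the length, and loops on `Read` until that many payload bytes have arrived.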

UPDATE

I'm using the following two methods to convert the ColorImageFrame to a Bitmap, and then the Bitmap to a byte[]. This has brought the buffer size down to ~730,600 bytes. Still not enough, but progress. (Source: Convert Kinect ColorImageFrame to Bitmap)

    public static byte[] ImageToByte(Image img)
    {
        ImageConverter converter = new ImageConverter();
        return (byte[])converter.ConvertTo(img, typeof(byte[]));
    }

    Bitmap ImageToBitmap(ColorImageFrame Image)
    {
        byte[] pixeldata = new byte[Image.PixelDataLength];
        Image.CopyPixelDataTo(pixeldata);
        Bitmap bmap = new Bitmap(Image.Width, Image.Height, System.Drawing.Imaging.PixelFormat.Format32bppRgb);
        BitmapData bmapdata = bmap.LockBits(
            new Rectangle(0, 0, Image.Width, Image.Height),
            ImageLockMode.WriteOnly,
            bmap.PixelFormat);
        IntPtr ptr = bmapdata.Scan0;
        Marshal.Copy(pixeldata, 0, ptr, Image.PixelDataLength);
        bmap.UnlockBits(bmapdata);
        return bmap;
    }
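If the ~730 KB result from `ImageToByte` is still too large, one option worth noting is that once the frame is a `System.Drawing.Bitmap`, it can be JPEG-compressed entirely in memory before sending; a 640x480 frame typically shrinks from ~1.2 MB of raw pixels to a few tens of kilobytes, at the cost of lossy compression. A sketch under that assumption (the helper name and the quality value of 70 are mine; it uses `System.Drawing.Imaging` plus `System.Linq`, and `System.IO`):

```csharp
// Hypothetical helper: encode a Bitmap as JPEG into a MemoryStream
// and return the compressed bytes, instead of the raw pixel array.
public static byte[] BitmapToJpegBytes(Bitmap bmap, long quality = 70L)
{
    using (MemoryStream ms = new MemoryStream())
    {
        // Find the built-in JPEG codec
        ImageCodecInfo jpegCodec = ImageCodecInfo.GetImageEncoders()
            .First(c => c.FormatID == ImageFormat.Jpeg.Guid);

        // Quality is a long in the range 0-100 (higher = larger/better)
        EncoderParameters parameters = new EncoderParameters(1);
        parameters.Param[0] = new EncoderParameter(
            System.Drawing.Imaging.Encoder.Quality, quality);

        bmap.Save(ms, jpegCodec, parameters);
        return ms.ToArray();
    }
}
```

This is still "codec compression" in a loose sense, but it needs no external codec or container format, just per-frame JPEG encoding.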
windowsgm
  • What do you mean by "incapable" exactly? It's perfectly possible to stream large amounts of data. What's going wrong? And what "smaller array" would you expect to have the data without compression? – Jon Skeet Jun 05 '13 at 19:10
  • When I try to send with that size of a buffer my application hangs without a crash, and that's running both clients on the same machine. By smaller array, I mean: is the Kinect trying to make me push useless data (e.g. depth data)? Because the quality of the camera does not look good enough to require a buffer of that size. – windowsgm Jun 05 '13 at 19:19
  • 640 columns * 480 rows * 4 bytes per pixel (BGR32) = 1,228,800 bytes per frame – lnmx Jun 05 '13 at 19:32
  • @lnmx Would BGR24 be a better format for streaming? – windowsgm Jun 05 '13 at 19:36
  • Have you seen [Kinect Service](http://kinectservice.codeplex.com/)? – lnmx Jun 05 '13 at 19:40

1 Answer


My recommendation would be to save each color frame to a bitmap file, send those files over the network, and reassemble them into a video on the other end. A project I've been doing with the Kinect does this:

    // Save to file
    if (skeletonFrame != null)
    {
        RenderTargetBitmap bmp = new RenderTargetBitmap(800, 600, 96, 96, PixelFormats.Pbgra32);
        bmp.Render(window.image);

        JpegBitmapEncoder encoder = new JpegBitmapEncoder();
        // Create a frame from the rendered bitmap and add it to the encoder,
        // throttled to roughly one frame every 90 ms
        if (skeletonFrame.Timestamp - lastTime > 90)
        {
            encoder.Frames.Add(BitmapFrame.Create(bmp));
            string path = "C:\\your\\directory\\here\\" + skeletonFrame.Timestamp + ".jpg";
            using (FileStream fs = new FileStream(path, FileMode.Create))
            {
                encoder.Save(fs);
            }
            lastTime = skeletonFrame.Timestamp;
        }
    }

Of course, if you need this to be in real time, you're not going to like this solution, and I think my "comment" button is gone after the bounty.
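For the real-time case, the same `JpegBitmapEncoder` from the answer can target a `MemoryStream` instead of a file, so the compressed bytes go straight onto the network without a disk round-trip. A sketch under that assumption (the quality value and the use of `networkStream` from the question's handler are mine, not part of the answer's project):

```csharp
// Encode the rendered frame to JPEG in memory and send the bytes,
// rather than saving .jpg files and shipping them afterwards.
JpegBitmapEncoder encoder = new JpegBitmapEncoder();
encoder.QualityLevel = 70;                // trade image quality for size (1-100)
encoder.Frames.Add(BitmapFrame.Create(bmp));

using (MemoryStream ms = new MemoryStream())
{
    encoder.Save(ms);
    byte[] jpegBytes = ms.ToArray();      // typically tens of KB per 640x480 frame
    networkStream.Write(jpegBytes, 0, jpegBytes.Length);
}
```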

nerdenator