I have made an application that shows the picture from the Kinect camera, and I also have a speech recognition method. Now I'm trying to add functionality that performs an action depending on a hand gesture (open/closed hand). I've learned that the InteractionStream can give me this information.
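
For context, here is roughly what my setup looks like (a simplified sketch, not my exact code; `DummyInteractionClient` and the field names are placeholders, and it assumes the `Microsoft.Kinect.Toolkit.Interaction` assembly from SDK 1.7):

```csharp
using Microsoft.Kinect;
using Microsoft.Kinect.Toolkit.Interaction;

// The InteractionStream requires an IInteractionClient, even if you only
// care about grip (closed hand) / grip-release (open hand) events.
public class DummyInteractionClient : IInteractionClient
{
    public InteractionInfo GetInteractionInfoAtLocation(
        int skeletonTrackingId, InteractionHandType handType, double x, double y)
    {
        return new InteractionInfo { IsGripTarget = true, IsPressTarget = false };
    }
}
```

In the sensor initialization I enable the depth and skeleton streams (which the InteractionStream needs) alongside the color stream, and subscribe to the interaction frames:

```csharp
sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30); // camera picture
sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);    // needed by InteractionStream
sensor.SkeletonStream.Enable();                                        // needed by InteractionStream

interactionStream = new InteractionStream(sensor, new DummyInteractionClient());
interactionStream.InteractionFrameReady += InteractionFrameReady;

userInfos = new UserInfo[InteractionFrame.UserInfoArrayLength];

sensor.Start();
```

The handler is where I hope to read the open/closed hand state:

```csharp
private void InteractionFrameReady(object sender, InteractionFrameReadyEventArgs e)
{
    using (InteractionFrame frame = e.OpenInteractionFrame())
    {
        if (frame == null) return;
        frame.CopyInteractionDataTo(userInfos);

        foreach (UserInfo user in userInfos)
            foreach (InteractionHandPointer hand in user.HandPointers)
            {
                if (hand.HandEventType == InteractionHandEventType.Grip)
                    OnHandClosed();        // placeholder for my action
                else if (hand.HandEventType == InteractionHandEventType.GripRelease)
                    OnHandOpened();        // placeholder for my action
            }
    }
}
```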
I have tried implementing it, but when I run the application, the first lens from the left doesn't work (I don't even see the red light from it, and there is no image on the screen).
When I comment out all the code connected with the InteractionStream (initialization, sending frames to it, etc.), the camera works properly: the light comes on and I can see my sad face.
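
The "sending frames" part that I comment out looks roughly like this (again simplified; `depthPixels` and `skeletons` are preallocated from `sensor.DepthStream.FramePixelDataLength` and `sensor.SkeletonStream.FrameSkeletonArrayLength`):

```csharp
private void SensorDepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
    using (DepthImageFrame frame = e.OpenDepthImageFrame())
    {
        if (frame == null) return;
        frame.CopyDepthImagePixelDataTo(depthPixels);
        // Forward the depth data to the InteractionStream
        interactionStream.ProcessDepth(depthPixels, frame.Timestamp);
    }
}

private void SensorSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    using (SkeletonFrame frame = e.OpenSkeletonFrame())
    {
        if (frame == null) return;
        frame.CopySkeletonDataTo(skeletons);
        // Forward the skeleton data plus the current accelerometer reading
        interactionStream.ProcessSkeleton(
            skeletons, sensor.AccelerometerGetCurrentReading(), frame.Timestamp);
    }
}
```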
Is it possible for these streams to work at the same time, or is this just a limitation of the Kinect?