I am working on a project where I need to give a small humanoid robot (a Nao bot) depth perception. I am planning to wire a Kinect into the bot's forehead and integrate it with the robot's current operating and guidance system (the default system, OpenNAO), which runs on Linux and communicates with the bot over Wi-Fi.
Right now I am stuck on which software to use. I have looked at the Point Cloud Library (PCL), which as I understand it handles processing of the actual depth data; OpenNI, which is described as an API framework that lets applications access natural-interaction devices such as the Kinect; and the official Kinect SDK. I'm just not sure how they all fit together.
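From what I've read, the official Kinect SDK is Windows-only, so on a Linux-based system like OpenNAO I'd presumably be pairing OpenNI (device access) with PCL (processing). Here is a minimal sketch of how I understand the two fitting together, based on PCL's OpenNI grabber tutorial, assuming PCL was built with OpenNI support and the OpenNI/SensorKinect drivers are installed: OpenNI drives the hardware, and PCL's `OpenNIGrabber` wraps it and hands each depth frame to a callback as a point cloud.

```cpp
#include <iostream>
#include <unistd.h>
#include <boost/function.hpp>
#include <boost/bind.hpp>
#include <pcl/io/openni_grabber.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Called by the grabber for every depth frame the Kinect delivers.
void cloud_cb (const pcl::PointCloud<pcl::PointXYZ>::ConstPtr &cloud)
{
  // Depth processing (obstacle detection, ground-plane segmentation,
  // etc.) would go here; for now just report the frame size.
  std::cout << "cloud: " << cloud->points.size () << " points\n";
}

int main ()
{
  // OpenNI provides the device/driver layer; PCL's OpenNIGrabber wraps
  // it and converts raw depth frames into pcl::PointCloud objects.
  pcl::OpenNIGrabber grabber;

  boost::function<void (const pcl::PointCloud<pcl::PointXYZ>::ConstPtr&)> f =
    boost::bind (&cloud_cb, _1);
  grabber.registerCallback (f);

  grabber.start ();
  sleep (30);        // stream frames for ~30 seconds
  grabber.stop ();
  return 0;
}
```

(Building this would mean linking against PCL's io module, e.g. via `find_package(PCL)` in CMake.)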
Which of these libraries/frameworks do I need to integrate the Kinect into the robot's operating system?