I have a tracked robot with an iPhone for brains. It has two cameras: one is the iPhone's, the other is a standalone camera. The phone has GPS, a gyroscope, an accelerometer, and a magnetometer, with sensor fusion that separates user-induced acceleration from gravity, so the phone can determine its own attitude in space. I would like to teach the robot to at least avoid walls it has bumped into before.
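For context, this is roughly how I read the fused motion data on the phone via Core Motion (a minimal sketch; the 50 Hz update interval and the use of the main queue are just placeholder choices on my part):

    #import <CoreMotion/CoreMotion.h>

    // Keep a strong reference to the manager (e.g. a property); a local
    // variable would be released under ARC before any updates arrive.
    CMMotionManager *motionManager = [[CMMotionManager alloc] init];
    motionManager.deviceMotionUpdateInterval = 1.0 / 50.0;  // 50 Hz, arbitrary

    [motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue]
                                        withHandler:^(CMDeviceMotion *motion, NSError *error) {
        if (error != nil || motion == nil) { return; }

        // Fused attitude of the phone (and thus the robot) in space.
        CMAttitude *attitude = motion.attitude;              // roll, pitch, yaw in radians

        // Sensor fusion already splits gravity from user-induced acceleration.
        CMAcceleration gravity   = motion.gravity;           // in units of g
        CMAcceleration userAccel = motion.userAcceleration;  // in units of g

        NSLog(@"roll=%.2f pitch=%.2f yaw=%.2f  ax=%.2f ay=%.2f az=%.2f (gz=%.2f)",
              attitude.roll, attitude.pitch, attitude.yaw,
              userAccel.x, userAccel.y, userAccel.z, gravity.z);
    }];

So the inertial side of the problem is mostly covered; it is the visual mapping part I need help with.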
Can anyone suggest a starter project for Visual Simultaneous Localization and Mapping for such a robot? I would be very grateful for an Objective-C implementation; I see some projects written in C/C++ at OpenSlam.org, but those are not my strongest programming languages.
I do not have access to laser rangefinders, so any articles, keywords or scholarly papers on Visual SLAM would also help.
Thank you for your input!