I am working on a line-following robot that uses computer vision. The image processing is done on an iPhone with OpenCV. For controlling the robot I have two ideas:
- Generate sound of a given frequency (e.g. the higher the frequency, the more the robot should move to the left; the lower the frequency, the more it should move to the right). I have already done this successfully on an Android device. I found this code: Produce sounds of different frequencies in Swift, but I do not understand how I can play a sound indefinitely until a new frequency is given. Is this possible with that code, and if so, how? (See the first sketch after this list.)
- If it is possible (which I don't know) to precisely control the output waveform of each stereo channel (one channel for the left motor, one channel for the right motor), I could in theory drive the motor driver directly from those 'sound' waves. Would this be possible, or is it too complicated? (See the second sketch below.)
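
For the first idea, here is a rough, untested sketch of what I imagine, using AVAudioSourceNode (iOS 13+) instead of a pre-rendered buffer: the engine keeps rendering until it is stopped, so the tone plays indefinitely and only changes pitch when the frequency is updated. The class name `ToneGenerator` and the `frequency` property are just placeholders of mine:

```swift
import AVFoundation

/// Plays a sine tone indefinitely; the pitch changes as soon as `frequency` is updated.
final class ToneGenerator {
    /// Frequency in Hz, written from the OpenCV line-detection code.
    /// (A real implementation should probably use an atomic or a lock here,
    /// because the render block runs on the real-time audio thread.)
    var frequency: Double = 440.0

    private let engine = AVAudioEngine()
    private var phase: Double = 0
    private var sampleRate: Double = 44_100   // replaced with the hardware rate in start()

    private lazy var sourceNode = AVAudioSourceNode { [weak self] _, _, frameCount, audioBufferList -> OSStatus in
        guard let self = self else { return noErr }
        let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
        // The phase advances continuously, so the tone never stops or clicks;
        // changing `frequency` only changes the increment from this point on.
        let increment = 2.0 * Double.pi * self.frequency / self.sampleRate
        for frame in 0..<Int(frameCount) {
            let sample = Float(sin(self.phase)) * 0.5
            self.phase += increment
            if self.phase >= 2.0 * Double.pi { self.phase -= 2.0 * Double.pi }
            for buffer in buffers {                       // same sample on every channel
                let channel = UnsafeMutableBufferPointer<Float>(buffer)
                channel[frame] = sample
            }
        }
        return noErr
    }

    func start() throws {
        // On iOS an AVAudioSession with the .playback category should be activated first.
        let hardwareRate = engine.outputNode.outputFormat(forBus: 0).sampleRate
        if hardwareRate > 0 { sampleRate = hardwareRate }

        let format = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                   sampleRate: sampleRate,
                                   channels: 1,
                                   interleaved: false)
        engine.attach(sourceNode)
        engine.connect(sourceNode, to: engine.mainMixerNode, format: format)
        engine.connect(engine.mainMixerNode, to: engine.outputNode, format: nil)
        try engine.start()   // the tone keeps playing until engine.stop() is called
    }
}

// Usage: start once, then only update the frequency from the vision loop.
// let tone = ToneGenerator()
// try tone.start()
// tone.frequency = 880   // e.g. turn further left, per the mapping above
```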
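For the second idea, the render block could in principle write a different waveform to each channel, since the buffers are non-interleaved (one Float buffer per channel). Below is an untested variant of the render block from the sketch above; it assumes the class is extended with hypothetical `leftFrequency`, `rightFrequency`, `phaseLeft` and `phaseRight` properties and is connected with a 2-channel format (`channels: 2`) instead of the mono one:

```swift
// Variant of the render block above for independent left/right waveforms.
// Assumes `leftFrequency`, `rightFrequency`, `phaseLeft`, `phaseRight` stored
// properties and a 2-channel (non-interleaved) connection format.
private lazy var stereoSourceNode = AVAudioSourceNode { [weak self] _, _, frameCount, audioBufferList -> OSStatus in
    guard let self = self else { return noErr }
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    guard buffers.count >= 2 else { return noErr }
    let left  = UnsafeMutableBufferPointer<Float>(buffers[0])   // left motor channel
    let right = UnsafeMutableBufferPointer<Float>(buffers[1])   // right motor channel
    let incLeft  = 2.0 * Double.pi * self.leftFrequency  / self.sampleRate
    let incRight = 2.0 * Double.pi * self.rightFrequency / self.sampleRate
    for frame in 0..<Int(frameCount) {
        left[frame]  = Float(sin(self.phaseLeft))  * 0.5
        right[frame] = Float(sin(self.phaseRight)) * 0.5
        self.phaseLeft  += incLeft
        self.phaseRight += incRight
        if self.phaseLeft  >= 2.0 * Double.pi { self.phaseLeft  -= 2.0 * Double.pi }
        if self.phaseRight >= 2.0 * Double.pi { self.phaseRight -= 2.0 * Double.pi }
    }
    return noErr
}
```

Whether driving a motor driver from that output is actually feasible is exactly what I am unsure about.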
Note that I would like to avoid using wireless communication such as a Bluetooth or Wi-Fi module, since the environment in which the robot will be used will have a lot of potential interference.
Thank you in advance!