I'm building an app that generates sound (mostly experimental for now) and plays it on an Android phone.
At the moment I'm just trying to play a simple 440 Hz sine wave. I first tried an AudioTrack but ran into buffer underruns, so I decided to take a look at OpenSL ES.
I've read lots of tutorials and blog posts on the subject, and finally wrote my own implementation, using an OpenSL ES engine with an Android simple buffer queue.
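Here's a condensed sketch of that setup, with error checking and cleanup stripped; I've assumed 16-bit mono PCM at 44.1 kHz and a 2-buffer queue here, and `bqCallback` is the buffer queue callback shown in the next snippet:

```c
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>

static SLObjectItf engineObj, outputMixObj, playerObj;
static SLEngineItf engine;
static SLPlayItf player;
static SLAndroidSimpleBufferQueueItf bq;

/* Defined in the next snippet. */
static void bqCallback(SLAndroidSimpleBufferQueueItf bq, void *context);

void createPlayer(void) {
    slCreateEngine(&engineObj, 0, NULL, 0, NULL, NULL);
    (*engineObj)->Realize(engineObj, SL_BOOLEAN_FALSE);
    (*engineObj)->GetInterface(engineObj, SL_IID_ENGINE, &engine);

    (*engine)->CreateOutputMix(engine, &outputMixObj, 0, NULL, NULL);
    (*outputMixObj)->Realize(outputMixObj, SL_BOOLEAN_FALSE);

    /* Source: a 2-buffer Android simple buffer queue of 16-bit mono PCM at 44.1 kHz. */
    SLDataLocator_AndroidSimpleBufferQueue loc = {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
    SLDataFormat_PCM fmt = {SL_DATAFORMAT_PCM, 1, SL_SAMPLINGRATE_44_1,
                            SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
                            SL_SPEAKER_FRONT_CENTER, SL_BYTEORDER_LITTLEENDIAN};
    SLDataSource src = {&loc, &fmt};

    SLDataLocator_OutputMix locOut = {SL_DATALOCATOR_OUTPUTMIX, outputMixObj};
    SLDataSink snk = {&locOut, NULL};

    const SLInterfaceID ids[1] = {SL_IID_ANDROIDSIMPLEBUFFERQUEUE};
    const SLboolean req[1] = {SL_BOOLEAN_TRUE};
    (*engine)->CreateAudioPlayer(engine, &playerObj, &src, &snk, 1, ids, req);
    (*playerObj)->Realize(playerObj, SL_BOOLEAN_FALSE);
    (*playerObj)->GetInterface(playerObj, SL_IID_PLAY, &player);
    (*playerObj)->GetInterface(playerObj, SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &bq);

    (*bq)->RegisterCallback(bq, bqCallback, NULL);
    (*player)->SetPlayState(player, SL_PLAYSTATE_PLAYING);

    /* Prime the queue once so the callback chain starts. */
    bqCallback(bq, NULL);
}
```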
In the buffer queue callback I generate the next buffer of data and enqueue it, but the latency is much worse than with the AudioTrack, and I can hear gaps between consecutive buffers. A simplified version of the callback follows.
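This is roughly what the callback does now; the buffer size and the names `fillSine` / `FRAMES_PER_BUFFER` are illustrative, not from any particular tutorial:

```c
#include <math.h>
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>

#define SAMPLE_RATE 44100
#define FRAMES_PER_BUFFER 1024     /* illustrative; I've tried several sizes */

static short pcm[2][FRAMES_PER_BUFFER];  /* double buffer */
static int cur = 0;
static double phase = 0.0;

/* Synthesize the next chunk of the 440 Hz sine, continuing from the last phase. */
void fillSine(short *buf, int frames) {
    const double inc = 2.0 * M_PI * 440.0 / SAMPLE_RATE;
    for (int i = 0; i < frames; ++i) {
        buf[i] = (short)(32767.0 * sin(phase));
        phase += inc;
        if (phase >= 2.0 * M_PI) phase -= 2.0 * M_PI;  /* keep phase bounded */
    }
}

/* Fired by OpenSL when a buffer has finished playing: synthesize the next
   buffer right here in the callback, then hand it straight back to the queue. */
static void bqCallback(SLAndroidSimpleBufferQueueItf bq, void *context) {
    fillSine(pcm[cur], FRAMES_PER_BUFFER);           /* synthesis in the callback */
    (*bq)->Enqueue(bq, pcm[cur], sizeof(pcm[cur]));  /* size is in bytes */
    cur ^= 1;                                        /* flip to the other buffer */
}
```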
My question is: what is the best practice / architecture for generated sound in OpenSL ES? Should I fill the buffers in a separate thread (which would then need some synchronization with the buffer queue callback)? A sketch of what I mean follows.
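To make that second half concrete, this is the kind of producer-thread scheme I have in mind; it's entirely hypothetical, the ring/semaphore names are mine, and a semaphore-guarded ring is just one possible synchronization choice:

```c
#include <pthread.h>
#include <semaphore.h>
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>

#define NUM_BUFS 4
#define FRAMES_PER_BUFFER 1024

void fillSine(short *buf, int frames);  /* the same generator as above */

static short ring[NUM_BUFS][FRAMES_PER_BUFFER];
static int readIdx = 0, writeIdx = 0;
static sem_t filled, empty;             /* counting semaphores guarding the ring */

/* Worker thread: keep the ring of buffers topped up ahead of playback. */
static void *producerThread(void *arg) {
    for (;;) {
        sem_wait(&empty);               /* wait for a free slot */
        fillSine(ring[writeIdx], FRAMES_PER_BUFFER);
        writeIdx = (writeIdx + 1) % NUM_BUFS;
        sem_post(&filled);              /* mark one buffer ready */
    }
    return NULL;
}

/* The callback no longer synthesizes; it only hands over a pre-filled buffer.
   sem_wait should never block if the producer keeps ahead, but whether any
   blocking belongs in the callback at all is exactly what I'm unsure about. */
static void bqCallbackThreaded(SLAndroidSimpleBufferQueueItf bq, void *context) {
    sem_wait(&filled);
    (*bq)->Enqueue(bq, ring[readIdx], sizeof(ring[readIdx]));
    readIdx = (readIdx + 1) % NUM_BUFS;
    sem_post(&empty);                   /* recycle the slot just consumed */
}

void startProducer(void) {
    sem_init(&filled, 0, 0);
    sem_init(&empty, 0, NUM_BUFS);
    pthread_t t;
    pthread_create(&t, NULL, producerThread, NULL);
}
```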
I haven't yet found any tutorials on OpenSL ES for generated sound; most cover playing audio files or routing audio input to audio output.