
I'm building an app that will generate sound (for now it's mostly experimental) and play it on an Android phone.

For now I'm trying to play a simple sine wave (440 Hz). I first tried an AudioTrack but experienced buffer underruns, so I decided to take a look at OpenSL.

Now I've read lots of tutorials and blog posts on this, and finally made my own implementation, using an OpenSL Engine with an Android Simple Buffer Queue.

Now in the buffer callback, I generate new buffer data and add it to the queue, but the latency is much worse than with the AudioTrack (I can hear gaps between buffers).

My question is: what is the best practice / architecture for generated sound in OpenSL? Should I fill the buffers in a separate thread (which would then need some synchronization with the buffer callback)?

I haven't found any tutorials yet on OpenSL ES for generated sound (most are about playing audio files or routing audio input to audio output).
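To make the setup concrete, this is roughly what my per-buffer generation looks like (a minimal sketch, assuming 16-bit mono PCM at 44.1 kHz; the renderSine name and the constants are just illustrative):

#include <math.h>

#define SAMPLE_RATE       44100
#define FRAMES_PER_BUFFER 512

// phase is kept across calls so consecutive buffers join without a click
static double phase = 0.0;

void renderSine( short *buffer, int frames )
{
    double phaseIncrement = 2.0 * M_PI * 440.0 / SAMPLE_RATE;
    for ( int i = 0; i < frames; ++i )
    {
        buffer[ i ] = ( short ) ( sin( phase ) * 32767.0 );
        phase += phaseIncrement;
        if ( phase >= 2.0 * M_PI )
            phase -= 2.0 * M_PI;
    }
}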

XGouchet
  • _"the latency is much worse than the audio track (I can hear gaps between each buffers)"_. Sounds to me like you're describing a buffer underrun (buffers are being enqueued without having been filled up completely). Unless you're stopping and restarting your player object between each buffer there should be no latency in between buffers; only the initial one when you first start the player object. As for your question; I simply use the buffer queue callback to enqueue the next buffer, but I had to try different buffer sizes before I found one that worked. – Michael Jan 27 '14 at 09:40

1 Answer


Regarding the latency: it is important to choose the right sample rate and buffer size for your device. You can query the device for the recommended values using the Android SDK's AudioManager (PROPERTY_OUTPUT_SAMPLE_RATE and PROPERTY_OUTPUT_FRAMES_PER_BUFFER are only available from API level 17) and pass the values on to the NDK side:

// query the device-preferred sample rate and buffer size; the two
// properties used below are only available from API level 17
int sampleRate = 48000; // fallback for devices below API 17
int bufferSize = 512;   // fallback for devices below API 17

if ( android.os.Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR1 )
{
    AudioManager am = ( AudioManager ) aContext.getSystemService( Context.AUDIO_SERVICE );
    sampleRate = Integer.parseInt( am.getProperty( AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE ));
    bufferSize = Integer.parseInt( am.getProperty( AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER ));
}
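On the native side, a hypothetical JNI entry point could then receive these values (the class and method names below are illustrative, not part of the answer):

#include <jni.h>

// device-preferred values, used later when configuring the OpenSL player
static int sDeviceSampleRate      = 48000;
static int sDeviceFramesPerBuffer = 512;

// matches a hypothetical Java declaration such as
// private static native void init( int sampleRate, int bufferSize );
// in a class com.example.audio.NativeAudio
JNIEXPORT void JNICALL
Java_com_example_audio_NativeAudio_init( JNIEnv *env, jclass clazz,
                                         jint sampleRate, jint bufferSize )
{
    sDeviceSampleRate      = sampleRate;
    sDeviceFramesPerBuffer = bufferSize;
}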

Getting the sample rate right matters because if it differs from the device's preferred sample rate (some devices use 48 kHz, others 44.1 kHz), the audio is routed through a system resampler before it is output by the hardware, adding to the overall latency. Getting the buffer size right matters because it prevents samples/frames from being dropped after several buffer callbacks, which might lead to the problem you describe, where gaps / glitches occur between callbacks. You can scale the buffer size by powers of two to experiment: a larger buffer gives a more stable engine, a smaller buffer a faster response.
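The queried sample rate ends up in the player's PCM format when the OpenSL player is created; a minimal sketch in C (the sampleRate parameter is assumed to hold the value passed down from the Java side):

#include <SLES/OpenSLES.h>

static void configurePcmFormat( SLDataFormat_PCM *format, int sampleRate )
{
    format->formatType    = SL_DATAFORMAT_PCM;
    format->numChannels   = 1;
    // OpenSL ES expects the rate in milliHertz, hence the * 1000
    format->samplesPerSec = ( SLuint32 ) sampleRate * 1000;
    format->bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_16;
    format->containerSize = SL_PCMSAMPLEFORMAT_FIXED_16;
    format->channelMask   = SL_SPEAKER_FRONT_CENTER;
    format->endianness    = SL_BYTEORDER_LITTLEENDIAN;
}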

Having created some simple Android apps that do exactly this, I've written a small write-up explaining the above recommendation in slightly more detail, along with how a basic sequenced engine for music-related applications could be constructed. Note that the page is just a basic architecture outline and might be completely useless depending on your needs: Android audio engine in OpenSL

Igor Zinken
  • I don't understand: Google advertises Oboe as API 16+, yet AudioManager is a must for OpenSL and requires API 17. I'm confused. My minimum SDK is API 16, but do I now have to raise it to 17? I don't want to lose that 0.6% share of the market; any workarounds? – cs guy Nov 07 '20 at 15:44
  • AudioManager is actually available from API 1; however, the properties for sample rate and frames per buffer were only added in API 17. OpenSL actually works from API 10. What you can do for users < API 17 is default the sample rate (48 kHz is a good bet) and take a buffer size of 512 samples. If there is a mismatch between your sample rate and the device sample rate there will be some added latency, as the system will force the audio stream through a resampler. If the buffer size is not a multiple of the native size it might lead to a less than optimal render cycle. Better than nothing, though! – Igor Zinken Nov 07 '20 at 17:44
  • Sounds very reasonable, thank you for this. I think I will set my API to 16 and try this. – cs guy Nov 07 '20 at 19:43