What does the '2' stand for in the following:

SLDataLocator_AndroidSimpleBufferQueue loc_bq =
    {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};

From what I've read, it is the number of buffers.

Why 2? Why not just 1? And if 2 is better, why not 10, to make it even better?

Thanks

user1884325
  • 2 is the minimum necessary for overlapping playback with generation (google "double buffering"). More than 2 can be useful in the case where the callback is scheduled irregularly. I have seen example code that uses more than 2 buffers. PortAudio uses more than 2 buffers for a number of native APIs for example. – Ross Bencina Apr 24 '14 at 13:29

1 Answer


Why 2?

If you've got 2 buffers you can fill one with new data while the other is playing. Additionally, until recently you were required to have at least 2 buffers in your buffer queue if you wanted to be able to use Android's low-latency audio path.
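For illustration, here's a minimal sketch of the callback side of a double-buffered OpenSL ES player (the names render_audio, NUM_BUFFERS and FRAMES_PER_BUFFER are illustrative, not from the question, and the player/queue setup is assumed to already exist):

#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>

#define NUM_BUFFERS       2    /* the '2' from the locator */
#define FRAMES_PER_BUFFER 480  /* 10 ms of mono 16-bit audio at 48 kHz */

static short buffers[NUM_BUFFERS][FRAMES_PER_BUFFER];
static int next_buf = 0;

/* Hypothetical synthesis routine: fills 'out' with 'frames' new samples. */
extern void render_audio(short *out, int frames);

/* OpenSL ES calls this each time the player finishes consuming a buffer.
 * While this callback refills one buffer, the player is still playing the
 * other, so the output never starves (as long as we return in time). */
void bqPlayerCallback(SLAndroidSimpleBufferQueueItf bq, void *context)
{
    short *buf = buffers[next_buf];
    render_audio(buf, FRAMES_PER_BUFFER);
    (*bq)->Enqueue(bq, buf, sizeof(buffers[0]));
    next_buf = (next_buf + 1) % NUM_BUFFERS;
}

Before starting playback you'd prime the queue by enqueuing both buffers; from then on the callback keeps the queue one buffer ahead of the player.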

Why not just 1?

Filling the buffer with new data becomes tricky if you've only got a single buffer: the moment you get the callback, the player has nothing left to consume, so any delay in generating and enqueuing new data risks an audible glitch (an underrun).

And if 2 is better, why not 10 then to make it even better?

As you increase the number of buffers you also increase the latency (the time from when you enqueue a buffer until when that buffer will be played), assuming that you keep the buffer sizes the same.
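To put rough numbers on that (the figures below are illustrative, not from the answer): a newly enqueued buffer only plays after every buffer ahead of it in the queue, so worst-case latency scales with the buffer count:

/* Worst-case queueing latency, assuming equally sized buffers.
 * Illustrative helper, not part of the OpenSL ES API. */
double queue_latency_ms(int num_buffers, int frames_per_buffer, int sample_rate)
{
    return num_buffers * frames_per_buffer * 1000.0 / sample_rate;
}

/* queue_latency_ms(2, 480, 48000)  -> 20 ms
 * queue_latency_ms(10, 480, 48000) -> 100 ms
 * Same buffer size, five times the latency: why more isn't simply better. */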

Michael
  • I'm not sure I understand. If the user's OpenSL speaker callback implementation is called, it is because the audio subsystem is ready for more speaker data. So are you saying that this is not true? Let's take an example with only 1 buffer: [1] the speaker callback is called at t = 0 and the buffer is filled with 10 ms of audio; [2] the speaker callback is called at t = 10 ms and the buffer is filled with 10 ms of audio; and so on. So are you saying that the data written to the buffer in [2] might overwrite data which hasn't been rendered yet (even though OpenSL calls for more data)? – user1884325 Feb 24 '14 at 17:35
  • You'd get the buffer queue callback after your buffer has been consumed by the player, which means that the next data to enqueue has to be immediately ready (and even then I'm not certain you'll be able to enqueue it before the player wants to start consuming more buffer data). If you had 2 or more buffers in your buffer queue there would still be 1 or more buffers left in your queue, which leaves you some time to generate and enqueue another buffer. – Michael Feb 24 '14 at 17:49
  • You say that I get the buffer queue callback after my buffer has been consumed, and then - at the same time - you say that you're not sure if I will be able to enqueue data before the player wants to start consuming more data. This doesn't make sense. If you get the callback the player _is_ ready to consume more data, right? So if data is guaranteed to be available when the callback is called, it must be sufficient to have just one buffer; i.e. callback is called, data is enqueued, and the next time the callback is called the enqueued data has been consumed so you enqueue the next data – user1884325 Feb 24 '14 at 18:26
  • _"it must be sufficient to have just one buffer"_ All I'm saying is that I'm not certain of it (as nothing is instantaneous). You can test it and see if it's robust enough for your application. – Michael Feb 24 '14 at 19:00
  • Consider the zero-copy case: usually the situation with double buffering is that one buffer plays while the other is being filled (similar to frame buffer swapping for graphics). With only one buffer (that is either being played, or being filled), there is no way to present a continuous stream of audio to the output. – Ross Bencina Apr 24 '14 at 13:28
  • @Ross: I understand your point and I should probably use 2 buffers instead of 1. However, wouldn't it be audible and thus evident if the 1-buffer scheme didn't work? I mean, you would hear all kinds of "noise" in the rendered audio then, right? – user1884325 Apr 24 '14 at 15:14
  • @user1884325 One reason that a single buffer might work is that the OpenSL implementation you're using does its own internal buffering and "reads ahead" on your data. Can you be sure that all implementations will do that? If all you care about is the system that you're testing on, then you're golden. The problem is knowing whether it will work on other systems. Also note that with a single buffer, the buffer is either with your code or with OpenSL -- there's no scope for overlapping operations, so you are constrained to use only whatever CPU time is left once OpenSL has done its thing. – Ross Bencina Apr 24 '14 at 18:46
  • A 1-buffer scheme is OK if you're just playing audio and preparing it fast enough, but not doing recording at the same time. But sh*t happens when recording and audio playback are done simultaneously. The problem lies somewhere inside the audio subsystem. For a real-time app, I suggest using 2 or 3 buffers. – sancheese Jan 14 '19 at 08:02