
I have been experimenting with OpenSL ES for a few weeks now. I'm trying to access the buffer while playing a file from the SD card using SL_DATALOCATOR_URI as the source. I want to write a few effects of my own and need the buffer for that.

Currently the code creates two audio players: one reads the file into a buffer, the other writes the buffer to the output. When I test the code with the microphone (recorder), everything is fine: sound in/out works as expected.
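Roughly, the wiring I mean is the following (a trimmed sketch, not my actual code: engine/output-mix creation, realization and error checks are omitted, and the file path, buffer count and PCM format are placeholders):

    #include <SLES/OpenSLES.h>
    #include <SLES/OpenSLES_Android.h>

    /* Sketch only. 'engine' is an already-realized SLEngineItf and 'outputMixObj'
       an already-realized output mix; both players still need Realize() and
       GetInterface() afterwards. */
    static void create_players_sketch(SLEngineItf engine, SLObjectItf outputMixObj,
                                      SLObjectItf *decoderObj, SLObjectItf *playerObj)
    {
        const SLInterfaceID ids[1] = {SL_IID_ANDROIDSIMPLEBUFFERQUEUE};
        const SLboolean     req[1] = {SL_BOOLEAN_TRUE};

        /* Player 1: URI source -> Android simple buffer queue sink.
           Its buffer-queue callback is where I get hold of the PCM data. */
        SLDataLocator_URI loc_uri  = {SL_DATALOCATOR_URI, (SLchar *)"file:///sdcard/test.mp3"};
        SLDataFormat_MIME fmt_mime = {SL_DATAFORMAT_MIME, NULL, SL_CONTAINERTYPE_UNSPECIFIED};
        SLDataSource      uriSrc   = {&loc_uri, &fmt_mime};

        SLDataLocator_AndroidSimpleBufferQueue loc_decode_bq =
            {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
        /* Nominal PCM values; the real decoded format has to be checked at runtime. */
        SLDataFormat_PCM fmt_pcm = {SL_DATAFORMAT_PCM, 2, SL_SAMPLINGRATE_44_1,
                                    SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
                                    SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT,
                                    SL_BYTEORDER_LITTLEENDIAN};
        SLDataSink decodeSink = {&loc_decode_bq, &fmt_pcm};

        (*engine)->CreateAudioPlayer(engine, decoderObj, &uriSrc, &decodeSink, 1, ids, req);

        /* Player 2: Android simple buffer queue source -> output mix sink.
           The buffers read from player 1 are (processed and) enqueued here. */
        SLDataLocator_AndroidSimpleBufferQueue loc_play_bq =
            {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
        SLDataSource bqSrc = {&loc_play_bq, &fmt_pcm};

        SLDataLocator_OutputMix loc_out = {SL_DATALOCATOR_OUTPUTMIX, outputMixObj};
        SLDataSink outSink = {&loc_out, NULL};

        (*engine)->CreateAudioPlayer(engine, playerObj, &bqSrc, &outSink, 1, ids, req);
    }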

When I swap the recorder for a URI audio player, the queue goes haywire. The streaming does not respect the thread locks (as far as I understand, it runs asynchronously), the buffer callbacks are not fired correctly, and time just flies by.

I've put logging into every method, so the result looks something like this:

V/PDecoder( 1292): Position : 15023
V/PDecoder( 1292): Position : 16044
V/PDecoder( 1292): Position : 17043
V/PDecoder Native PL1( 1292): bqPlayerCallback
V/PDecoder Native PL1( 1292): Notify thread lock
V/PDecoder Native PL1( 1292): android_AudioIn 32768
V/PDecoder Native PL1( 1292): Wait thread lock
V/PDecoder Native PL1( 1292): android_AudioOut 32768
V/PDecoder Native PL1( 1292): android_AudioIn 32768
V/PDecoder Native PL1( 1292): android_AudioOut 32768
V/PDecoder Native PL1( 1292): Wait thread lock
V/PDecoder Native PL1( 1292): bqRecorderCallback
V/PDecoder Native PL1( 1292): Notify thread lock
V/PDecoder( 1708): Position : 18041
V/PDecoder( 1708): Position : 19040
V/PDecoder( 1708): Position : 20038

Seconds fly by before the queue callbacks are even fired.

So the question is: how can I fix this? Is there a way to build an audio player > buffer > output chain for URI playback? What am I doing wrong? If someone can point me in the right direction, it would be greatly appreciated.

The code is a little long for pasting here, so here are the gists

– emrahgunduz

3 Answers


After losing myself in the code I posted in the question, I decided to write it again, as cleanly as possible.

It turned out that I was not locking the URI player after all. I'm adding the final working code at the end of the answer. The code is fine for playing a local file or a URL, but it needs to run in a thread started from Java, or you will lock up the GUI thread.
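For reference, the "thread lock" you see in the logs is nothing exotic: it's just a wait/notify guard between the buffer-queue callbacks and the feeding loop. A minimal sketch of the idea (illustrative names, not the exact code in the gists):

    #include <pthread.h>

    /* Minimal wait/notify guard (illustrative, not the gist code). */
    typedef struct {
        pthread_mutex_t mutex;
        pthread_cond_t  cond;
        int             signalled;
    } thread_lock_t;

    static void lock_init(thread_lock_t *l) {
        pthread_mutex_init(&l->mutex, NULL);
        pthread_cond_init(&l->cond, NULL);
        l->signalled = 0;
    }

    /* Feeding loop: block until a buffer-queue callback has fired. */
    static void lock_wait(thread_lock_t *l) {
        pthread_mutex_lock(&l->mutex);
        while (!l->signalled)
            pthread_cond_wait(&l->cond, &l->mutex);
        l->signalled = 0;
        pthread_mutex_unlock(&l->mutex);
    }

    /* OpenSL callback: wake the feeding loop up. */
    static void lock_notify(thread_lock_t *l) {
        pthread_mutex_lock(&l->mutex);
        l->signalled = 1;
        pthread_cond_signal(&l->cond);
        pthread_mutex_unlock(&l->mutex);
    }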

PS. The buffer lives on the stack, so you might want to move it to the heap and probably keep the pointer in the state struct. Also, the play, pause and destroy methods are not finished; if you want to use the code, you can easily implement them yourself.
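A rough sketch of what I mean, with made-up struct and field names (not the ones in the gists):

    #include <stdint.h>
    #include <stdlib.h>

    /* Keep the audio buffer on the heap and carry the pointer in the state struct. */
    typedef struct {
        int16_t *buffer;        /* decoded PCM samples, heap allocated */
        size_t   buffer_frames; /* capacity in frames                  */
        /* ... OpenSL objects, interfaces, thread lock, etc. ...       */
    } player_state_t;

    static int player_state_alloc(player_state_t *st, size_t frames, int channels) {
        st->buffer        = (int16_t *)malloc(frames * (size_t)channels * sizeof(int16_t));
        st->buffer_frames = frames;
        return st->buffer != NULL;
    }

    static void player_state_free(player_state_t *st) {
        free(st->buffer);
        st->buffer        = NULL;
        st->buffer_frames = 0;
    }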

Bonus: the code also includes a simple way to call Java instance methods from native code (without the dreaded *env passed in from the Java side). If you need it, look at JNI_OnLoad, then the playStatusCallback() and callPositionChanged() methods.
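The pattern is roughly the following (the method signature and variable names here are simplified placeholders, not the exact gist code):

    #include <jni.h>

    static JavaVM   *g_vm = NULL;
    static jobject   g_listener = NULL;        /* global ref to the Java instance */
    static jmethodID g_positionChanged = NULL;

    /* Cache the VM once; no *env needs to be passed around afterwards. */
    jint JNI_OnLoad(JavaVM *vm, void *reserved) {
        (void)reserved;
        g_vm = vm;
        return JNI_VERSION_1_6;
    }

    /* Called once from a native method that still has env + the Java instance. */
    void cache_listener(JNIEnv *env, jobject instance) {
        g_listener = (*env)->NewGlobalRef(env, instance);
        jclass cls = (*env)->GetObjectClass(env, instance);
        g_positionChanged = (*env)->GetMethodID(env, cls, "positionChanged", "(I)V");
    }

    /* Called later from the native playback thread, without any env from Java. */
    void call_position_changed(int seconds) {
        JNIEnv *env = NULL;
        if ((*g_vm)->AttachCurrentThread(g_vm, &env, NULL) != JNI_OK) return;
        (*env)->CallVoidMethod(env, g_listener, g_positionChanged, (jint)seconds);
        /* Detach only when the thread is about to exit, not after every call. */
    }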

The code is a little long for pasting here, so here are the gists

– emrahgunduz

Emrah, this is precisely the problem I've been having in my project right now. I've been following this blog post:

http://audioprograming.wordpress.com/2012/10/29/lock-free-audio-io-with-opensl-es-on-android/

which is the circular buffer implementation of this, from the same blog:

http://audioprograming.wordpress.com/2012/03/03/android-audio-streaming-with-opensl-es-and-the-ndk/

In any case, after studying the code it looks like he has his own versions of your opensl-native .h and .c files, named opensl_io. He also has another module, opensl_example, that has an inbuffer and an outbuffer with a bit of simple processing in between. His recorder object fills the inbuffer of opensl_example, and the outbuffer feeds his audio player object, which plays to the sink. From the sound of it, you were doing the same thing.

Basically, I'm trying to replace the recorder object with an input stream from a file, since I need access to the file's buffer in chunks if I want to, for example, process each chunk differently while streaming. You are using SL_DATALOCATOR_URI with the UTF-8 converted URI, which is what I'm trying to do now, but I'm not exactly sure how to get a stream out of it.

Right now the blog example takes the audio input from the recorder object as a stream, puts it into the circular buffers as they fill, and runs it through the processing to the output. I'm trying to replace the source of the recorder buffers with my chunks of buffer from the mp3. Again, it sounds like your code does precisely that. The audioprograming blog's example is particularly hard for me to modify because I'm not entirely sure how SWIG works, but since you're using JNI it might be easier.
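As far as I can tell, the ring buffer part of the blog code boils down to something like this (my own simplified single-producer/single-consumer version, not the blog's lock-free implementation):

    #include <stdlib.h>

    typedef struct {
        short *data;
        size_t size;       /* capacity in samples */
        size_t read_pos;
        size_t write_pos;
    } circ_buf_t;

    static circ_buf_t *circ_create(size_t samples) {
        circ_buf_t *cb = calloc(1, sizeof(circ_buf_t));
        cb->data = calloc(samples, sizeof(short));
        cb->size = samples;
        return cb;
    }

    /* Producer: in the blog this is fed by the recorder callback; I want to feed
       it with decoded mp3 chunks instead. Returns the number of samples written. */
    static size_t circ_write(circ_buf_t *cb, const short *src, size_t n) {
        size_t written = 0;
        while (written < n && (cb->write_pos + 1) % cb->size != cb->read_pos) {
            cb->data[cb->write_pos] = src[written++];
            cb->write_pos = (cb->write_pos + 1) % cb->size;
        }
        return written;
    }

    /* Consumer: drained by the output player's callback. Returns samples read. */
    static size_t circ_read(circ_buf_t *cb, short *dst, size_t n) {
        size_t read = 0;
        while (read < n && cb->read_pos != cb->write_pos) {
            dst[read++] = cb->data[cb->read_pos];
            cb->read_pos = (cb->read_pos + 1) % cb->size;
        }
        return read;
    }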

Can you advise me on how yours works? Do you simply call StartPDecoderNative and then DecodeOn from Java with the URI string as a parameter?

– B.C
  • You can check the code in the accepted answer. I create two players, a buffer player and a URI player. The URI player fills its own buffer by reading a file (or a URL if you like) and calls the queue callback; then I push the buffer to the output player. It's not too hard, but you can get lost because of the OpenSL boilerplate. I'm calling StartPDecoderNative with a file URI (e.g. file:///sdcard/file.mp3), then I'm calling DecodeOn(), which initializes all the required objects. Call these methods from the Java side and it will play. – emrahgunduz Jul 29 '14 at 20:54
  • I converted my own code for the project I'm working on and now I can do anything with the buffer. I've already written high/low-cut filters, amplifiers, a 10+ band EQ, crossfade effects, an FFT and a spectrum analyzer, etc. You can access the buffer directly and do whatever you like. But move it to the heap, as the code in the answer works on the stack and it is really slow. – emrahgunduz Jul 29 '14 at 21:07

OK, I tried running the .c and .h code with a simple Java MainActivity that runs both of those functions, in that order, on a button click.

Also, it looks like you need a positionChanged method on the Java side as well. What are you running in there? If I comment out the part with the jmethodID, the music still plays, so that part is working. Is it for seeking?

Finally, maybe I'm just having a bit of trouble understanding it, but which buffer do you do the processing on, and where does it live? Is it outbuffer? If I just wanted to, say, apply an FFT, or more simply a scalar multiplication, to the output sound, would I just apply it to outbuffer before it is played out through the final sink?
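To be concrete, by "scalar multiplication" I just mean something like the following, applied to whichever PCM buffer before it gets enqueued (made-up names):

    #include <stddef.h>
    #include <stdint.h>

    /* Scale a 16-bit PCM buffer in place before enqueueing it on the output player. */
    static void apply_gain(int16_t *buf, size_t samples, float gain) {
        for (size_t i = 0; i < samples; i++) {
            float s = (float)buf[i] * gain;
            if (s >  32767.0f) s =  32767.0f;   /* clip back into 16-bit range */
            if (s < -32768.0f) s = -32768.0f;
            buf[i] = (int16_t)s;
        }
    }

    /* e.g. inside the output buffer-queue callback, just before Enqueue():
       apply_gain(outbuffer, frames_per_buffer * channels, 0.5f); */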

– B.C
  • positionChanged reports the current position of the playing sound, in seconds, from native to Java. You can disable it. It doesn't matter which buffer you do your processing on. – emrahgunduz Jul 31 '14 at 08:04
  • @emrahgunduz Thanks. Your callbacks don't seem to be working for me; I just have a void method of the same name in Java, with the native declaration at the bottom. Maybe something in my environment is messed up. Also, what do you mean by "But move it to heap, as the code in the answer is working on stack and it is really slow" when processing the buffers? I'm not really sure what you mean by that, and how would I "move" it? Do you mean copying it to some other memory space? – B.C Aug 06 '14 at 10:23