I am writing an audio application in Android which records audio using AudioRecord, and currently I save the recorded audio as a wav file. Now I want to encode this audio data so it is stored in compressed form. For compression I am using MediaCodec to encode the raw PCM samples to AAC (in an ADTS container). At the moment I first save the recorded data to a .wav file, then read the entire file into my encoder class and encode it. I want to skip this intermediate step of saving the file as .wav.

So what I am thinking of doing is to implement some kind of queue in the class that records the audio, store the recorded samples in it, and keep polling this queue from my encoder class. The question is: should I use one of the Queue implementations provided by the SDK, or should I implement my own queue using arrays? My worry with an SDK Queue is the possible overhead, which might cause me to lose samples during recording, because the method in which I read the audio data and save it for later use runs inside a synchronized block:
synchronized (lock) {
    while (mIsRecording) {
        // read() blocks until mBuffer is filled and returns the number of
        // shorts actually read, which can be less than requested
        int readCount = mAudioRecord.read(mBuffer, 0, mBufferSize / SHORT_SIZE);
        if (readCount < 0) {
            break; // e.g. AudioRecord.ERROR_INVALID_OPERATION
        }
        // append only the samples that were actually read
        System.arraycopy(mBuffer, 0, mRecAudioData, mRecAudioDataIndex, readCount);
        // advance the write position for the next copy
        mRecAudioDataIndex += readCount;
        processBuffer(mBuffer);
    }
}
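For what it's worth, here is a minimal sketch of the producer side I am considering, based on an ArrayBlockingQueue from java.util.concurrent rather than a hand-rolled array queue (the field name mAudioQueue and the capacity of 64 buffers are just my assumptions):

import java.util.Arrays;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// field in the recorder class, shared with the encoder thread; a bounded
// capacity gives backpressure instead of unbounded memory growth
private final BlockingQueue<short[]> mAudioQueue = new ArrayBlockingQueue<>(64);

// inside the recording loop shown above, instead of the System.arraycopy:
short[] chunk = Arrays.copyOf(mBuffer, readCount); // copy: mBuffer is reused by the next read()
if (!mAudioQueue.offer(chunk)) {
    // queue full: the encoder is too slow; drop (or count) this chunk
}

I deliberately use offer() here rather than put(): put() would block the recording thread when the queue is full, which is exactly the sample-losing scenario I am afraid of, since read() must be called often enough that AudioRecord's internal buffer never overflows.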
Right now I am saving the audio data in an array so I can write it to a wav file later. As explained above, I now want to enqueue each read buffer (mBuffer) into a queue and consume it from the encoder. How should I implement the queue? Any suggestions?
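On the consumer side, I imagine the encoder class could drain the same queue roughly like this (a sketch only: the format values of 44100 Hz mono at 64 kbps are assumptions, exception handling is omitted, and draining the output buffers plus ADTS packaging is left out):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.concurrent.TimeUnit;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;

// one-time AAC encoder setup; match these values to the AudioRecord config
MediaFormat format = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, 44100, 1);
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
format.setInteger(MediaFormat.KEY_BIT_RATE, 64000);
MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC);
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
encoder.start();

long totalSamples = 0;
while (mIsRecording || !mAudioQueue.isEmpty()) {
    // poll with a timeout so the loop can re-check mIsRecording
    short[] chunk = mAudioQueue.poll(100, TimeUnit.MILLISECONDS);
    if (chunk == null) continue;
    int inIndex = encoder.dequeueInputBuffer(10_000 /* us */);
    if (inIndex >= 0) {
        // assumes the codec input buffer is large enough for one chunk
        ByteBuffer in = encoder.getInputBuffer(inIndex); // API 21+
        in.order(ByteOrder.nativeOrder()).asShortBuffer().put(chunk);
        long ptsUs = totalSamples * 1_000_000L / 44100;
        encoder.queueInputBuffer(inIndex, 0, chunk.length * 2, ptsUs, 0);
        totalSamples += chunk.length;
    }
    // ... dequeueOutputBuffer(), prepend an ADTS header to each encoded
    // frame, and write it to the output file (omitted)
}

(mIsRecording would need to be volatile, since it is read from both threads.) Would something like this be a reasonable design, or is a hand-written array-based ring buffer worth the extra code here?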