
I am writing an audio application on Android which records audio using AudioRecord, and currently I am saving the recorded audio as a .wav file. Now I want to encode this audio data so it is stored in compressed form. For compression, I am using MediaCodec to compress the raw PCM into AAC (in an ADTS container). At the moment I first save the recorded data to a .wav file and then read the entire file back into my encoder class to encode it. I want to skip this intermediate step of saving a .wav file. What I am thinking of doing is to implement some kind of queue: the class that records the audio would store the recorded samples in the queue, and my encoder class would keep polling it. The question is: should I use a Queue provided by the SDK, or should I implement my own queue using arrays? The problem I might face is that the Queue provided by the SDK might introduce overhead and cause me to lose some samples during recording, because the method in which I read the audio data and save it for later use is in a synchronized block:

synchronized (lock) {
        while (mIsRecording) {
            // read the next chunk of PCM samples from the AudioRecord;
            // read() returns the number of shorts actually read
            int samplesRead = mAudioRecord.read(mBuffer, 0, mBufferSize / SHORT_SIZE);

            // append only the samples actually read to the recording data
            System.arraycopy(mBuffer, 0, mRecAudioData, mRecAudioDataIndex, samplesRead);

            // advance the write index for the next copy
            mRecAudioDataIndex += samplesRead;

            processBuffer(mBuffer);
        }
    }

Right now, I am storing the audio data in an array and later writing it to a .wav file. As explained above, I now want to enqueue the data read into mBuffer and consume it from the encoder. How should I implement the queue? Any suggestions?
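If you do go the queue route, the SDK already provides thread-safe blocking queues in java.util.concurrent, so there is little reason to hand-roll one with arrays. Below is a minimal producer/consumer sketch, assuming the recording loop hands off each filled buffer and a separate encoder thread drains the queue; the class and method names are illustrative, not taken from your code. The copy before enqueuing is deliberate, since mBuffer is reused on every iteration of the read loop.

```java
import java.util.Arrays;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class PcmQueueSketch {
    // Bounded queue so a stalled encoder cannot exhaust memory.
    private static final BlockingQueue<short[]> sQueue = new ArrayBlockingQueue<>(64);

    // Called from the recording thread for every filled buffer.
    // Copies the chunk because the caller reuses its buffer.
    static boolean enqueue(short[] buffer, int samplesRead) {
        short[] chunk = Arrays.copyOf(buffer, samplesRead);
        // offer() never blocks the recording thread; it returns false
        // (dropping the chunk) if the encoder has fallen behind.
        return sQueue.offer(chunk);
    }

    // Called from the encoder thread; blocks until data arrives
    // or the timeout expires (returns null on timeout).
    static short[] dequeue() throws InterruptedException {
        return sQueue.poll(100, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        short[] fake = {1, 2, 3, 4};
        enqueue(fake, 4);
        System.out.println(Arrays.toString(dequeue())); // [1, 2, 3, 4]
    }
}
```

Using offer() instead of put() keeps the recording thread from ever blocking on the queue, which matches your concern about losing samples; whether dropped chunks are acceptable is a design choice you would need to make.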

Swapnil
  • Why do you want to store the raw data in a queue? Why not just feed it directly into the MediaCodec encoder? Generally speaking, you want to avoid copying data, and avoid making large allocations frequently. – fadden Feb 03 '16 at 17:06
  • Normally, the encoding will take some time, and hence if I wait for the encoding to finish in the same thread, I might lose some samples from the AudioRecord. So I am planning to run the encoding task in my encoder class in a separate thread. That problem can also arise if I lose time enqueuing data in the recording thread. An additional option would be to run the enqueuing operation in a separate thread as well. What do you think? @fadden – Swapnil Feb 03 '16 at 17:35
  • I think you'll have too many threads. The encoder will be able to keep up with you; if it couldn't, the device wouldn't be able to record movies. You're introducing overhead and slowing everything down by copying data around. The only place I've found where splitting stuff out is useful is when writing encoded data to disk, notably with MediaMuxer. The Horizon Camera site has a nice article about it (http://blog.horizon.camera/post/134263616000/optimizing-mediamuxers-writing-speed). – fadden Feb 03 '16 at 18:28

0 Answers