
I used Apple's SpeakHere sample code. Here is my mRecordFormat setup:

mRecordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
mRecordFormat.mChannelsPerFrame = 1;
mRecordFormat.mBitsPerChannel = 16;
mRecordFormat.mBytesPerPacket = mRecordFormat.mBytesPerFrame = (mRecordFormat.mBitsPerChannel / 8) * mRecordFormat.mChannelsPerFrame;
mRecordFormat.mFramesPerPacket = 1;
mRecordFormat.mSampleRate = 11025.0;

I'm recording 7 seconds, so I expect to receive 7 * 2 * 11025 bytes (or 7 * 11025 shorts) in total. Actually I'm receiving a little more: 154784 bytes instead of 154350 (434 bytes extra). This number varies between runs. Why does it change?

Could someone please explain why I'm getting more bytes than I'm expecting? Am I wrong or missing something?

BTW: I'm recording in .wav format, if that helps.

David V

1 Answer


The AudioQueue is giving you audio chunks of a certain size. The size is chosen to suit the implementation. This explains why you see more (or less) than you expect when you stop the queue.

If you want to record exactly 7 seconds of audio, stop the queue after you have received exactly 7 seconds' worth of samples and discard any leftovers.

Rhythmic Fistman
  • Thanks for the answer. Actually, what I'm trying to do is the following: I'm playing 7 seconds of audio and recording the result in parallel. I stop recording when the audio finishes playing, so I suppose some delay occurs before playback completes. Could you please suggest how I can capture exactly 7 seconds of recording? Thanks in advance. – David V Apr 19 '15 at 11:00