
I've been trawling the internet for ages trying to find the cause of this error, but I'm stuck. I've been following the Apple Developer documentation for using Audio Queue Services to record audio, and I keep getting this error whatever I do.

I can record audio fine using AVAudioRecorder into any format, but my end goal is to obtain a normalised array of floats from the input data so that I can apply an FFT to it (sorry for the noob phrasing, I'm very new to audio programming).

Here's my code:

- (void)beginRecording
{
    // Initialise session
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
    [[AVAudioSession sharedInstance] setActive:YES error:nil];

    // Describe the capture format: 16-bit signed, mono, linear PCM at 8 kHz
    state.dataFormat.mFormatID = kAudioFormatLinearPCM;
    state.dataFormat.mSampleRate = 8000.0f;
    state.dataFormat.mChannelsPerFrame = 1;
    state.dataFormat.mBitsPerChannel = 16;
    state.dataFormat.mBytesPerPacket = state.dataFormat.mChannelsPerFrame * sizeof(SInt16);
    state.dataFormat.mFramesPerPacket = 1;

    //AudioFileTypeID fileID = kAudioFileAIFFType;

    state.dataFormat.mFormatFlags = kLinearPCMFormatFlagIsBigEndian | kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;

    OSStatus err = AudioQueueNewInput(&state.dataFormat, handleInputBuffer, &state, CFRunLoopGetMain(), kCFRunLoopCommonModes, 0, &state.queue);
    printf("%i", err); // this is always -50 i.e. invalid parameters error

    // Work out a buffer size that holds roughly half a second of audio
    deriveBufferSize(state.queue, state.dataFormat, 0.5, &state.bufferByteState);

    // Allocate the capture buffers and enqueue them ready for recording
    for (int i = 0; i < kNumberOfBuffers; i++) {
        AudioQueueAllocateBuffer(state.queue, state.bufferByteState, &state.buffers[i]);
        AudioQueueEnqueueBuffer(state.queue, state.buffers[i], 0, NULL);
    }

    state.currentPacket = 0;
    state.isRunning = YES;

    AudioQueueStart(state.queue, NULL);
}

- (void)endRecording
{
    AudioQueueStop(state.queue, YES);
    state.isRunning = NO;

    AudioQueueDispose(state.queue, YES);

    // Close the audio file here...
}

#pragma mark - CoreAudio

// Core Audio Callback Function
static void handleInputBuffer(void *agData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer, const AudioTimeStamp *inStartTime, UInt32 inNumPackets, const AudioStreamPacketDescription *inPacketDesc) {

    AQRecorderState *state = (AQRecorderState *)agData;

    if (inNumPackets == 0 && state->dataFormat.mBytesPerPacket != 0) {
        inNumPackets = inBuffer->mAudioDataByteSize / state->dataFormat.mBytesPerPacket;
    }

    printf("Called");

    /*
    if (AudioFileWritePackets(state->audioFile, false, inBuffer->mAudioDataByteSize, inPacketDesc, state->currentPacket, &inNumPackets, inBuffer->mAudioData) == noErr) {
        state->currentPacket += inNumPackets;
    }
     */

    if (state->isRunning) {
        AudioQueueEnqueueBuffer(state->queue, inBuffer, 0, NULL);
    }
}

void deriveBufferSize(AudioQueueRef audioQueue, AudioStreamBasicDescription ABSDescription, Float64 secs, UInt32 *outBufferSize) {

    static const int maxBufferSize = 0x50000;

    int maxPacketSize = ABSDescription.mBytesPerPacket;
    // VBR formats have no fixed packet size, so query the queue for its maximum
    if (maxPacketSize == 0) {
        UInt32 maxVBRPacketSize = sizeof(maxPacketSize);
        AudioQueueGetProperty(audioQueue, kAudioConverterPropertyMaximumOutputPacketSize, &maxPacketSize, &maxVBRPacketSize);
    }

    Float64 numBytesForTime = ABSDescription.mSampleRate * maxPacketSize * secs;
    UInt32 x = (numBytesForTime < maxBufferSize ? numBytesForTime : maxBufferSize);
    *outBufferSize = x;
}

If anyone knows what's going on here I'd be very grateful. Here are the Apple docs for the error.

Rob Sanders
  • Post the console output for the error; it will make it easier to help. – Cliff Ribaudo May 23 '15 at 13:26
  • it's -50 as in the question. I.e. `errSecParam` [here be docs](https://developer.apple.com/library/ios/documentation/Security/Reference/keychainservices/index.html#//apple_ref/c/econst/errSecParam) – Rob Sanders May 23 '15 at 13:27

1 Answer


You are getting a -50 (`kAudio_ParamError`) because you haven't initialised the `AudioStreamBasicDescription`'s `mBytesPerFrame` field:

asbd.mBytesPerFrame = asbd.mFramesPerPacket*asbd.mBytesPerPacket;

where `asbd` is short for `state.dataFormat`. In your case `mBytesPerFrame` = 2 (one mono frame holding a single 16-bit sample).
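
For completeness, a short sketch of how the layout fields relate for this format, using the field names from the question (for interleaved, uncompressed PCM they all follow from the channel count and sample size):

// 1 channel * 2 bytes per 16-bit sample = 2 bytes per frame
state.dataFormat.mBytesPerFrame = state.dataFormat.mChannelsPerFrame * sizeof(SInt16);
// uncompressed PCM has 1 frame per packet, so packet size equals frame size
state.dataFormat.mBytesPerPacket = state.dataFormat.mBytesPerFrame * state.dataFormat.mFramesPerPacket;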

I also wouldn't specify `kLinearPCMFormatFlagIsBigEndian`; let the recorder return samples in the native byte order.
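
For example (a sketch; `kAudioFormatFlagsNativeEndian` resolves to the correct endianness flag for the target, and is 0 on little-endian iOS hardware):

state.dataFormat.mFormatFlags = kAudioFormatFlagsNativeEndian
                              | kLinearPCMFormatFlagIsSignedInteger
                              | kLinearPCMFormatFlagIsPacked;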

Rhythmic Fistman
  • Thanks so much. A bit of a follow-up: I'm trying to do an FFT to get the average Hz of the entire (~5 second) recording. Weirdly the result is almost always a B (250 Hz) no matter what note is recorded. I think the error stems from my calculation of the frame size. I'm assuming `mBytesPerFrame` is the frame size; is this usually measured in bits? I.e. in this case 32 bits, because I have a sample depth of 16 bits. – Rob Sanders May 25 '15 at 11:00
  • `mBytesPerFrame` is measured in bytes. Do you have another question for the FFT problem? That sounds interesting. – Rhythmic Fistman May 25 '15 at 11:38
  • Yes! I'm following the code [here](https://github.com/krafter/DetectingAudioFrequency/tree/master/DetectingSoundFrequency) as a way of getting started. It uses Apple's Accelerate framework to do all the FFT calculations. Originally I was recording a sound using `AVAudioRecorder`, then reading the resultant file into an `NSData`, getting a `UInt16` array from the data, normalising it into a `float` array, and then calculating the FFT of the entire array. Occasionally it would come out with a close reading, but more often than not the Hz would be the same regardless of the note. – Rob Sanders May 25 '15 at 11:44
  • Now I thought I'd try to access the `float` array using Core Audio (bit stuck there atm) and FFT the resultant data in chunks, but I haven't got that far yet. Any pointers? – Rob Sanders May 25 '15 at 11:46
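
A minimal sketch of the conversion step being discussed in these comments, assuming the native-endian 16-bit mono format from the answer; the `samples` and `floats` names are illustrative, and the FFT (e.g. via Accelerate) would consume `floats` afterwards:

// In handleInputBuffer: view the queue buffer's raw bytes as 16-bit samples
SInt16 *samples = (SInt16 *)inBuffer->mAudioData;
UInt32 count = inBuffer->mAudioDataByteSize / sizeof(SInt16);

// Normalise each sample from [-32768, 32767] into [-1.0, 1.0]
float *floats = malloc(count * sizeof(float));
for (UInt32 i = 0; i < count; i++) {
    floats[i] = samples[i] / 32768.0f;
}
// ... run the FFT over floats here, then free(floats)

If the per-sample loop ever matters, Accelerate's `vDSP_vflt16` does the integer-to-float conversion in one call and `vDSP_vsdiv` can apply the 1/32768 scaling.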