
The title pretty much sums up what I'm trying to achieve: I am trying to read from Michael Tyson's TPCircularBuffer inside a render callback while the buffer is being filled with incoming audio data. I want to send the audio from the render callback to the output element of the RemoteIO audio unit so I can hear it through the device speakers.

The audio is interleaved 16-bit stereo arriving in packets of 2048 frames. Here's how I've set up my audio session and RemoteIO unit:

#define kInputBus 1
#define kOutputBus 0
NSError *err = nil;
NSTimeInterval ioBufferDuration = 46;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayback withOptions:AVAudioSessionCategoryOptionMixWithOthers error:&err];
[session setPreferredIOBufferDuration:ioBufferDuration error:&err];
[session setActive:YES error:&err];
AudioComponentDescription defaultOutputDescription;
defaultOutputDescription.componentType = kAudioUnitType_Output;
defaultOutputDescription.componentSubType = kAudioUnitSubType_RemoteIO;
defaultOutputDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
defaultOutputDescription.componentFlags = 0;
defaultOutputDescription.componentFlagsMask = 0;

AudioComponent defaultOutput = AudioComponentFindNext(NULL, &defaultOutputDescription);
NSAssert(defaultOutput, @"Can't find default output.");

AudioComponentInstanceNew(defaultOutput, &remoteIOUnit);
UInt32 flag = 0;

OSStatus status = AudioUnitSetProperty(remoteIOUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, kOutputBus, &flag, sizeof(flag));
size_t bytesPerSample = sizeof(AudioUnitSampleType);
AudioStreamBasicDescription streamFormat = {0};
streamFormat.mSampleRate = 44100.00;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags = kAudioFormatFlagsCanonical;
streamFormat.mBytesPerPacket = bytesPerSample;
streamFormat.mFramesPerPacket = 1;
streamFormat.mBytesPerFrame = bytesPerSample;
streamFormat.mChannelsPerFrame = 2;
streamFormat.mBitsPerChannel = bytesPerSample * 8;
streamFormat.mReserved = 0;

status = AudioUnitSetProperty(remoteIOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, kInputBus, &streamFormat, sizeof(streamFormat));

AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = render;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(remoteIOUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Global, kOutputBus, &callbackStruct, sizeof(callbackStruct));

And here's where the audio data gets loaded into the circular buffer and used in the render callback:

#define kBufferLength 2048
-(void)loadBytes:(Byte *)byteArrPtr{
    TPCircularBufferProduceBytes(&buffer, byteArrPtr, kBufferLength);
}

OSStatus render(
                void *inRefCon,
                AudioUnitRenderActionFlags *ioActionFlags,
                const AudioTimeStamp *inTimeStamp,
                UInt32 inBusNumber,
                UInt32 inNumberFrames,
                AudioBufferList *ioData)
{
    AUDIOIO *audio = (__bridge AUDIOIO *)inRefCon;
    AudioSampleType *outSample = (AudioSampleType *)ioData->mBuffers[0].mData;
    // Zero outSample
    memset(outSample, 0, kBufferLength);
    int bytesToCopy = ioData->mBuffers[0].mDataByteSize;
    SInt16 *targetBuffer = (SInt16 *)ioData->mBuffers[0].mData;
    // Pull audio from the circular buffer
    int32_t availableBytes;
    SInt16 *buffer = TPCircularBufferTail(&audio->buffer, &availableBytes);
    memcpy(targetBuffer, buffer, MIN(bytesToCopy, availableBytes));
    TPCircularBufferConsume(&audio->buffer, MIN(bytesToCopy, availableBytes));
    return noErr;
}

Something is wrong with this setup, because I am not getting any audio through the speakers, but I'm also not getting any errors when I test on my device. As far as I can tell, the TPCircularBuffer is being filled and read from correctly, and I've followed the Apple documentation for setting up the audio session. I'm considering setting up an AUGraph next, but I'd like to see if anyone can suggest a solution for what I'm trying to do here first. Thanks!

user3080284
  • Is your audio unit play callback getting called? (NSLog or breakpoint). Is there any data in your circular buffer when it is called? If so, is that data non-zero? – hotpaw2 Jan 11 '14 at 19:46
  • Are you talking about the render callback? I know that is being called because I've previously had EXC_BAD_ACCESS errors from within the render callback on the memcpy() line. It appears as though the circular buffer is being filled, but in some instances when I try to NSLog the contents of the buffer it comes up empty, usually right after launch. This could be due to how I've structured the app to receive the audio data. Should I be filling the buffer with zeroes for silence at initialization so it has something in there? – user3080284 Jan 11 '14 at 19:57
  • Or not start the audio unit until there is enough information pre-buffered that it won't underflow. You also need to decide what sound to produce during underflow conditions, as you can't stall or wait in the audio callback. – hotpaw2 Jan 11 '14 at 20:00
  • How would I do that? I've tried wrapping AudioUnitInitialize() with a check like this: if(&buffer != nil){ status = AudioUnitInitialize(remoteIOUnit); } I've also tried doing a check for buffer content in loadBytes: if(byteArrPtr != 0){ TPCircularBufferProduceBytes(&buffer, byteArrPtr, kBufferLength); } and I've tried creating a dummy byte array of zeroes to initialize with: Byte* firstBytes = 0; [audioController loadBytes:firstBytes]; I'm alright with silence during underflow. – user3080284 Jan 11 '14 at 20:17
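
A minimal sketch of the pre-buffering gate hotpaw2 describes, assuming a hypothetical threshold of a few packets (kPreBufferBytes and startIfPreBuffered are made-up names, not from this thread):

#define kPreBufferBytes (4 * 2048)  // assumed headroom: a few 2048-byte packets

-(void)startIfPreBuffered {
    int32_t availableBytes;
    TPCircularBufferTail(&buffer, &availableBytes);
    // Only start the RemoteIO unit once enough audio is queued to
    // survive the first few render callbacks without underflowing.
    if (availableBytes >= kPreBufferBytes) {
        OSStatus status = AudioOutputUnitStart(remoteIOUnit);
        NSLog(@"AudioOutputUnitStart: %d", (int)status);
    }
}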

3 Answers


For stereo (2 channels per frame), your bytes per frame and bytes per packet have to be twice your per-channel sample size in bytes. Likewise, bits per channel has to match your per-channel sample size in bits.

Added: If availableBytes/yourFrameSize isn't almost always at least as large as inNumberFrames, you won't get much continuous sound.
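
A minimal sketch of that guard, reusing the question's AUDIOIO type and buffer names: copy only as many bytes as the ring buffer actually holds and zero-fill the rest, so an underflow plays silence instead of stale data.

OSStatus render(void *inRefCon,
                AudioUnitRenderActionFlags *ioActionFlags,
                const AudioTimeStamp *inTimeStamp,
                UInt32 inBusNumber,
                UInt32 inNumberFrames,
                AudioBufferList *ioData)
{
    AUDIOIO *audio = (__bridge AUDIOIO *)inRefCon;
    SInt16 *target = (SInt16 *)ioData->mBuffers[0].mData;
    UInt32 bytesNeeded = ioData->mBuffers[0].mDataByteSize;

    int32_t availableBytes;
    SInt16 *source = TPCircularBufferTail(&audio->buffer, &availableBytes);
    UInt32 bytesToCopy = MIN(bytesNeeded, (UInt32)availableBytes);

    // Copy what is available, then zero-fill the remainder so an
    // underflow comes out as silence rather than leftover data.
    memcpy(target, source, bytesToCopy);
    memset((Byte *)target + bytesToCopy, 0, bytesNeeded - bytesToCopy);

    TPCircularBufferConsume(&audio->buffer, bytesToCopy);
    return noErr;
}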

hotpaw2
  • Thanks for your reply. I've doubled those values for stereo which seems to have gotten rid of the unsupported data format error, but I'm still unable to get audio to play through the speakers. Can you spot any other potential issues? I feel like I have tried every possible combination of format configurations... – user3080284 Jan 11 '14 at 19:20

At a glance, it looks like you've got everything set up correctly. You're missing a call to AudioOutputUnitStart() though:

...
// returns an OSStatus indicating success / fail
AudioOutputUnitStart(remoteIOUnit);

// now your callback should be being called
...
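
When a call like this does fail, the OSStatus is often a packed four-char code; a small helper along these lines (FourCC is a made-up name, not an Apple API) makes it readable, e.g. 1718449215 decodes to 'fmt?' (kAudioFormatUnsupportedDataFormatError):

static NSString *FourCC(OSStatus status) {
    // Reinterpret the 32-bit status as four big-endian ASCII characters.
    UInt32 code = CFSwapInt32HostToBig((UInt32)status);
    return [[NSString alloc] initWithBytes:&code
                                    length:4
                                  encoding:NSASCIIStringEncoding];
}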
admsyn
  • Hey, thanks for the reply. I actually do call AudioOutputUnitStart() in my code right after setting up the audio session, I just forgot to include it in the post. I checked the log for the OSStatus result and I am getting error 1718449215, which apparently corresponds to kAudioFormatUnsupportedDataFormatError. This makes me think I have set up my ASBD incorrectly... I suspect it might have something to do with the ASBD's mFormatFlags, but I'm not sure what else to try there for a stereo interleaved 16-bit format. – user3080284 Jan 10 '14 at 16:40

I believe one of your problems is the line streamFormat.mBitsPerChannel = bytesPerSample * 8;

You assign bytesPerSample to be sizeof(AudioUnitSampleType), which is 4 bytes.

So streamFormat.mBytesPerPacket = bytesPerSample; is OK, but the assignment streamFormat.mBitsPerChannel = bytesPerSample * 8; says that you want 32 bits per sample instead of 16 bits per sample.

I would not base your audio format on AudioUnitSampleType, because it has nothing to do with the format you actually want to use. I would create defines and do something like this:

#define BITS_PER_CHANNEL 16
#define SAMPLE_RATE 44100.0
#define CHANNELS_PER_FRAME 2
#define BYTES_PER_FRAME (CHANNELS_PER_FRAME * (BITS_PER_CHANNEL / 8))  // ie 4
#define FRAMES_PER_PACKET 1
#define BYTES_PER_PACKET (FRAMES_PER_PACKET * BYTES_PER_FRAME)

    streamFormat.mSampleRate = SAMPLE_RATE;               // 44100.0
    streamFormat.mBitsPerChannel = BITS_PER_CHANNEL;      // 16
    streamFormat.mChannelsPerFrame = CHANNELS_PER_FRAME;  // 2
    streamFormat.mFramesPerPacket = FRAMES_PER_PACKET;    // 1
    streamFormat.mBytesPerFrame = BYTES_PER_FRAME;        // 4 total: 2 for left ch, 2 for right ch
    streamFormat.mBytesPerPacket = BYTES_PER_PACKET;
    streamFormat.mReserved = 0;
    streamFormat.mFormatID = kAudioFormatLinearPCM;       // double check this also
    streamFormat.mFormatFlags = kAudioFormatFlagsCanonical;

You also need to look at the return values set to err and status immediately after each call that sets them. You still need to add error checking at some of the other calls as well, such as

checkMyReturnValueToo = AudioComponentInstanceNew(defaultOutput, &remoteIOUnit);
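
For instance, a small helper along these lines (CheckStatus is a made-up name) keeps the checks readable:

static void CheckStatus(OSStatus status, const char *operation) {
    // Log any non-zero Core Audio result along with the call that produced it.
    if (status != noErr) {
        NSLog(@"%s failed (OSStatus %d)", operation, (int)status);
    }
}

CheckStatus(AudioComponentInstanceNew(defaultOutput, &remoteIOUnit),
            "AudioComponentInstanceNew");
CheckStatus(AudioUnitInitialize(remoteIOUnit), "AudioUnitInitialize");
CheckStatus(AudioOutputUnitStart(remoteIOUnit), "AudioOutputUnitStart");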

You also have an extremely high value for your buffer duration. You have 46, and I am not sure where that came from. setPreferredIOBufferDuration: takes seconds, so that asks for 46 seconds' worth of audio during each audio callback. Usually you want something well under one second, depending on your latency requirements. iOS most likely will not use anything that high, but you should try setting it to, say, 0.025 (25 ms). You can lower it further if you need lower latency.
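
A sketch of requesting a more typical duration and reading back what the session actually granted (the preferred value is only a hint to the OS):

NSError *err = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setPreferredIOBufferDuration:0.025 error:&err];  // 25 ms, in seconds
NSLog(@"granted IO buffer duration: %f s (err: %@)",
      session.IOBufferDuration, err);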

jaybers
  • Thanks for your reply! I've structured my ASBD the way you've described, but still not getting anything from the speakers. I've checked the return values of AudioComponentInstanceNew() and AudioUnitInitialize() and they both return 0... – user3080284 Jan 11 '14 at 20:00
  • You need to also check the results of the other calls: `[session setCategory:...]`, `[session setPreferredIOBufferDuration:...]`, `[session setActive:YES error:&err]`, and the `AudioUnitSetProperty(remoteIOUnit, ...)` calls. – jaybers Jan 11 '14 at 21:02
  • I did an OSStatus check on those and they all return 1, so they appear okay – user3080284 Jan 11 '14 at 21:44
  • If it is returning 1, it fails. It needs to return 0. – jaybers Jan 11 '14 at 21:48
  • Did you change the buffer duration and set a different value instead of 46? – jaybers Jan 11 '14 at 21:49
  • I thought NSTimeInterval was in ms. I tried 0.046 though, but there was no change. – user3080284 Jan 12 '14 at 18:33