I am in the process of writing an app that will do digital signal processing and want to make it as light as possible. One thing that confounded me for a while was what the default stream format values for various devices might be, so that I could avoid unwanted conversions taking place before I receive the data from the buffers. I came across the following link http://club15cc.com/code-snippets/ios-2/get-the-default-output-stream-format-for-an-audio-unit-in-ios which set me on what I believe to be the right path.
I've extended the code from the link to create and activate an AVAudioSession prior to getting the ASBD (AudioStreamBasicDescription) contents; the AudioSession can then be used to request various "Preferred" settings to see what impact they have (a sketch of this is included after the main listing below). I also combined Apple's code for listing the values of an ASBD with the code from the link above.
The code below goes into the ViewController.m file generated by selecting the Single View Application template. Note that you will need to add AudioToolbox.framework and CoreAudio.framework to the Linked Frameworks and Libraries of the project.
#import "ViewController.h"
@import AVFoundation;
@import AudioUnit;
@interface ViewController ()
@end
@implementation ViewController
- (void) printASBD:(AudioStreamBasicDescription)asbd {
    char formatIDString[5];
    UInt32 formatID = CFSwapInt32HostToBig(asbd.mFormatID);
    bcopy(&formatID, formatIDString, 4);
    formatIDString[4] = '\0';

    NSLog(@"  Sample Rate:         %10.0f", asbd.mSampleRate);
    NSLog(@"  Format ID:           %10s",   formatIDString);
    NSLog(@"  Format Flags:        %10X",   (unsigned int)asbd.mFormatFlags);
    NSLog(@"  Bytes per Packet:    %10d",   (unsigned int)asbd.mBytesPerPacket);
    NSLog(@"  Frames per Packet:   %10d",   (unsigned int)asbd.mFramesPerPacket);
    NSLog(@"  Bytes per Frame:     %10d",   (unsigned int)asbd.mBytesPerFrame);
    NSLog(@"  Channels per Frame:  %10d",   (unsigned int)asbd.mChannelsPerFrame);
    NSLog(@"  Bits per Channel:    %10d",   (unsigned int)asbd.mBitsPerChannel);
}
- (void)viewDidLoad
{
    [super viewDidLoad];

    // Get the shared AudioSession, set its category and activate it
    NSError *error = nil;
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    [audioSession setActive:YES error:&error];

    // Then get the RemoteIO AudioUnit and use it to read the default AudioStreamBasicDescription
    AudioUnit remoteIOUnit;
    AudioComponentDescription audioComponentDesc = {0};
    audioComponentDesc.componentType = kAudioUnitType_Output;
    audioComponentDesc.componentSubType = kAudioUnitSubType_RemoteIO;
    audioComponentDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Find and instantiate the RemoteIO component
    AudioComponent audioComponent = AudioComponentFindNext(NULL, &audioComponentDesc);
    AudioComponentInstanceNew(audioComponent, &remoteIOUnit);

    // Read the stream format (the size argument must be a UInt32, not a size_t)
    UInt32 asbdSize = sizeof(AudioStreamBasicDescription);
    AudioStreamBasicDescription asbd = {0};
    AudioUnitGetProperty(remoteIOUnit,
                         kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Output,
                         0,
                         (void *)&asbd,
                         &asbdSize);

    [self printASBD:asbd];
}
@end
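For experimenting with the "Preferred" settings mentioned earlier, the requests belong between setting the category and activating the session. A minimal sketch, assuming you want to try a 48 kHz sample rate and a roughly 5 ms I/O buffer (both values are just examples, and the system is free to grant something different):

NSError *error = nil;
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];

// Request (not demand) a sample rate and I/O buffer duration
[audioSession setPreferredSampleRate:48000.0 error:&error];
[audioSession setPreferredIOBufferDuration:0.005 error:&error];

[audioSession setActive:YES error:&error];

// Compare what was actually granted with what was requested
NSLog(@"Granted sample rate:        %f", audioSession.sampleRate);
NSLog(@"Granted IO buffer duration: %f", audioSession.IOBufferDuration);

Re-running printASBD after this shows whether the granted values are reflected in the RemoteIO unit's default stream format.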
I would be interested in knowing the results people obtain on other actual hardware. Note that the code was built and deployed to iOS 7.1.