I've got an AudioBuffer whose void *mData is full of freshly rendered audio samples from Apple's Core Audio Audio Unit API, but I'm having trouble figuring out the format of those samples. The ASBD of said buffer is as follows:
Float64 mSampleRate 44100
UInt32 mFormatID 1819304813
UInt32 mFormatFlags 41
UInt32 mBytesPerPacket 4
UInt32 mFramesPerPacket 1
UInt32 mBytesPerFrame 4
UInt32 mChannelsPerFrame 2
UInt32 mBitsPerChannel 32
UInt32 mReserved 0
I got this by debugging the application and executing an AudioUnitGetProperty(rioUnit, kAudioUnitProperty_StreamFormat, ...) call. The mFormatID value 1819304813 is the four-character code 'lpcm', so this is linear PCM. As for mFormatFlags, I don't know of a formal decode method; I just tried combinations of the kAudioFormatFlag constants until they summed to 41, which implies:
kAudioFormatFlagIsNonInterleaved | kAudioFormatFlagIsPacked | kAudioFormatFlagIsFloat
Which type should I cast the buffer's data to? I've already tried Float32 and SInt32, but neither is right.
I intend to convert the samples to SInt16 afterwards, but I can't do that until I know the format they're in now.
Thanks in advance.