
So I'm using Apple's MixerHost sample code to do a basic audio graph setup for stereo synthesis. I'm having some trouble figuring out how I have to fill the buffer slice. Specifically, I get audio out only in the left channel; the right channel is silent:

AudioUnitSampleType *buffer = (AudioUnitSampleType *)ioData->mBuffers[0].mData;    
SInt16 sampleValue;

for(UInt32 i = 0; i < inNumberFrames; i++)
{
    sampleValue = sinf(inc) * 32767.0f; // generate sine signal
    inc += .08;

    buffer[i] = sampleValue;
}

if(inc > 2e10) inc -= 2e10;

This plays a sine wave on the left channel... The pitch kind of changes every 10 seconds or so, another indicator that I'm doing it wrong :]

I've tried other ways of stepping through the array. This produced interesting sounds that were far from a sine signal. At one point I had glitchy/choppy output on both channels, which was kind of like a success.

If I inspect the AudioBuffer struct, it confirms there are 2 channels, and the byte size per frame is 4. So per frame there are two SInt16 values, right? One for the left and one for the right channel... and they are supposed to be interleaved?

Note that I am using a stream format that is different from Apple's example, because I don't know fixed-point math.

The stream format is set up like so:

size_t bytesPerSample = sizeof (AudioUnitSampleType);

stereoStreamFormat.mFormatID          = kAudioFormatLinearPCM;
stereoStreamFormat.mFormatFlags       = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
stereoStreamFormat.mBytesPerPacket    = bytesPerSample;
stereoStreamFormat.mFramesPerPacket   = 1;
stereoStreamFormat.mBytesPerFrame     = bytesPerSample;
stereoStreamFormat.mChannelsPerFrame  = 2;                    
stereoStreamFormat.mBitsPerChannel    = 8 * bytesPerSample;
stereoStreamFormat.mSampleRate        = graphSampleRate;

So my question is: how do I fill a stereo buffer that is set up like the above with data so that it just works?

thanks!

Jakob

2 Answers


Take a look at Classes/MixerHostAudio.m in the MixerHost example, and scroll down to where they define and assign outSamplesChannelLeft and outSamplesChannelRight. It looks like the API expects left and right samples in different buffers, not interleaved.
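For example, something along these lines (just a sketch: it assumes `ioData` really does arrive with two buffers in that layout, and it reuses the question's `inc` and sine scaling):

AudioUnitSampleType *outLeft  = (AudioUnitSampleType *)ioData->mBuffers[0].mData; // left channel buffer
AudioUnitSampleType *outRight = (AudioUnitSampleType *)ioData->mBuffers[1].mData; // right channel buffer

for (UInt32 i = 0; i < inNumberFrames; i++)
{
    AudioUnitSampleType sampleValue = (AudioUnitSampleType)(sinf(inc) * 32767.0f); // the question's sine
    inc += .08;

    outLeft[i]  = sampleValue; // one sample per frame in each buffer
    outRight[i] = sampleValue; // same signal on the right channel
}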

As for the changing pitch, try

if (inc > M_PI) inc -= 2.0*M_PI;

(or whatever Apple defines in place of M_PI) and do this within the loop, not after filling the whole frame. Floating point error accumulates surprisingly quickly. The correction above uses the fact that sin is periodic over 2*pi. Your correction arbitrarily wraps inc back in, and will cause a glitch at the wrap point if the wrapping isn't phase continuous.
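In other words, something like this inside the per-sample loop (again just a sketch, with the question's `inc` as the phase):

sampleValue = sinf(inc) * 32767.0f; // use the current phase
inc += .08;                         // advance it by one sample
if (inc > M_PI) inc -= 2.0 * M_PI;  // wrap by one full period every sample, before the error can grow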

And finally, it's not clear to me whether your bytesPerSample is 2 or not; you might want to check this. If it is, then I'd guess your other assumption about bytesPerFrame is correct.
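A quick way to check (just a sketch; `stereoStreamFormat` is the struct from the question):

NSLog(@"bytesPerSample = %lu", (unsigned long)sizeof(AudioUnitSampleType));
NSLog(@"mBytesPerFrame = %lu", (unsigned long)stereoStreamFormat.mBytesPerFrame);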

mtrw
  • Regarding the two different buffers: when I inspect the *ioData AudioBufferList, it says the mBuffers array is of size 1. The one AudioBuffer I get has two channels though, is what it says on the box. I reckon this comes from using a different stream format than the MixerHost example, which I took from a different example to get around the fixed-point math: `kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked` instead of `kAudioFormatFlagsAudioUnitCanonical`. – Jakob May 23 '11 at 00:02
  • I'd just leave the format flags alone if I were you. You're using floating point math but casting the result back to integer, so I don't think you need to worry about it. – mtrw May 23 '11 at 00:21
  • Okay, I'm using `kAudioFormatFlagsAudioUnitCanonical` now, which surprisingly works with the cast like you suggested; however, the output volume is quite low for some reason... If I boost the amplitude, the audio starts clipping without an increase in volume. `bytesPerSample` is 4... – Jakob May 23 '11 at 00:59
  • I'm kind of floating here, so just posting what's going on... If I get a void pointer to the buffer like so: `void *buffer = ioData->mBuffers[0].mData;` and then walk through it in the loop like this: `*(AudioUnitSampleType*)buffer = sampleValue; buffer+=8;` I get a stereo signal. No sine signal, more like a pulsing/phasing thing. But this shows that the buffer is indeed interleaved... somehow. Increasing the pointer by 4 yields the same result as going through the array with `[i]`: sine on the left channel. – Jakob May 23 '11 at 00:59
  • If `bytesPerSample` is 4, shouldn't you scale by `2**31`? And if I recall correctly, you declare the number of buffers when you call `AudioQueueAllocateBuffer`. It's been a while since I did any Core Audio stuff though, so this might not be quite right. – mtrw May 23 '11 at 01:17
  • Unfortunately I really don't know how this scaling thing works. If you mean the multiplication by `32767.0f`, I just copypasta'd it from another example... I don't understand the two `*` here... but thanks for your help mtrw! – Jakob May 23 '11 at 01:22
  • The `**` means `to the power of`. If the audio sample is a 2-byte signed integer, it should go from 2**15-1..-2**15 = 32767..-32768. If it's a 4-byte signed integer, it should go from 2**31-1..-2**31 = 2,147,483,647..-2,147,483,648. You can get the appropriate values from `limits.h`, usually. `sin` goes from +1..-1, so if you multiply by 2**15-1 => 32767, you automatically get a result in the range 32767..-32767 (see the sketch after these comments). This is standard practice in converting from floating point to fixed point; you picked a good example! – mtrw May 23 '11 at 01:28
  • Hey, thanks for pointing that out... Okay, if I scale by 2**31 = 2147483648.0, guess the output though... pretty close to white noise! :] *scratches head* – Jakob May 23 '11 at 01:40
  • It works! Using `kAudioFormatFlagsAudioUnitCanonical` indeed generates 2 buffers. The only thing now is the low volume in the audio output... Guess I have to look into fixed-point math a bit... – Jakob May 23 '11 at 02:05
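A minimal sketch of that float-to-integer scaling, assuming plain signed-integer samples (`SHRT_MAX` and `INT_MAX` come from `limits.h`; the 8.24 fixed-point layout that `kAudioFormatFlagsAudioUnitCanonical` uses on iOS has a different full-scale value, which is presumably why scaling by 2**31 sounded like noise and 32767 sounds quiet):

#include <limits.h> // SHRT_MAX, INT_MAX
#include <math.h>   // sin, sinf

SInt16 sample16 = (SInt16)(sinf(inc) * SHRT_MAX);        // 2-byte samples: full scale is ±32767
SInt32 sample32 = (SInt32)(sin(inc) * (double)INT_MAX);  // 4-byte samples: ±2147483647, computed in double to avoid overflow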

You are setting mBytesPerFrame = bytesPerSample. This only allows one sample per frame. For interleaved stereo (i.e. mChannelsPerFrame = 2) you need two samples per frame. Try setting mBytesPerFrame = 2 * bytesPerSample.
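For reference, a sketch of the question's format with that change (the matching mBytesPerPacket adjustment is my assumption, following from mFramesPerPacket = 1, i.e. one frame per packet):

size_t bytesPerSample = sizeof (AudioUnitSampleType);

stereoStreamFormat.mFormatID          = kAudioFormatLinearPCM;
stereoStreamFormat.mFormatFlags       = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
stereoStreamFormat.mFramesPerPacket   = 1;
stereoStreamFormat.mChannelsPerFrame  = 2;
stereoStreamFormat.mBitsPerChannel    = 8 * bytesPerSample;
stereoStreamFormat.mBytesPerFrame     = 2 * bytesPerSample; // two interleaved samples per frame
stereoStreamFormat.mBytesPerPacket    = 2 * bytesPerSample; // one frame per packet
stereoStreamFormat.mSampleRate        = graphSampleRate;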

In your render function:

UInt32 numOfSamples = 2 * inNumberFrames;     // two interleaved samples (L, R) per frame
for (UInt32 i = 0; i < numOfSamples; i += 2) {
    SInt16 sampleValue = (SInt16)(sinf(inc) * 32767.0f); // e.g. the question's sine
    inc += .08;
    buffer[i]     = sampleValue;              // left channel value
    buffer[i + 1] = sampleValue;              // right channel value
}