
First of all, I am a newbie in C and Objective-C.

I am trying to FFT a buffer of audio and plot its graph. I use an Audio Unit render callback to get the audio buffer. The callback delivers 512 frames, but every sample after the first 471 is 0. (I don't know whether this is normal. It used to deliver 471 frames full of numbers, but now it somehow delivers 512 frames that are 0 after the first 471. Please let me know if this is normal.)
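(To check this, one thing I can do is count the zero samples at the tail of each render buffer. A quick diagnostic sketch; the helper name countTrailingZeros is made up, not part of any API:)

static UInt32 countTrailingZeros(const Float32 *buf, UInt32 n)
{
    // Walk backwards from the end until a non-zero sample is found.
    UInt32 zeros = 0;
    while (zeros < n && buf[n - 1 - zeros] == 0.0f) {
        zeros++;
    }
    return zeros;
}

(Logging the result from the render callback for a few callbacks, e.g. with NSLog for a one-off test, would confirm whether exactly the last 41 of the 512 samples are zero each time.)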

Anyway, I can get the buffer from the callback, apply the FFT, and draw it. This works perfectly, and the outcome is below. The graph is very smooth as long as I process the buffer from each callback on its own.

[Image: smooth FFT plot from a single callback buffer]
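(I am not including my FilterData_rawSamples: and CalculateFFTwithPlotting_Data: methods here. For reference, a generic real-input magnitude FFT with vDSP, which is roughly what mine does, looks something like the sketch below. This is not my exact code, and it assumes n is a power of two.)

#include <Accelerate/Accelerate.h>
#include <math.h>

// Generic sketch: squared magnitudes of an n-point real FFT.
// In real code, create the FFTSetup once and reuse it.
static void computeMagnitudes(const Float32 *samples, vDSP_Length n, Float32 *magsOut)
{
    vDSP_Length log2n = (vDSP_Length)round(log2((double)n));
    FFTSetup setup = vDSP_create_fftsetup(log2n, kFFTRadix2);

    Float32 realp[n / 2], imagp[n / 2];
    DSPSplitComplex split = { realp, imagp };

    // Pack the real signal into even/odd split-complex form.
    vDSP_ctoz((const DSPComplex *)samples, 2, &split, 1, n / 2);

    // In-place forward FFT of the real signal.
    vDSP_fft_zrip(setup, &split, 1, log2n, FFT_FORWARD);

    // Squared magnitude of each of the n/2 bins. (vDSP's zrip output is
    // scaled, and imagp[0] holds the Nyquist value with this packing --
    // both fine to ignore for a relative magnitude plot.)
    vDSP_zvmags(&split, 1, magsOut, 1, n / 2);

    vDSP_destroy_fftsetup(setup);
}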

But in my case I need 3 seconds of buffer in order to apply the FFT and draw. So I try to concatenate the buffers from two callbacks and then apply the FFT and draw. The result is not what I expect: while the plot above is very smooth and precise during recording (only the magnitudes at 18 and 19 kHz change), when I concatenate the two buffers, the simulator mainly displays two different views that swap between each other very quickly. They are shown below. They do basically show 18 and 19 kHz, but I need precise frequencies so I can apply further algorithms in the app I am working on.

[Images: two alternating, unstable FFT plots of the concatenated buffers]
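(For context on the precision: the bin width of an N-point FFT is sampleRate / N, so a 1024-point FFT at 44.1 kHz resolves only about 43 Hz per bin, which is why I want about 3 seconds of samples. A trivial illustration; the helper name fftBinWidthHz is made up:)

// Made-up helper: width of one FFT bin in Hz.
// 44100.0 / 1024   ->  ~43.07 Hz per bin
// 44100.0 / 131072 ->  ~0.34 Hz per bin (about 3 s of audio, padded to a power of two)
static inline double fftBinWidthHz(double sampleRate, unsigned long fftLength)
{
    return sampleRate / (double)fftLength;
}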

And here is my code in the callback:

//FFTInputBufferLen and FFTInputBufferFrameIndex are global
//tempFilteredBuffer is also allocated globally

//by the way, FFTInputBufferLen = 1024;

static OSStatus performRender (void                         *inRefCon,
                               AudioUnitRenderActionFlags   *ioActionFlags,
                               const AudioTimeStamp         *inTimeStamp,
                               UInt32                       inBusNumber,
                               UInt32                       inNumberFrames,
                               AudioBufferList              *ioData)
{
    UInt32 bus1 = 1;
    CheckError(AudioUnitRender(effectState.rioUnit,
                               ioActionFlags,
                               inTimeStamp,
                               bus1,
                               inNumberFrames,
                               ioData), "Couldn't render from RemoteIO unit");


    Float32 *renderBuff = ioData->mBuffers[0].mData;

    ViewController *vc = (__bridge ViewController *) inRefCon;

    // inNumberFrames comes in as 512, as described above
    for (int i = 0; i < inNumberFrames; i++)
    {
        //I defined InputBuffers[5] globally:
        //5 Float32* buffers, each allocated globally

        InputBuffers[bufferCount][FFTInputBufferFrameIndex] = renderBuff[i];
        FFTInputBufferFrameIndex++;

        if(FFTInputBufferFrameIndex == FFTInputBufferLen)
        {
            int bufCount = bufferCount;

            dispatch_async( dispatch_get_main_queue(), ^{

                tempFilteredBuffer = [vc FilterData_rawSamples:InputBuffers[bufCount] numSamples:FFTInputBufferLen];
                [vc CalculateFFTwithPlotting_Data:tempFilteredBuffer NumberofSamples:FFTInputBufferLen ];

                free(InputBuffers[bufCount]);
                InputBuffers[bufCount] = (Float32*)malloc(sizeof(Float32) * FFTInputBufferLen);
            });

            FFTInputBufferFrameIndex = 0;
            bufferCount++;
            if (bufferCount == 5)
            {
                bufferCount = 0;
            }
        }

    }

    return noErr;
}

Here is my AudioUnit setup:

- (void)setupIOUnit
{
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;

    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    CheckError(AudioComponentInstanceNew(comp, &_rioUnit), "couldn't create a new instance of AURemoteIO");

    UInt32 one = 1;
    CheckError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &one, sizeof(one)), "could not enable input on AURemoteIO");

    // I removed this so that the recorded audio is not played back on the speakers! Am I right?
    //CheckError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &one, sizeof(one)), "could not enable output on AURemoteIO");

    UInt32 maxFramesPerSlice = 4096;
    CheckError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, sizeof(UInt32)), "couldn't set max frames per slice on AURemoteIO");

    UInt32 propSize = sizeof(UInt32);
    CheckError(AudioUnitGetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, &propSize), "couldn't get max frames per slice on AURemoteIO");

    AudioUnitElement bus1 = 1;

    AudioStreamBasicDescription myASBD;

    myASBD.mSampleRate = 44100;
    myASBD.mChannelsPerFrame = 1;

    myASBD.mFormatID = kAudioFormatLinearPCM;
    myASBD.mBytesPerFrame = sizeof(Float32) * myASBD.mChannelsPerFrame;
    myASBD.mFramesPerPacket = 1;
    myASBD.mBytesPerPacket = myASBD.mFramesPerPacket * myASBD.mBytesPerFrame;
    myASBD.mBitsPerChannel = sizeof(Float32) * 8;
    // 9 | 12 == kAudioFormatFlagIsFloat | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked
    myASBD.mFormatFlags = 9 | 12;

    // I also removed this to avoid getting the audio played back!!
    //    CheckError(AudioUnitSetProperty (_rioUnit,
    //                                     kAudioUnitProperty_StreamFormat,
    //                                     kAudioUnitScope_Input,
    //                                     bus0,
    //                                     &myASBD,
    //                                     sizeof (myASBD)), "Couldn't set ASBD for RIO on input scope / bus 0");

    CheckError(AudioUnitSetProperty (_rioUnit,
                                     kAudioUnitProperty_StreamFormat,
                                     kAudioUnitScope_Output,
                                     bus1,
                                     &myASBD,
                                     sizeof (myASBD)), "Couldn't set ASBD for RIO on output scope / bus 1");

    effectState.rioUnit = _rioUnit;

    AURenderCallbackStruct renderCallback;
    renderCallback.inputProc = performRender;
    renderCallback.inputProcRefCon = (__bridge void *)(self);
    CheckError(AudioUnitSetProperty(_rioUnit,
                                    kAudioUnitProperty_SetRenderCallback,
                                    kAudioUnitScope_Input,
                                    0,
                                    &renderCallback,
                                    sizeof(renderCallback)), "couldn't set render callback on AURemoteIO");

    CheckError(AudioUnitInitialize(_rioUnit), "couldn't initialize AURemoteIO instance");
}
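(Not shown above: the unit still has to be started after setup. A minimal sketch of that call, assuming the AVAudioSession has already been configured and activated:)

[self setupIOUnit];
CheckError(AudioOutputUnitStart(_rioUnit), "couldn't start AURemoteIO");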

My questions are: why does this happen, and why are there two main different views in the output when I concatenate the two buffers? Is there another way to collect buffers and apply DSP? What am I doing wrong? If the way I concatenate is correct, is my logic incorrect? (Though I have checked it many times.)

What I am trying to ask is: how can I get 3 seconds of buffer in perfect condition?

I really need help. Best regards.

smoothumut
  • This sounds like you have too many computation steps in your render callback. Just two hints: reduce the sampling rate, or replace the `dispatch_async` part with something simple, just to see whether I am right or wrong. – Michael Dorner Oct 31 '14 at 16:54
  • Hi Michael, thanks for the comment. I need a 44100 sampling rate, and I am new, so honestly I don't know anything other than dispatch_async – smoothumut Oct 31 '14 at 18:12

2 Answers


Your render callback may be writing data into the same buffer that is being processed in another thread (the main queue), thus overwriting and altering part of the data being processed.

Try using more than one buffer. Don't write into a buffer that is still being processed (by your filter & fft methods). Perhaps recycle the buffers for reuse after the FFT calculation method is finished.
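One common way to arrange this is a single-producer / single-consumer ring buffer: the audio thread only writes, the consumer (your main queue) only reads, and no malloc/free happens on either side. A minimal sketch follows; all names here (RingBuffer, ringWrite, ringReadChunk, kRingCapacity) are made up for illustration, and Float32/UInt32 come from the CoreAudio headers you already use.

#include <stdatomic.h>
#include <stdbool.h>

#define kRingCapacity (44100 * 4)   // a bit more than 3 s at 44.1 kHz

typedef struct {
    Float32     samples[kRingCapacity];
    atomic_uint writeIndex;   // advanced only by the audio thread
    atomic_uint readIndex;    // advanced only by the consumer
} RingBuffer;

// Audio thread: copy the callback's samples in. No allocation here.
static void ringWrite(RingBuffer *rb, const Float32 *src, UInt32 count)
{
    unsigned w = atomic_load_explicit(&rb->writeIndex, memory_order_relaxed);
    for (UInt32 i = 0; i < count; i++) {
        rb->samples[(w + i) % kRingCapacity] = src[i];
    }
    atomic_store_explicit(&rb->writeIndex, (w + count) % kRingCapacity,
                          memory_order_release);
}

// Consumer: copy out `count` samples if that many are available.
// Returns true only when a full chunk was copied into dst.
static bool ringReadChunk(RingBuffer *rb, Float32 *dst, UInt32 count)
{
    unsigned r = atomic_load_explicit(&rb->readIndex, memory_order_relaxed);
    unsigned w = atomic_load_explicit(&rb->writeIndex, memory_order_acquire);
    unsigned available = (w + kRingCapacity - r) % kRingCapacity;
    if (available < count) {
        return false;   // not enough data yet -- try again later
    }
    for (UInt32 i = 0; i < count; i++) {
        dst[i] = rb->samples[(r + i) % kRingCapacity];
    }
    atomic_store_explicit(&rb->readIndex, (r + count) % kRingCapacity,
                          memory_order_release);
    return true;
}

With this, the render callback only calls ringWrite(...), and something on the main thread (a timer, say) polls ringReadChunk(...) for a full 3-second chunk (3 * 44100 = 132300 samples) before filtering and FFT. Note the sketch does not protect against the writer lapping a slow reader; size the buffer generously or track overruns.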

hotpaw2
  • Thank you very much, hotpaw2, for your contributions. I have added buffers, and now it is better, but I still get the same graph for the 18 and 19 kHz sounds. The plots are not stable while the incoming sounds are stable. I suspect the 512 frames coming from each render: as I mentioned, 512 frames arrive, but only the first 471 of them have data; the rest are 0.0000. I don't know why, or whether this is normal. I have also added setupAudioUnit. I just need to record, so I set the session category to AVAudioSessionCategoryRecord. Thank you very much in advance for your help – smoothumut Nov 03 '14 at 08:38
  • I have found the problem. When I set the session category to AVAudioSessionCategoryPlayAndRecord and comment out the 2 AudioUnitSetProperty lines I mentioned in the code above, concatenating the buffers works great. It is again like the first graph I added above. In this case each render brings 470-471 frames, so adding them works. But now it plays the recorded audio through, and I don't want it to. I am happy to have made it work, but I am confused about why it works when I use PlayAndRecord. I am sure you have an idea about that. :) Thanks in advance, regards – smoothumut Nov 03 '14 at 09:17

I have successfully concatenated the buffers without any unstable graphics. What I did was change the AVAudioSession category from Record to PlayAndRecord, then comment out the two AudioUnitSetProperty lines mentioned above. After that I started to get 470~471 frames per render, and I concatenated them as in the code I posted. I also used multiple buffers, as suggested. Now it works, but it plays the sound through. To silence it, I applied the code below:

// Zero every output buffer so the unit plays silence instead of the mic input
for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i)
{
    memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
}
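As far as I know, you can additionally hint to the system that the rendered output is pure silence by setting the silence flag in the callback (optional; the memset above already mutes the output):

*ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;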

Then I started to get 3 seconds of buffers, and when I plot them on the screen I get a view similar to the first graph.
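For reference, the session-category change described above looks roughly like this (a minimal sketch; error handling omitted):

#import <AVFoundation/AVFoundation.h>

NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session setActive:YES error:&error];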

smoothumut