
I am writing an application that needs to detect a frequency in the audio stream. I have read about a million articles and am having trouble crossing the finish line. My audio data arrives in the function below via Apple's AVFoundation framework.

I am using Swift 4.2 and have tried playing with the FFT functions, but they are a little over my head at the moment.

Any thoughts?

// Gets the data as a callback from the AVFoundation framework.
public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // Print the whole sample buffer; it tells us a lot about what's inside.
    print(sampleBuffer)

    // Create a block buffer and use CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer
    // to read the sample data out into an AudioBufferList.
    var buffer: CMBlockBuffer? = nil
    var audioBufferList = AudioBufferList(mNumberBuffers: 1,
                                          mBuffers: AudioBuffer(mNumberChannels: 1, mDataByteSize: 0, mData: nil))
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
        sampleBuffer,
        bufferListSizeNeededOut: nil,
        bufferListOut: &audioBufferList,
        bufferListSize: MemoryLayout<AudioBufferList>.size,
        blockBufferAllocator: nil,
        blockBufferMemoryAllocator: nil,
        flags: UInt32(kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment),
        blockBufferOut: &buffer)

    let abl = UnsafeMutableAudioBufferListPointer(&audioBufferList)
    var sum: Int64 = 0    // running sum of squared samples
    var count: Int = 0    // total sample count
    var bufs: Int = 0     // number of buffers seen

    var max: Int64 = 0
    var min: Int64 = 0

    // Loop through the samples and track the min and max values.
    for buff in abl {
        let samples = UnsafeMutableBufferPointer<Int16>(
            start: buff.mData?.assumingMemoryBound(to: Int16.self),
            count: Int(buff.mDataByteSize) / MemoryLayout<Int16>.size)
        for sample in samples {
            let s = Int64(sample)
            sum += s * s
            count += 1

            if s > max {
                max = s
            }

            if s < min {
                min = s
            }

            print(sample)
        }
        bufs += 1
    }

    // debug
    print("min - \(min), max = \(max)");

    // update the interface
    DispatchQueue.main.async {
        self.frequencyDataOutLabel.text = "min = \(min), max = \(max)"
    }

    // stop the capture session
    self.captureSession.stopRunning()
}

1 Answer


After much research I found that the answer is to use an FFT (Fast Fourier Transform). It takes the raw samples from the capture code above and converts them into an array of values representing the magnitude of each frequency band.
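Roughly, the idea looks like this. This is not tempi-fft's exact code, just a minimal sketch of the same idea using Apple's Accelerate framework (vDSP); it assumes you have gathered a power-of-two number of the Int16 samples from the capture callback above, and the function names (floatSamples(from:), magnitudes(of:)) are my own for illustration.

import Accelerate
import Foundation

// Convert the Int16 samples from the capture callback to Float,
// which is what vDSP's FFT routines operate on.
func floatSamples(from samples: [Int16]) -> [Float] {
    var floats = [Float](repeating: 0, count: samples.count)
    vDSP_vflt16(samples, 1, &floats, 1, vDSP_Length(samples.count))
    return floats
}

// Forward real FFT returning the squared magnitude of each frequency bin.
// Assumes samples.count is a power of two (e.g. 1024).
func magnitudes(of samples: [Float]) -> [Float] {
    let n = samples.count
    let log2n = vDSP_Length(log2(Double(n)))
    guard let setup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else { return [] }
    defer { vDSP_destroy_fftsetup(setup) }

    var real = [Float](repeating: 0, count: n / 2)
    var imag = [Float](repeating: 0, count: n / 2)
    var magsq = [Float](repeating: 0, count: n / 2)

    real.withUnsafeMutableBufferPointer { realPtr in
        imag.withUnsafeMutableBufferPointer { imagPtr in
            var split = DSPSplitComplex(realp: realPtr.baseAddress!,
                                        imagp: imagPtr.baseAddress!)
            // Pack the real samples into the split-complex layout vDSP expects.
            samples.withUnsafeBufferPointer { buf in
                buf.baseAddress!.withMemoryRebound(to: DSPComplex.self, capacity: n / 2) {
                    vDSP_ctoz($0, 2, &split, 1, vDSP_Length(n / 2))
                }
            }
            // In-place forward FFT of the real signal.
            vDSP_fft_zrip(setup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
            // Squared magnitude per bin (unscaled, which is fine for thresholding).
            vDSP_zvmagsq(&split, 1, &magsq, 1, vDSP_Length(n / 2))
        }
    }
    return magsq
}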

Much props to the open-source code at https://github.com/jscalo/tempi-fft, which implements a visualizer that captures audio data and displays the spectrum. From there, it was a matter of adapting it to my needs. In my case I was looking for frequencies above the range of human hearing (around 20 kHz). By scanning the latter half of the magnitude array in the tempi-fft code I was able to determine whether the frequencies I was looking for were present and loud enough.
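To make the "scan the latter half" step concrete: bin i of an n-point FFT sits near i * sampleRate / n Hz, so the upper half of the array corresponds to the upper half of the spectrum. A hypothetical check for energy at or above 20 kHz might look like this (the 44.1 kHz sample rate, the 1024-sample window, the threshold, and capturedInt16Samples are all assumptions for illustration):

let sampleRate = 44_100.0
let n = 1_024
let binWidth = sampleRate / Double(n)      // ~43 Hz per bin
let firstBin = Int(20_000.0 / binWidth)    // first bin at or above 20 kHz

let magsq = magnitudes(of: floatSamples(from: capturedInt16Samples))
let threshold: Float = 1e6                 // tune empirically against real captures
let tonePresent = magsq[firstBin...].contains { $0 > threshold }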
