I'm trying to figure out the amplitude of each frequency of the sound captured by the microphone, just like in this example: https://developer.apple.com/documentation/accelerate/visualizing_sound_as_an_audio_spectrogram
I capture samples from the microphone into a sample buffer, copy them to a circular buffer, and then perform a forward DCT on it, like this:
import Accelerate

func processData(values: [Int16]) {
    // Convert the incoming Int16 samples to Float.
    vDSP.convertElements(of: values,
                         to: &timeDomainBuffer)

    // Apply the Hann window to reduce spectral leakage.
    vDSP.multiply(timeDomainBuffer,
                  hanningWindow,
                  result: &timeDomainBuffer)

    // Forward DCT: time domain -> frequency domain.
    forwardDCT.transform(timeDomainBuffer,
                         result: &frequencyDomainBuffer)

    // Take magnitudes and convert them to decibels.
    vDSP.absolute(frequencyDomainBuffer,
                  result: &frequencyDomainBuffer)
    vDSP.convert(amplitude: frequencyDomainBuffer,
                 toDecibels: &frequencyDomainBuffer,
                 zeroReference: Float(Microphone.sampleCount))

    // Keep a rolling history of the most recent frames.
    if frequencyDomainValues.count > Microphone.sampleCount {
        frequencyDomainValues.removeFirst(Microphone.sampleCount)
    }
    frequencyDomainValues.append(contentsOf: frequencyDomainBuffer)
}
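For reference, this is roughly the supporting state my processData relies on, following the Apple sample. The identifiers match my code above, but the concrete values (sampleCount = 1024, the DCT type, the window sequence) are just what I'm assuming here, not something you should take as definitive:

// Sketch of the supporting state that processData uses (values are assumptions).
enum Microphone {
    static let sampleCount = 1024   // assumed window length per frame
}

// Reusable forward DCT, set up once for the chosen window length.
let forwardDCT = vDSP.DCT(count: Microphone.sampleCount,
                          transformType: .II)!

// Hann window applied before the transform.
let hanningWindow = vDSP.window(ofType: Float.self,
                                usingSequence: .hanningDenormalized,
                                count: Microphone.sampleCount,
                                isHalfWindow: false)

// Working buffers reused on every call to processData.
var timeDomainBuffer = [Float](repeating: 0, count: Microphone.sampleCount)
var frequencyDomainBuffer = [Float](repeating: 0, count: Microphone.sampleCount)

// Rolling history of frequency-domain frames used to draw the spectrogram.
var frequencyDomainValues = [Float]()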
The timeDomainBuffer is a Float array holding sampleCount samples, while frequencyDomainBuffer holds the amplitude of each frequency bin: the frequency is denoted by the array index, and the value at that index expresses the amplitude of that frequency.
I'm trying to get the amplitude of each frequency, like this:
for index in frequencyDomainBuffer.indices {
    // Amplitude (in dB after the conversion above) at this bin:
    let amplitude = frequencyDomainBuffer[index]
    let frequency = Double(index) * (AVAudioSession().sampleRate / Double(Microphone.sampleCount) / 2)
}
I assumed the index of frequencyDomainBuffer maps linearly to the actual frequency, so each bin should span half the sample rate divided by sampleCount (sampleCount is the timeDomainBuffer length).
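As a sanity check, with assumed numbers (a 44,100 Hz sample rate and sampleCount = 1024, which are assumptions rather than measured values), the mapping would work out like this:

// Worked example of the bin-to-frequency mapping under assumed numbers.
let sampleRate = 44_100.0
let sampleCount = 1_024.0
let binWidth = sampleRate / 2 / sampleCount     // ≈ 21.5 Hz per bin
let frequencyOfBin100 = 100 * binWidth          // ≈ 2_153 Hz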
The result is correct when running on my iPad, but the frequencies come out about 10% too high on my iPhone.
I'm dubious whether AVAudioSession().sampleRate can be relied on on an iPhone.
Of course I could add a condition like "if iPhone", but I'd like to know why this happens, and whether it will be correct on other devices I haven't tested.
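For reference, this is a minimal sketch of how I could read the sample rate of the actual capture format, so I can compare it against AVAudioSession. It assumes the samples arrive as CMSampleBuffers (e.g. from an AVCaptureAudioDataOutput callback), which is an assumption about my own setup, not something from the Apple sample:

import AVFoundation

// Read the sample rate from the buffer's audio format description.
func actualSampleRate(of sampleBuffer: CMSampleBuffer) -> Double? {
    guard let format = CMSampleBufferGetFormatDescription(sampleBuffer),
          let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(format)?.pointee
    else { return nil }
    return asbd.mSampleRate
}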