
I visualized an audio file with the Web Audio API and with Dancer.js. Everything works, but the two visualizations look very different. Can anybody help me figure out why?

The Web-Audio-API code (fft.php, fft.js)

The dancer code (plugins/dancer.fft.js, js/playerFFT.js, fft.php)

The visualization for WebAudioAPI is on: http://multimediatechnology.at/~fhs32640/sem6/WebAudio/fft.html

The Dancer visualization is on http://multimediatechnology.at/~fhs32640/sem6/Dancer/fft.php


3 Answers


The difference is in how the volume at each frequency is obtained. Your code uses an AnalyserNode, which takes the values and also applies some smoothing, so your graph looks nice. Dancer uses a ScriptProcessorNode instead. That node fires a callback every time a certain number of samples has passed through it, handing that raw sample buffer to the callback as e.inputBuffer. Dancer then just draws that 'raw' data, with no smoothing applied.

// Average all channels into one mono signal, then run a forward FFT on it.
var
    buffers = [],
    channels = e.inputBuffer.numberOfChannels,
    resolution = SAMPLE_SIZE / channels,
    // Used with Array.prototype.reduce below to sum sample i across channels.
    sum = function (prev, curr) {
        return prev[i] + curr[i];
    }, i;

// Collect the raw time-domain samples of each channel.
for (i = channels; i--;) {
    buffers.push(e.inputBuffer.getChannelData(i));
}

// Mix down to mono (average the channels when there is more than one).
for (i = 0; i < resolution; i++) {
    this.signal[i] = channels > 1 ? buffers.reduce(sum) / channels : buffers[0][i];
}

this.fft.forward(this.signal);
this.dancer.trigger('update');

This is the code Dancer uses to get the sound strength at each frequency (it can be found in adapterWebAudio.js).


Because one is simply using the native frequency data provided by the Web Audio API via analyser.getByteFrequencyData().

The other is doing its own calculation: it uses a ScriptProcessorNode, and when that node's onaudioprocess event fires, it takes the channel data from the input buffer and converts it to a frequency-domain spectrum by performing a forward transform on it, computing the Discrete Fourier Transform of the signal with the Fast Fourier Transform algorithm.


idbehold's answer is partially correct (smoothing is getting applied), but a bigger issue is that the Web Audio code is using getByteFrequencyData instead of getFloatFrequencyData. The "byte" version does extra processing to maximize the byte's range: it spreads the span from minDecibels to maxDecibels across the 0–255 byte range.

  • So if I change getByteFrequencyData to getFloatFrequencyData and Uint8Array to Float32Array, the Web Audio API visualization should look like the Dancer.js one?! – user2090392 Apr 26 '14 at 05:28
  • I don't know precisely what Dancer.js does. That will perform a straightforward FFT, although as idbehold said, by default there's smoothing on an AnalyserNode. – cwilso Apr 28 '14 at 15:20