
I am using OfflineAudioContext to do waveform analysis in the background.

All works fine in Chrome, Firefox and Opera, but in Safari I get some very dodgy behaviour. The waveform should be composed of many samples (329), but in Safari I only get ~38 samples.

window.AudioContext = window.AudioContext || window.webkitAudioContext;
window.OfflineAudioContext = window.OfflineAudioContext || window.webkitOfflineAudioContext;

const sharedAudioContext = new AudioContext();

const audioURL = 'https://s3-us-west-2.amazonaws.com/s.cdpn.io/1141585/song.mp3';

const audioDidLoad = ( buffer ) =>
{
  console.log("audio decoded");
  var samplesCount = 0;
  const context = new OfflineAudioContext(1, buffer.length, 44100);
  const source = context.createBufferSource();
  const processor = context.createScriptProcessor(2048, 1, 1);

  const analyser = context.createAnalyser();
  analyser.fftSize = 2048;
  analyser.smoothingTimeConstant = 0.25;

  source.buffer = buffer;

  source.connect(analyser);
  analyser.connect(processor);
  processor.connect(context.destination);

  var freqData = new Uint8Array(analyser.frequencyBinCount);
  processor.onaudioprocess = () =>
  {
    analyser.getByteFrequencyData(freqData);
    samplesCount++;
  };

  source.start(0);
  context.startRendering();

  context.oncomplete = (e) => {
    document.getElementById('result').innerHTML = 'Read ' + samplesCount + ' samples';

    source.disconnect( analyser );
    processor.disconnect( context.destination );
  };
};

var request = new XMLHttpRequest();
request.open('GET', audioURL, true);
request.responseType = 'arraybuffer';
request.onload = () => {
  var audioData = request.response;
  sharedAudioContext.decodeAudioData(
    audioData,
    audioDidLoad,
    e => { console.log("Error with decoding audio data" + e.err); }
  );
};
request.send();

See Codepen.

Nuthinking
  • Windows 10 Firefox: "Read 2878 samples" –  Oct 07 '17 at 15:35
  • @headmax, that's great, the more the merrier! ;) Safari Mac is the issue though. – Nuthinking Oct 07 '17 at 15:36
  • The reason this web API doesn't run on Safari is that the API is still too young; you need a polyfill to use it cross-browser. Here is one: https://github.com/jonathantneal/AudioContext –  Oct 07 '17 at 15:42
  • @headmax have you tried the Codepen on a Mac? This API does work, probably better than with a 4-year-old polyfill. – Nuthinking Oct 07 '17 at 15:47
  • I don't have a Mac ;) so try it yourself. Yes, the link is old, but your problem isn't out of date either ;) Try it, and if it doesn't run, try to understand what the polyfill does and adapt it to your context. –  Oct 07 '17 at 15:50
  • @headmax my issue is with OfflineAudioContext, not AudioContext btw. – Nuthinking Oct 07 '17 at 15:56
  • The function isn't the same, OK, but the issue is similar; try this one: https://github.com/shinnn/AudioContext-Polyfill (OfflineAudioContext / webkitOfflineAudioContext) –  Oct 07 '17 at 15:59
  • @headmax tried (https://codepen.io/nuthinking/pen/LzQaOx) no difference. Thanks! – Nuthinking Oct 07 '17 at 16:16
  • Sorry about your issue, I can't test it myself. Does this example change anything? https://mdn.github.io/webaudio-examples/offline-audio-context-promise/ –  Oct 07 '17 at 16:48
  • @headmax also this code seems unrelated to my issue. Thanks anyway. – Nuthinking Oct 07 '17 at 20:09

1 Answer


I think Safari actually has the correct behavior here, not the others. The way onaudioprocess works is this: you give a buffer size (the first parameter when you create your scriptProcessor, here 2048 samples), and each time that buffer has been processed, the event is triggered. So you take your sample rate (which by default is 44.1 kHz, meaning 44100 samples per second), divide it by the buffer size (the number of samples processed each time), and you get the number of times per second that an audioprocess event will be triggered. See https://webaudio.github.io/web-audio-api/#OfflineAudioContext-methods

This value controls how frequently the onaudioprocess event is dispatched and how many sample-frames need to be processed each call.
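With the numbers from the question (the default 44.1 kHz sample rate and a 2048-sample processor buffer), that works out to 44100 / 2048 ≈ 21.5 onaudioprocess events per second of real-time playback.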

That's true when you're actually playing the sound: you need to process the proper amount of data in the proper time so that the sound is played correctly. But OfflineAudioContext processes the audio without caring about the real playback time.

It does not render to the audio hardware, but instead renders as quickly as possible, fulfilling the returned promise with the rendered result as an AudioBuffer

So with OfflineAudioContext, there's no need for that timing calculation. Chrome and the others seem to trigger onaudioprocess once for each processed buffer even when rendering offline, but with an offline audio context that isn't really necessary.
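As a minimal sketch of relying on the rendered result instead of the script processor, assuming buffer is the decoded AudioBuffer from the question (note that older WebKit builds may only fire the oncomplete event rather than resolving the promise):

// Sketch: skip the script processor entirely and read the rendered result.
// `buffer` is assumed to be the AudioBuffer decoded in the question.
const offline = new OfflineAudioContext(1, buffer.length, 44100);
const src = offline.createBufferSource();
src.buffer = buffer;
src.connect(offline.destination);
src.start(0);

offline.startRendering().then(rendered => {
  // `rendered` is an AudioBuffer holding the whole signal at once;
  // getChannelData(0) exposes every sample as a Float32Array.
  console.log('Rendered ' + rendered.getChannelData(0).length + ' samples');
});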

That being said, there's normally no need to use onaudioprocess with OfflineAudioContext anyway, except maybe to get a sense of the performance; all the data is available from the context. Also, the 329 samples don't mean much: it's basically just the total number of samples divided by the buffer size. In your example you have a source of 673830 samples at 44100 samples per second, so your audio is about 15.28 seconds long. If you process 2048 samples at a time, you process audio about 329 times, which is the 329 you get with Chrome. There's no need to use onaudioprocess to get that number.

And since you use the offline audio context, there's no need to process these samples in real time, or even to call onaudioprocess for every 2048 samples.
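To make the arithmetic concrete, here is a small sketch that derives those figures straight from the decoded buffer, with no onaudioprocess involved (again assuming buffer is the AudioBuffer from the question):

// Sketch: derive duration and block count directly from the decoded buffer.
const bufferSize = 2048;                                // script processor block size
const seconds = buffer.length / buffer.sampleRate;      // 673830 / 44100 ≈ 15.28 s
const blocks = Math.round(buffer.length / bufferSize);  // 673830 / 2048 ≈ 329
console.log(seconds.toFixed(2) + ' s of audio, ~' + blocks + ' blocks of ' + bufferSize + ' samples');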

Julien Grégoire
  • Thanks a lot for the feedback! Will try to digest it and find a way to use it with analyser.getByteFrequencyData, which is ultimately what I need for the FFT. – Nuthinking May 19 '18 at 08:03
  • Does this look right to you? https://code.i-harness.com/en/q/7b33a8 It only calls getByteFrequencyData once, it can't be right!?! – Nuthinking May 19 '18 at 08:08
  • mmm: https://stackoverflow.com/questions/25368596/web-audio-offline-context-and-analyser-node – Nuthinking May 19 '18 at 08:24
  • The comment is 2 years old, it may be working properly at this point (not in Safari, obviously). That being said, you could use a library to analyse the rendered buffer. If you use getChannelData on the offline result, you get the same information as getFloatTimeDomainData gives you during onaudioprocess, which is what getByteFrequencyData is derived from. But it seems the API doesn't expose the functions that go from time data to frequency data, except through the onaudioprocess event. It could be done with a library, though. – Julien Grégoire May 22 '18 at 20:44
  • Why would a library do it and not me? – Nuthinking May 23 '18 at 05:43
  • And my issue is with Safari. – Nuthinking May 23 '18 at 10:08
  • Regarding the library, I was just referencing a comment in the question you linked. There are libraries that can do the same thing the analyser does, such as dsp.js, but you could do it yourself as well (see the sketch after this thread). Or maybe there is a way to do it with the Web Audio API directly, but from what I understand the function generating the frequency analysis isn't exposed, which means that apart from onaudioprocess you can't. But maybe there is a way, I just don't know it. – Julien Grégoire May 23 '18 at 14:21
  • Regardless of which browser does it "correctly" is there a recommendation to make Safari work more like Chrome? – rwwagner90 Jul 06 '20 at 19:36
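A rough sketch of the do-it-yourself frequency analysis mentioned in the comments above, without an AnalyserNode. This is hypothetical illustration code: it uses a naive DFT (O(N²), fine for a demo; a real implementation would use an FFT library such as dsp.js), and `rendered` is assumed to be the AudioBuffer produced by startRendering.

// Naive DFT magnitude spectrum of one 2048-sample frame from the rendered buffer.
function dftMagnitudes(frame) {
  const N = frame.length;
  const mags = new Float32Array(N / 2);          // same size as analyser.frequencyBinCount
  for (let k = 0; k < N / 2; k++) {
    let re = 0, im = 0;
    for (let n = 0; n < N; n++) {
      const phi = (2 * Math.PI * k * n) / N;
      re += frame[n] * Math.cos(phi);
      im -= frame[n] * Math.sin(phi);
    }
    mags[k] = Math.sqrt(re * re + im * im) / N;  // normalised magnitude per frequency bin
  }
  return mags;
}

const channel = rendered.getChannelData(0);      // Float32Array of the whole signal
const frame = channel.subarray(0, 2048);         // first 2048-sample frame
console.log(dftMagnitudes(frame));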