
I brought this up in my last post, but since it was off topic from the original question I'm posting it separately. I'm having trouble getting my transmitted audio to play back through Web Audio the way it would sound in a media player. I have tried two different transmission protocols, binaryjs and socketio, and neither makes a difference when playing through Web Audio. To rule out transport of the audio data as the issue, I created an example that sends the data back to the server after the client receives it and dumps the result to stdout. Piping that into VLC produces the listening experience you would expect.

To hear the VLC playback, which sounds the way it should, run the example at https://github.com/grkblood13/web-audio-stream/tree/master/vlc with the following command:

$ node webaudio_vlc_svr.js | vlc -

For whatever reason, though, when I try to play this same audio data through Web Audio it fails miserably. The result is random noise with large gaps of silence in between.

What is wrong with the following code that is making the playback sound so bad?

window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
var delayTime = 0;
var init = 0;
var audioStack = [];

client.on('stream', function(stream, meta){
    stream.on('data', function(data) {
        context.decodeAudioData(data, function(buffer) {
            audioStack.push(buffer);
            if (audioStack.length > 10 && init == 0) { init++; playBuffer(); }
        }, function(err) {
            console.log("err(decodeAudioData): "+err);
        });
    });
});

function playBuffer() {
    var buffer = audioStack.shift();
    setTimeout( function() {
            var source    = context.createBufferSource();
            source.buffer = buffer;
            source.connect(context.destination);
            source.start(context.currentTime);
            delayTime=source.buffer.duration*1000; // Make the next buffer wait the length of the last buffer before being played
            playBuffer();
    }, delayTime);
}

Full source: https://github.com/grkblood13/web-audio-stream/tree/master/binaryjs

Flip
Brad.Smith
  • This question is a little bit old, but I tested the code from your repository with no success. Do you have any updates? – Keyne Viana Dec 01 '17 at 19:47

1 Answer

You really can't just call source.start(context.currentTime) like that.

setTimeout() has a long and imprecise latency - other main-thread stuff can be going on, so your setTimeout() calls can be delayed by milliseconds, even tens of milliseconds (by garbage collection, JS execution, layout...) Your code is trying to immediately play audio - which needs to be started within about 0.02ms accuracy to not glitch - on a timer that has tens of milliseconds of imprecision.
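You can see that imprecision directly. A quick sketch (not part of the original answer; plain Date.now() timing) that logs how late a setTimeout() callback actually fires:

```javascript
// Measure how late setTimeout actually fires. The drift is whatever
// the event loop allows - typically a few milliseconds, far beyond
// the tolerance glitch-free audio playback needs.
var requested = 100; // ms
var armed = Date.now();
setTimeout(function () {
    var drift = (Date.now() - armed) - requested;
    console.log('setTimeout fired ' + drift + ' ms late');
}, requested);
```

Run that while scrolling or during a garbage-collection pause and the drift grows accordingly, which is exactly why timer-driven playback glitches.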

The whole point of the web audio system is that the audio scheduler works in a separate high-priority thread, and you can pre-schedule audio (starts, stops, and audioparam changes) at very high accuracy. You should rewrite your system to:

1) track when the first block was scheduled in audiocontext time - and DON'T schedule the first block immediately, give some latency so your network can hopefully keep up.

2) schedule each successive block received in the future based on its "next block" timing.

e.g. (note I haven't tested this code, this is off the top of my head):

window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
var delayTime = 0;
var init = 0;
var audioStack = [];
var nextTime = 0;

client.on('stream', function(stream, meta){
    stream.on('data', function(data) {
        context.decodeAudioData(data, function(buffer) {
            audioStack.push(buffer);
            if ((init!=0) || (audioStack.length > 10)) { // make sure we put at least 10 chunks in the buffer before starting
                init++;
                scheduleBuffers();
            }
        }, function(err) {
            console.log("err(decodeAudioData): "+err);
        });
    });
});

function scheduleBuffers() {
    while (audioStack.length) {
        var buffer = audioStack.shift();
        var source = context.createBufferSource();
        source.buffer = buffer;
        source.connect(context.destination);
        if (nextTime == 0)
            nextTime = context.currentTime + 0.05; // add 50ms latency to work well across systems - tune this if you like
        source.start(nextTime);
        nextTime += source.buffer.duration; // make the next buffer wait the length of the last buffer before being played
    }
}
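The nextTime arithmetic can also be factored into a pure helper - a sketch of my own, not tested against a live stream - which additionally guards against the queue falling behind the audio clock (if the network stalls, nextTime can end up in the past, and sources scheduled in the past glitch):

```javascript
// Hypothetical helper: given the audio-clock "now", the end time of
// previously scheduled audio, and a chunk's duration, compute when the
// chunk should start and the new end-of-scheduled-audio time.
var STARTUP_LATENCY = 0.05; // seconds - the same 50ms pad as above

function nextStartTime(currentTime, scheduledEnd, duration) {
    // If nothing is scheduled, or we fell behind the clock, pad from
    // "now"; otherwise butt this chunk against the previous one.
    var start = (scheduledEnd <= currentTime)
        ? currentTime + STARTUP_LATENCY
        : scheduledEnd;
    return { start: start, scheduledEnd: start + duration };
}
```

Being pure arithmetic, it is easy to unit-test and to log against context.currentTime when diagnosing gaps.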
cwilso
    Thanks cwilso but this code too is very glitchy sounding. Maybe it's my browser that is causing the issues. My source is also a continuous live stream so I'm not sure if that matters. I don't think it would. – Brad.Smith Dec 09 '13 at 19:22
  • So I've tried this with a couple of different setups and the results are the same every time. Here's an audio snippet of what I'm hearing using the example source code with this updated client code. It's a 22 second ogg file. https://drive.google.com/file/d/0B8vThR7JhLk_R2owSWp4eVNwR1E/edit?usp=sharing – Brad.Smith Dec 09 '13 at 21:41
  • That doesn't seem glitchy - it seems like you're not getting all the data, or it's getting scheduled in the past, or something else really bad. I'd start console.log()ing diagnostics like crazy, make sure you're getting all the packets, that they're getting scheduled in order, and check against the currentTime to make sure there's enough latency. – cwilso Dec 16 '13 at 21:31
  • I figured out the problem I was having. When using decodeAudioData, for some reason I can't send one chunk at a time for playback. To fix this I'm concatenating 40 chunks at a time on the server before sending to the client. – Brad.Smith Dec 17 '13 at 20:52
  • Brad, can you show your working code? I'm also having latency issues. Thanks! – olealgo Feb 01 '14 at 13:56
  • Any alternative to decodeAudioData that can work on separate frames? Media Source Extensions are not supported on iOS. – pablo Apr 15 '15 at 10:19
  • This is almost working, except for a glitchy sound; I guess the scheduling could be improved? Any ideas? – jujule Jul 26 '16 at 00:48
  • The binjs-decaud example on my github has the concatenated version that I mentioned. I don't use binaryjs anymore since socketio has supported binary data for a while now, but the example is still relevant. – Brad.Smith Oct 12 '16 at 14:53
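For readers looking for the concatenation fix Brad.Smith describes above, a minimal server-side sketch of the idea (the names onChunk and CHUNKS_PER_BATCH are assumptions, not taken from the linked repo): accumulate incoming chunks and send one larger payload per batch, so the client's decodeAudioData sees units large enough to decode.

```javascript
// Hypothetical Node.js sketch: batch 40 small chunks into one Buffer
// before sending, instead of forwarding each chunk individually.
var CHUNKS_PER_BATCH = 40;
var pending = [];

function onChunk(chunk, send) {
    pending.push(chunk);
    if (pending.length >= CHUNKS_PER_BATCH) {
        send(Buffer.concat(pending)); // one decodable payload per batch
        pending = [];
    }
}
```

The trade-off is added latency: the client hears nothing until a full batch arrives, so the batch size has to be tuned against how live the stream needs to feel.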