I saw this page resample an audio buffer from 44100 Hz to 16000 Hz using an OfflineAudioContext, but that only works on a fixed buffer. Is there a way to resample audio from a stream? What I would like to do is capture audio from the microphone, resample it to a lower sample rate, and upload it to our server in real time.
- Is there a reason you want to use such a slow sample rate? If you're just trying to reduce bandwidth, reducing the bit depth is a far better idea. The Web Audio API uses float32 samples. You can get away with 8-bit samples at 44.1 kHz a lot better than you can with 16-bit samples at 16 kHz (see the sketch after these comments). – Brad Apr 12 '16 at 01:37
- Yes. The devices talking to the browser only know an 8 kHz sample rate encoded as ADPCM, which is 4 bits per sample. On top of that, our environment has a few hundred users uploading real-time audio (speech only) to cloud servers simultaneously, almost 24 hours a day, so low bandwidth is definitely a requirement. Good idea on the 8-bit samples; we can downsample on the server side in real time, which makes the job much easier since most of my team is made up of C and C++ people. – user2600798 Apr 12 '16 at 13:56
- Do your developers know about Emscripten? https://kripken.github.io/emscripten-site/ – Brad Apr 12 '16 at 16:17
- No. Good to know. Thanks. – user2600798 Apr 12 '16 at 17:04
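
A minimal sketch of the bit-depth reduction Brad suggests, assuming the samples arrive as a Web Audio Float32Array in the range -1..1 (the helper name toInt8 is just for illustration):

function toInt8(float32Samples) {
  var out = new Int8Array(float32Samples.length);
  for (var i = 0; i < float32Samples.length; i++) {
    // Clamp to [-1, 1], then scale to the signed 8-bit range.
    var s = Math.max(-1, Math.min(1, float32Samples[i]));
    out[i] = Math.round(s * 127);
  }
  return out;
}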
1 Answer
What you need to do is create a ScriptProcessorNode and then resample the buffers as your callback is called with them.
var scriptNode = context.createScriptProcessor(4096, 1, 1); // 4096-sample blocks, mono in, mono out
scriptNode.onaudioprocess = function onAudioProcess(e) {
  // e.inputBuffer contains the Float32 samples you want to resample
};
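
For a live microphone stream, a minimal sketch might look like the following (assuming an existing AudioContext and a hypothetical WebSocket endpoint at wss://example.com/audio). It uses nearest-neighbor decimation, which is crude: a real resampler should low-pass filter first to avoid aliasing, and should carry the fractional read position across callbacks so blocks join cleanly.

var context = new AudioContext();
var socket = new WebSocket('wss://example.com/audio'); // hypothetical endpoint
var targetRate = 16000;

navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
  var source = context.createMediaStreamSource(stream);
  var scriptNode = context.createScriptProcessor(4096, 1, 1);

  scriptNode.onaudioprocess = function (e) {
    var input = e.inputBuffer.getChannelData(0);   // Float32 samples at context.sampleRate
    var ratio = context.sampleRate / targetRate;   // e.g. 44100 / 16000
    var output = new Float32Array(Math.floor(input.length / ratio));
    for (var i = 0; i < output.length; i++) {
      output[i] = input[Math.floor(i * ratio)];    // nearest-neighbor pick, no filtering
    }
    socket.send(output.buffer);                    // upload the smaller block
  };

  source.connect(scriptNode);
  scriptNode.connect(context.destination);         // keeps the node processing
});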

Brad
- Thanks for your response. Do I have to have ALL the sample data before I can do this? I am trying to do this in real time, not recording 30 minutes of audio and then resampling. Also, the WebRTC spec does not indicate how often the callback occurs. – user2600798 Apr 12 '16 at 01:55
- @user2600798 No, read the documentation I linked to for ScriptProcessorNode. The first parameter when creating it indicates the buffer size in samples. By setting it to 4,096, the callback will be fired every 4,096 samples. I find that 2,048 and 4,096 are good tradeoffs between latency and performance for most general uses. You can lower it if you need to, or raise it if appropriate. The WebRTC spec has nothing to do with this... this is the Web Audio API. – Brad Apr 12 '16 at 03:47
- Thanks for clearing that up. But this only resamples one block at a time. The information from the end of one packet does not get carried over to the next, so there are a lot of artifacts in the resulting audio. – user2600798 Apr 12 '16 at 14:01
- @user2600798 What do you mean? You need to stitch the audio back together later. – Brad Apr 12 '16 at 16:16