I'm trying to find a way to compute an FFT from the audio contained in a WebRTC stream. I have a MediaRecorder created following the examples here: https://webrtc.github.io/samples/src/content/getusermedia/record/ but using my audio/video stream received over WebRTC.

My aim is to create a Fourier spectrum showing the buzzing of the bees, and my assumption is that I need the audio in .wav format for that. I have tried a variety of APIs to extract an audio track from the audio/video stream, and all of them give errors. For example, MediaStream.getAudioTracks() ("returns a sequence that represents all the MediaStreamTrack objects in this stream's track set where MediaStreamTrack.kind is audio", https://developer.mozilla.org/en-US/docs/Web/API/MediaStream/getAudioTracks) doesn't work for me, or I am using it wrongly.

I need to stream video and audio for another purpose, and then process the audio either on the fly or after recording. Any advice?
- If you want to do this in realtime, you can actually use the Web Audio API for this, and an AnalyserNode. – Brad Nov 26 '21 at 05:40
- Thanks. I will study and try it. For now I'm struggling to extract audio from the WebRTC stream. – janek Nov 28 '21 at 23:43
- Consume your WebRTC stream with ffmpeg and do what you need there: ffmpeg can convert to .wav or even create an FFT. https://stackoverflow.com/questions/59865405/use-webrtc-getusermedia-stream-as-input-for-ffmpeg – user1390208 Nov 30 '21 at 15:51
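Once the audio has been saved as a .wav (e.g. via ffmpeg as suggested above), computing the spectrum offline is straightforward. A minimal sketch with Python and NumPy is below; note that the 440 Hz test tone and the file name `tone.wav` are stand-ins invented for the demo, not part of the original question:

```python
import math
import wave

import numpy as np

SAMPLE_RATE = 16000
DURATION = 1.0
FREQ = 440.0  # hypothetical test tone standing in for the recorded bee audio

# Write a synthetic mono 16-bit .wav (stands in for ffmpeg's output file)
t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
samples = (0.5 * np.sin(2 * math.pi * FREQ * t) * 32767).astype(np.int16)
with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)  # 16-bit PCM
    w.setframerate(SAMPLE_RATE)
    w.writeframes(samples.tobytes())

# Read the .wav back and compute the magnitude spectrum
with wave.open("tone.wav", "rb") as w:
    rate = w.getframerate()
    data = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

spectrum = np.abs(np.fft.rfft(data))          # one-sided magnitude spectrum
freqs = np.fft.rfftfreq(len(data), d=1.0 / rate)  # bin frequencies in Hz
peak_hz = freqs[np.argmax(spectrum)]
print(f"dominant frequency: {peak_hz:.1f} Hz")
```

Plotting `freqs` against `spectrum` (e.g. with matplotlib) gives the Fourier spectrum of the recording; for a realtime view in the browser, the AnalyserNode approach from the first comment performs the same FFT on the live stream.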