7

I want to stream the audio of my microphone (that is being recorded via pyaudio) via Flask to any client that connects.

This is where the audio comes from:

    def getSound(self):
        # Read the current chunk of audio data from the pyaudio stream
        data = self.stream.read(self.CHUNK)
        self.frames.append(data)
        # Optionally persist all frames recorded so far to tmp.wav
        self.save(list(self.frames))

        return data

Here's my Flask code:

    @app.route('/audiofeed')
    def audiofeed():
        def gen(microphone):
            while True:
                sound = microphone.getSound()
                #with open('tmp.wav', 'rb') as myfile:
                #    yield myfile.read()

                yield sound

        return Response(stream_with_context(gen(Microphone())))

And this is the client:

    <audio controls>
        <source src="{{ url_for('audiofeed') }}" type="audio/x-wav;codec=pcm">
        Your browser does not support the audio element.
    </audio>

It does work sometimes, but most of the time I'm getting "[Errno 32] Broken pipe".

When I uncomment that with open("tmp.wav") part (self.save() optionally takes all previous frames and saves them in tmp.wav), I kind of get a stream, but all that comes out of the speakers is a clicking noise.

I'm open to any suggestions. How do I get the input of my microphone live-streamed (no pre-recording!) to a web browser?

Thanks!

paranerd

4 Answers

3

Try this, it worked for me. The shell command "cat" works perfectly; see the code below (I'm using Flask).

import subprocess
import os
from flask import Flask
from flask import Response

app = Flask(__name__)

@app.route('/playaudio')
def playaudio():
    def generate():
        # get_list_all_files_name() returns all files inside the folder
        filesAudios = get_list_all_files_name(currentDir + "/streamingAudios/1")

        # audioPath is the path of each audio file on the system
        for audioPath in filesAudios:
            data = subprocess.check_output(['cat', audioPath])
            yield data

    return Response(generate(), mimetype='audio/mp3')
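Spawning a `cat` subprocess per file isn't strictly necessary; a plain file read does the same job without the shell dependency. A minimal self-contained variant (the folder path `streamingAudios/1` and the route name are illustrative stand-ins for your own layout):

```python
import os
from flask import Flask, Response

app = Flask(__name__)

# Illustrative folder of audio files; adjust to your layout
AUDIO_DIR = "streamingAudios/1"

@app.route('/playaudio')
def playaudio():
    def generate():
        # A plain file read replaces the `cat` subprocess per file
        for name in sorted(os.listdir(AUDIO_DIR)):
            with open(os.path.join(AUDIO_DIR, name), 'rb') as f:
                yield f.read()
    return Response(generate(), mimetype='audio/mp3')
```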
Shantanu Sharma
  • Thank you, that works great with an existing mp3 and might come in handy as well, but right now I'm trying to live stream the microphone. I thought about continuously writing the live data to a wav-file and then stream that somehow - any suggestions on that one? – paranerd Mar 04 '19 at 18:02
  • Please view this link: https://developers.google.com/web/fundamentals/media/recording-audio/ It will help you a lot; read the full document, at the end it shows how you can convert the data to WAV – Shantanu Sharma Mar 09 '19 at 11:11
  • Thanks for the link, but as I understand it, they're taking the input from the user's mic, then convert it to WAV and send it to the server. I'm actually trying to do the opposite. – paranerd Mar 10 '19 at 13:16
  • You have to use "apache kafka" with python this is using in global level..... All type of live streaming handle by "apache kafka". – Shantanu Sharma Mar 12 '19 at 14:04
  • use this open source github code for any tye of steaming... https://github.com/umbrashia/power-streaming-project – Shantanu Sharma Apr 27 '19 at 19:01
2

This question was asked a long time ago, but since I spent an entire day figuring out how to implement the same thing, I want to share the answer. Maybe it will be helpful for somebody.

"[Errno 32] Broken pipe" error comes from the fact that client can not play audio and closes this stream. Audio can not be played due to absence of the header in the data stream. You can easily create the header using genHeader(sampleRate, bitsPerSample, channels, samples) function from the code here . This header has to be attached at least to the first chunck of sent data ( chunck=header+data ). Pay attention, that audio can be played ONLY untill client reaches file size in download that you have to specify in the header. So, workaround would be to set in the header some big files size, e.g. 2Gb.

  • Thanks for taking the time! Where would I put that chunk=header+data part in my example? Do you know how to set the file size in the header? – paranerd Jun 28 '18 at 08:54
2

Instead of `datasize = len(samples) * channels * bitsPerSample` in the header function, write `datasize = 2000*10**6`.

def gen_audio():

    CHUNK = 512
    sampleRate = 44100
    bitsPerSample = 16
    channels = 2
    wav_header = genHeader(sampleRate, bitsPerSample, channels)

    audio = AudioRead()
    # Attach the WAV header to the first chunk only
    data = audio.get_audio_chunck()
    chunk = wav_header + data
    while True:
        yield chunk
        chunk = audio.get_audio_chunck()
1

After lots of research and tinkering I finally found the solution.

Basically it came down to serving pyaudio.paFloat32 audio data through WebSockets using Flask's SocketIO implementation and receiving/playing the data in JavaScript using HTML5's AudioContext.

As this requires quite some code, I don't think it would be a good idea to post it all here. Instead, feel free to check out the project I'm using it in: simpleCam

The relevant code is in:

  • noise_detector.py (recording)
  • server.py (WebSocket transfer)
  • static/js/player.js (receiving/playing)

Thanks everyone for the support!

paranerd