I need your opinions on the architecture of my live audio streaming app.
Currently I'm testing it on a local network and everything seems to work, but I have doubts about how well it will hold up in production.
My architecture:

Broadcaster HTTP client --(1)--> App Server --(2)--> Listening clients (React.js App)

(1) — communication over HTTP
(2) — communication over HTTP and WebSocket
What I want to do:
- When the user opens my React App and the Broadcaster is not streaming yet, React should display something like "OFFLINE".
- Next, when the Broadcaster starts streaming to the App Server, the React App should display "The stream is started" and automatically start playback.
- Finally, when the Broadcaster stops streaming, the React App should display "OFFLINE" again (a rough sketch of how the client side could handle this is below).
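To make this concrete, here is roughly what the listener page does. This is only a sketch: the WebSocket path (ws://my-domain/status) and the JSON message shape ({ type, url }) are placeholders for illustration, not my exact API:

```jsx
// Listener page: show "OFFLINE" until the server announces a stream, then play it.
import { useEffect, useState } from "react";

function Player() {
  const [streamUrl, setStreamUrl] = useState(null);

  useEffect(() => {
    // Status-only WebSocket; the audio itself is fetched over plain HTTP.
    const ws = new WebSocket("ws://my-domain/status");
    ws.onmessage = (event) => {
      const msg = JSON.parse(event.data);
      if (msg.type === "stream-started") setStreamUrl(msg.url);
      if (msg.type === "stream-finished") setStreamUrl(null);
    };
    return () => ws.close();
  }, []);

  if (!streamUrl) return <p>OFFLINE</p>;

  return (
    <>
      <p>The stream is started</p>
      {/* autoPlay can be blocked by browsers until the user interacts with the page */}
      <audio src={streamUrl} autoPlay controls />
    </>
  );
}

export default Player;
```

One thing I'm unsure about is autoplay: browsers may block it until the user has interacted with the page, so the "automatically start playback" part might need a fallback play button.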
How I currently do it: My App Server uses two protocols: HTTP (for audio streaming and other stuff) and WebSocket (only for sending JSON status messages about what happens on the server).
- When the Broadcaster starts streaming to the App Server (over HTTP), the App Server sends a WebSocket message to the React App: "the stream has started, you can access it at http://my-domain/stream", i.e. the App Server streams the audio to React over regular HTTP.
- The React App sees this message, renders an HTML <audio> element, and starts playing the audio.
- When the Broadcaster has stopped streaming, the App Server sends a WebSocket message to the React App: "the stream is finished", and React hides the player, displaying "OFFLINE" again.
So, I do all streaming (both from the Broadcaster to the App Server and from the App Server to the React client) over HTTP, and use WebSocket only to communicate real-time stream state updates (a rough sketch of that status channel is below).
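On the server side, the status channel is conceptually something like the following (Node with Express and the ws package). The /status path, the /broadcast/start and /broadcast/stop routes, and the message shapes are placeholders; the two routes just stand in for however the App Server actually detects the Broadcaster starting and stopping its HTTP stream:

```js
// App Server sketch: HTTP for audio, a separate WebSocket used only for status JSON.
const express = require("express");
const { WebSocketServer, WebSocket } = require("ws");

const app = express();
const server = app.listen(8080);
const wss = new WebSocketServer({ server, path: "/status" });

// Push a JSON status message to every connected listener.
function broadcast(msg) {
  const data = JSON.stringify(msg);
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) client.send(data);
  }
}

// Placeholder hooks for where the Broadcaster's stream starts and stops.
app.post("/broadcast/start", (req, res) => {
  broadcast({ type: "stream-started", url: "http://my-domain/stream" });
  res.sendStatus(200);
});

app.post("/broadcast/stop", (req, res) => {
  broadcast({ type: "stream-finished" });
  res.sendStatus(200);
});
```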
How good is this architecture?