I want to broadcast live music from a server to around 100 mobile phone clients in a local area network. The goal is a setup known from silent discos, but over IP with mobile phones as receivers. The listeners use headphones; perfect sync is not required, and a delay of 1-3 seconds would be acceptable.

My first setup used Icecast (TCP-based), which led to good music quality but high delay (4-50 sec). My second approach uses the Janus WebRTC server (with the streaming plugin), which achieves sub-second delay, but the audio quality is only mediocre (tuned for voice, with inconsistent playback speed).

I found this issue describing an SRT server that supports multiple client connections.

Should I optimize my Janus/WebRTC approach for music, or try to build a solution with SRT, or is there an even better protocol/solution?

– Fabian

1 Answer

I would recommend WebRTC. You can pull the feed in a browser, so you don't need to install a client on all those 100 phones.

How are you publishing the audio for WebRTC (to your Janus server)? If possible I would use a WebRTC Agent where you have greater control.

Janus with the streaming plugin provides a really easy way to publish via GStreamer or ffmpeg, and you get greater control over the audio quality that way.
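
For example, a minimal sketch of the ffmpeg route, assuming a mountpoint that expects Opus over RTP; the input file, port 5002 and payload type 111 are placeholders for illustration, not values from this answer:

    # Hypothetical example: 48 kHz stereo Opus over RTP to a Janus
    # streaming-plugin mountpoint listening on 127.0.0.1:5002.
    # Input file, port and payload type are placeholders.
    ffmpeg -re -i music.flac \
      -vn -ac 2 -ar 48000 \
      -c:a libopus -b:a 128k -application audio \
      -payload_type 111 \
      -f rtp rtp://127.0.0.1:5002

ffmpeg also prints the SDP it generates for the RTP session, which is handy for matching the mountpoint's rtpmap/fmtp settings.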

– Sean DuBois

  • Thanks for your answer! I am publishing the audio via GStreamer, so I will focus on optimising WebRTC audio transmission for music instead of voice. My initial idea for an SRT stream was to 'force' all listeners to install the VLC player app on their phones. – Fabian Jan 05 '22 at 18:04
  • Mind sharing your GStreamer pipeline? I think the only thing you will need to tweak is the audio parameters. I am not familiar with libwebrtc and if it is doing any post-decode processing that could be lowering quality :/ – Sean DuBois Jan 06 '22 at 15:36
  • Sure! My GStreamer pipeline is: gst-launch-1.0 osxaudiosrc device=60 ! audioresample quality=10 ! audio/x-raw,channels=2,rate=48000 ! opusenc bandwidth=fullband frame-size=60 bitrate=64000 ! rtpopuspay ! udpsink host=127.0.0.1 port=5002 – Fabian Jan 06 '22 at 18:34
  • It turned out that the demo code of the Janus WebRTC streaming plugin did not output stereo sound, so the Opus stereo audio was downmixed to mono, which sounded muffled. I managed to alter the negotiated SDP (SDP munging) in the streaming plugin JS demo code to support stereo, and the sound quality is now awesome! – Fabian Jan 08 '22 at 11:19
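
A quick way to rule out the capture side when chasing a mono/stereo problem like the one above is to decode the same RTP/Opus feed locally before it reaches Janus. A minimal sketch, assuming the sender pipeline quoted in the comments (port 5002, the default rtpopuspay payload type 96); run it with Janus stopped, or point the sender's udpsink at a spare port for the test:

    # Hypothetical local check: receive the RTP/Opus stream on port 5002,
    # decode it and play it back, to confirm the feed is really stereo
    # before it reaches the Janus streaming plugin.
    gst-launch-1.0 udpsrc port=5002 \
        caps="application/x-rtp,media=audio,encoding-name=OPUS,clock-rate=48000,payload=96" ! \
      rtpjitterbuffer latency=100 ! \
      rtpopusdepay ! opusdec ! \
      audioconvert ! audioresample ! autoaudiosink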