I'm trying to implement a client/server application based on FFmpeg. Unfortunately, rtp_mpegts isn't documented in the official FFmpeg Documentation - Formats. However, I found inspiration in this old thread.
Server Side
(1) Capture mic audio as input, (2) encode it as PCM 8 kHz mono, and (3) send it locally in rtp_mpegts format over the RTP protocol.
ffmpeg -f avfoundation -i none:2 -ar 8000 -acodec pcm_u8 -ac 1 -f rtp_mpegts rtp://127.0.0.1:41954
- This works, but at startup it prints the warning "[mpegts @ 0x7fda13024600] frame size not set"
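I suspect the warning comes from muxing raw PCM, which has no fixed frame size, into MPEG-TS. As a sketch (my assumption, not something I've confirmed), re-encoding to AAC, a codec MPEG-TS carries natively, might avoid it; the device name none:2 and the 8 kHz mono settings are taken from the command above:

# re-encode the mic to AAC so the TS muxer gets a standard audio stream type
ffmpeg -f avfoundation -i none:2 -ar 8000 -ac 1 -acodec aac -f rtp_mpegts rtp://127.0.0.1:41954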
Client Side (on the same machine)
(1) Receive the RTP audio stream as input and (2) write it to a file or play it back.
ffmpeg -i rtp://127.0.0.1:41954 -vcodec copy -y "output.wav"
- I'm using -vcodec copy because I've already verified, on another RTP stream, that -acodec copy didn't work. This command hangs, and when I close it with Ctrl+C it prints:
Input #0, rtp, from 'rtp://127.0.0.1:41954':
  Duration: N/A, start: 8.956122, bitrate: N/A
  Program 1
    Metadata:
      service_name    : Service01
      service_provider: FFmpeg
    Stream #0:0: Data: bin_data ([6][0][0][0] / 0x0006)
Output #0, wav, to 'output.wav':
Output file #0 does not contain any stream
- I can't tell whether the client didn't receive any stream at all, or whether it received it but couldn't write the RTP packets into the "output.wav" file. (Is this a client or a server problem?)
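My guess from the log is that pcm_u8 isn't a standard MPEG-TS stream type, so the muxer sent it as private data and the client only sees Data: bin_data, which neither -vcodec copy nor -acodec copy can turn into a playable file. Assuming the server were switched to a codec MPEG-TS recognizes (e.g. the AAC variant sketched above), I'd expect something like this to work on the client:

ffmpeg -i rtp://127.0.0.1:41954 -acodec copy -y output.aac

or, since WAV can't hold AAC, transcoding back to PCM:

ffmpeg -i rtp://127.0.0.1:41954 -acodec pcm_s16le -y output.wav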
The old thread describes a workaround: the server runs two ffmpeg instances, one producing a "tmp.ts" file with the mpegts muxer, and the other taking "tmp.ts" as input and streaming it over RTP. Is that possible?
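As I understand it, the workaround would look like this (a sketch only; I'm not sure that reading tmp.ts while the first instance is still writing it works reliably):

# instance 1: capture the mic and write an MPEG-TS file
ffmpeg -f avfoundation -i none:2 -ar 8000 -acodec pcm_u8 -ac 1 -f mpegts tmp.ts

# instance 2: read tmp.ts at its native rate (-re) and stream it over RTP
ffmpeg -re -i tmp.ts -acodec copy -f rtp_mpegts rtp://127.0.0.1:41954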
Is there a better way to implement this client/server setup with the lowest possible latency?
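For example, would plain RTP without the MPEG-TS layer be a lower-latency option? A sketch of what I have in mind (entirely my assumption): mu-law PCM has a standard RTP payload type, and the server can write an SDP file that the client then reads:

# server: plain RTP with mu-law PCM, describing the session in stream.sdp
ffmpeg -f avfoundation -i none:2 -ar 8000 -ac 1 -acodec pcm_mulaw -f rtp rtp://127.0.0.1:41954 -sdp_file stream.sdp

# client: open the session via the SDP file and decode to WAV
ffmpeg -protocol_whitelist file,udp,rtp -i stream.sdp -acodec pcm_s16le -y output.wav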
Thanks for any help provided.