
I am trying to run a pipeline that reads an H.264/AAC stream from RTSP and pushes it to an FFmpeg TCP socket (that FFmpeg instance will re-publish it as another RTSP stream; I know it's odd). Requirements:

  • The RTSP client at the start of this pipeline MUST be GStreamer (i.e. I can't use FFmpeg to read the RTSP stream, only to publish).
  • The RTSP client at the end of the pipeline MUST be FFmpeg.

My pipeline works for video but adding audio has been a challenge. Here's the current GStreamer pipeline:

gst-launch-1.0 rtspsrc location=rtsp://{camIp}/live name=cam \
  cam. ! rtph264depay ! h264parse ! queue ! mux. \
  cam. ! rtpmp4gdepay ! aacparse ! queue ! mux. \
  mpegtsmux name=mux ! tcpclientsink host=${ip} port=${port} sync=false
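
In case the RTP transport or missing in-band SPS/PPS is a factor, a variant of the same pipeline I could try is sketched below; the protocols=tcp, latency=200, and config-interval=-1 settings are guesses on my part, not confirmed fixes:

# Same pipeline, but forcing RTP over TCP and re-inserting SPS/PPS at every IDR
# (the latency and config-interval values here are guesses, not confirmed fixes)
gst-launch-1.0 rtspsrc location=rtsp://{camIp}/live protocols=tcp latency=200 name=cam \
  cam. ! rtph264depay ! h264parse config-interval=-1 ! queue ! mux. \
  cam. ! rtpmp4gdepay ! aacparse ! queue ! mux. \
  mpegtsmux name=mux ! tcpclientsink host=${ip} port=${port} sync=false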

Then, FFmpeg is listening like this:

ffmpeg -f mpegts -i tcp://${ip}:${port} \
  -rtsp_transport tcp -vcodec copy -an -f rtsp rtsp://${rtspIp}:${rtspPort}/d8a5bb19e63326d7
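
For reference, once audio works I would expect the FFmpeg side to copy the AAC track rather than dropping it with -an, roughly as sketched below; the explicit -map arguments and the enlarged probe window are assumptions about how FFmpeg sees the transport stream, not something I've verified:

# Sketch: copy both tracks instead of dropping audio with -an; the -map indices
# and the analyzeduration/probesize values are assumptions, not verified settings
ffmpeg -analyzeduration 10M -probesize 10M -f mpegts -i tcp://${ip}:${port} \
  -map 0:v:0 -map 0:a:0 -c:v copy -c:a copy \
  -rtsp_transport tcp -f rtsp rtsp://${rtspIp}:${rtspPort}/d8a5bb19e63326d7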

This pipeline works if I remove the audio branch (cam. ! rtpmp4gdepay ! aacparse ! queue ! mux.) from my GStreamer chain. With audio added, however, FFmpeg never publishes any data to my RTSP client; it starts probing the GStreamer output and then exits for no apparent reason.
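
For what it's worth, this is how I would inspect what FFmpeg actually sees on the socket (run in place of the publishing FFmpeg instance, against the same GStreamer output); the enlarged probe window is only a guess at why probing might be failing:

# Diagnostic only: dump the streams FFmpeg detects in the incoming transport
# stream (the enlarged analyzeduration/probesize values are guesses)
ffprobe -loglevel debug -analyzeduration 10M -probesize 10M \
  -show_streams -f mpegts tcp://${ip}:${port}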

Brian Schrameck