
While developing a streaming audio application, I used the gst-launch-1.0 command-line tool to generate an MPEG transport stream for testing. This worked as intended: I was able to serve the stream from a simple HTTP server and hear it using the VLC media player. I then tried to replicate the encoding part of that stream in Python GStreamer code. The Python version connected to the server OK, but no audio could be heard. I'm trying to understand why the command-line implementation worked but the Python one did not. I am working on Mac OS 10.11 with Python 2.7.

The command line that worked was as follows:

gst-launch-1.0 audiotestsrc freq=1000 ! avenc_aac ! aacparse ! mpegtsmux ! tcpclientsink host=127.0.0.1 port=9999

The Python code that creates the GStreamer pipeline is below. It instantiated without producing any errors and connected successfully to the HTTP server, but no sound could be heard through VLC. I verified that the AppSrc in the Python code was working by using it with a separate GStreamer pipeline that played the audio directly; that worked fine.

def create_mpeg2_pipeline():
    play = Gst.Pipeline()
    src = GstApp.AppSrc(format=Gst.Format.TIME, emit_signals=True)
    src.connect('need-data', need_data, samples())  # need_data and samples defined elsewhere
    play.add(src)

    capsFilterOne = Gst.ElementFactory.make('capsfilter', 'capsFilterOne')
    capsFilterOne.props.caps = Gst.Caps('audio/x-raw, format=(string)S16LE, rate=(int)44100, channels=(int)2')
    play.add(capsFilterOne)
    src.link(capsFilterOne)

    audioConvert = Gst.ElementFactory.make('audioconvert', 'audioConvert')
    play.add(audioConvert)
    capsFilterOne.link(audioConvert)

    capsFilterTwo = Gst.ElementFactory.make('capsfilter', 'capsFilterTwo')
    capsFilterTwo.props.caps = Gst.Caps('audio/x-raw, format=(string)F32LE, rate=(int)44100, channels=(int)2')
    play.add(capsFilterTwo)
    audioConvert.link(capsFilterTwo)

    aacEncoder = Gst.ElementFactory.make('avenc_aac', 'aacEncoder')
    play.add(aacEncoder)
    capsFilterTwo.link(aacEncoder)

    aacParser = Gst.ElementFactory.make('aacparse', 'aacParser')
    play.add(aacParser)
    aacEncoder.link(aacParser)

    mpegTransportStreamMuxer = Gst.ElementFactory.make('mpegtsmux', 'mpegTransportStreamMuxer')
    play.add(mpegTransportStreamMuxer)
    aacParser.link(mpegTransportStreamMuxer)

    tcpClientSink = Gst.ElementFactory.make('tcpclientsink', 'tcpClientSink')
    tcpClientSink.set_property('host', '127.0.0.1')
    tcpClientSink.set_property('port', 9999)
    play.add(tcpClientSink)
    mpegTransportStreamMuxer.link(tcpClientSink)

    return play

My question is: how does the GStreamer pipeline I've implemented in Python differ from the command-line pipeline? And more generally, how do you debug this sort of thing? Does GStreamer have any 'verbose' mode?

Thanks.

Rob Goon

1 Answer


One question at a time:

1) How does it differ from gst-launch-1.0? It is hard to tell without seeing your full code, but I'll try to guess: gst-launch-1.0 does proper pad linking. When you have a muxer, as you do here, you can't link to it directly, because it is created without any sink pads; you need to request one before you can link. Take a look at dynamic pads: https://gstreamer.freedesktop.org/documentation/application-development/basics/pads.html
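For illustration, a minimal sketch of what the pad request could look like in Python, reusing the element names from your function (the request pad template on mpegtsmux is typically 'sink_%d'; gst-inspect-1.0 mpegtsmux will confirm it):

    # Ask the muxer to create a sink pad instead of relying on a plain link()
    mux_sink_pad = mpegTransportStreamMuxer.get_request_pad('sink_%d')
    # The parser's source pad is an ordinary static pad
    parser_src_pad = aacParser.get_static_pad('src')
    # Link pad-to-pad and check the result
    if parser_src_pad.link(mux_sink_pad) != Gst.PadLinkReturn.OK:
        print('Failed to link aacparse to mpegtsmux')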

Also, gst-launch-1.0 has error handling: it checks that every action succeeded and reports an error otherwise. I'd recommend adding a GstBus message handler so you are notified of error messages, at the very least. You should also check the return values of the GStreamer functions you call; that would let you catch this linking failure in your program (see the sketch below).
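A rough sketch of both suggestions, assuming play is the pipeline from your function and that a GLib main loop is running so the signal watch gets dispatched:

    def on_message(bus, message):
        # Report errors and warnings posted by any element in the pipeline
        if message.type == Gst.MessageType.ERROR:
            err, debug = message.parse_error()
            print('ERROR: %s\n%s' % (err, debug))
        elif message.type == Gst.MessageType.WARNING:
            err, debug = message.parse_warning()
            print('WARNING: %s\n%s' % (err, debug))

    bus = play.get_bus()
    bus.add_signal_watch()
    bus.connect('message', on_message)

    # Element.link() returns a boolean, so failed links are easy to catch
    if not aacParser.link(mpegTransportStreamMuxer):
        print('Could not link aacparse to mpegtsmux')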

2) GStreamer debugging? Mostly done by setting the GST_DEBUG environment variable: https://gstreamer.freedesktop.org/documentation/tutorials/basic/debugging-tools.html#the-debug-log

Run your application with: GST_DEBUG=6 ./yourapplication and you should see lots of logging.
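If you'd rather enable the logging from inside the Python script than from the shell, roughly the same thing can be done with the debug API (to the best of my knowledge):

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)
    Gst.debug_set_active(True)
    # Roughly equivalent to GST_DEBUG=6 (LOG level)
    Gst.debug_set_default_threshold(Gst.DebugLevel.LOG)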

thiagoss