
What is the correct way of using splitmuxsink in a dynamic pipeline?

Previously I used filesink to record (no problems whatsoever), but there is a requirement to save the recording in segments, so I tried to use splitmuxsink in a dynamic pipeline (recording is started and stopped asynchronously at run time). In doing so I have faced two problems:

  1. When I try to stop the recording, I use an idle pad probe to block the recording queue and launch a callback that unlinks the recording branch (send EOS, set the elements in the recording bin to NULL, then remove the bin from the pipeline). I have also set a downstream event probe to notify me that the EOS has reached the splitmuxsink's sink pad before I do the second step (setting the elements to NULL).

    However, the end result is that I still get an empty last file (0 bytes). It seems that the branch is not being closed properly, or there is some other problem. As a workaround I split the video immediately when the recording stops (though I lose a few frames).

    How should one stop recording on a dynamic branch? (A sketch of the stop sequence I am attempting is shown right after this list.)

  2. I tried to create the recording bin when the recording starts (using the pad-added signal, emitted when the tee's request pad is created, to connect the recording bin). Previously I created the recording bin up front (not inside the GLib loop that I have created) and that worked, but with the current approach the splitmuxsink's internal filesink ends up in a locked state.

    How should I work around this? What causes the locked state? (See the attach sketch after the code below.)
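
For problem 1, the stop sequence I am attempting looks roughly like this (a minimal sketch only: tee, tee_src_pad, records and pipeline are assumed to be globals from the rest of the program, the splitmuxsink sink pad requested by gst_element_link_many is assumed to be the one named "video", and error handling is omitted):

/* probe on the splitmuxsink sink pad: fires once the EOS pushed into the
 * branch has travelled through queue -> enc -> parser */
static GstPadProbeReturn
on_branch_eos (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    if (GST_EVENT_TYPE (GST_PAD_PROBE_INFO_EVENT (info)) != GST_EVENT_EOS)
        return GST_PAD_PROBE_OK;

    /* tear the branch down: this is the point where the last file
     * still ends up empty (0 bytes) */
    gst_element_set_state (records.recording, GST_STATE_NULL);
    gst_bin_remove (GST_BIN (pipeline), records.recording);
    gst_element_release_request_pad (tee, tee_src_pad);
    gst_object_unref (tee_src_pad);
    return GST_PAD_PROBE_REMOVE;
}

/* idle probe on the tee src pad: data flow is blocked while this runs,
 * so it is safe to unlink the recording branch and push EOS into it */
static GstPadProbeReturn
on_tee_src_blocked (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    GstPad *splitmux_pad;

    gst_pad_unlink (tee_src_pad, records.ghost_pad);

    splitmux_pad = gst_element_get_static_pad (records.sink, "video");
    gst_pad_add_probe (splitmux_pad, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
                       on_branch_eos, NULL, NULL);
    gst_object_unref (splitmux_pad);

    gst_pad_send_event (records.ghost_pad, gst_event_new_eos ());
    return GST_PAD_PROBE_REMOVE;
}

/* called from cmd_loop() when the record command is turned off */
static void
stop_recording (void)
{
    gst_pad_add_probe (tee_src_pad, GST_PAD_PROBE_TYPE_IDLE,
                       on_tee_src_blocked, NULL, NULL);
}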

Here is my code

/// create record bin
static void
pad_added(GstElement * self,
               GstPad * new_pad,
               gpointer user_data)
{
    char* pad_name = gst_pad_get_name(new_pad);
    // tee request pads are named "src_%u", so match on the prefix
    if(g_str_has_prefix(pad_name,"src"))
    {

        //RECORD records;
        records.recording = gst_bin_new("recording");
        records.queue     = gst_element_factory_make("queue","queue");
        records.enc       = gst_element_factory_make("vpuenc_h264","enc");
        records.parser    = gst_element_factory_make("h264parse","parser");
        records.sink      = gst_element_factory_make("splitmuxsink","sink");


        // add the elements to the recording bin
        gst_bin_add_many(GST_BIN(records.recording),
                                 records.queue,
                                 records.enc,
                                 records.parser,
                                 records.sink,NULL);

        // link up the recording elements
        gst_element_link_many(records.queue,
                                  records.enc,
                                  records.parser,
                                  records.sink,NULL);
    
        
        // splitmuxsink properties (set on records.sink, the splitmuxsink)
        g_object_set(G_OBJECT(records.sink),
                     //"location","video_%d.mp4",
                     "max-size-time", (guint64) 10L * GST_SECOND,
                     "async-handling", TRUE,
                     "async-finalize", TRUE,
                     NULL);

        records.queue_sink_pad = gst_element_get_static_pad (records.queue, "sink");
        records.ghost_pad = gst_ghost_pad_new ("sink", records.queue_sink_pad);
        gst_pad_set_active(records.ghost_pad, TRUE);

        gst_element_add_pad(GST_ELEMENT(records.recording),records.ghost_pad);

        g_signal_connect (records.sink, "format-location",
                         (GCallback)format_location_callback,
                         &records);
        
    }
    g_free(pad_name);
}

gboolean cmd_loop()
{
   // other cmd not shown here

   if(RECORD)
   {
      // request a new tee src pad (the tee's pad template is "src_%u")
      // this step will trigger the pad_added callback
      tee_src_pad = gst_element_get_request_pad (tee,"src_%u");

      // ....other function
   }

   return G_SOURCE_CONTINUE; // keep the 1 s timeout source alive
}

int main()
{
   // add the pad-added signal response
   g_signal_connect(tee, "pad-added", G_CALLBACK(pad_added), NULL);

  // use to construct the loop (cycle every 1s)
   GSource* source = g_timeout_source_new(1000);

   // set function to watch for command 
   g_source_set_callback(source,
                          (GSourceFunc)cmd_loop,
                          NULL,
                          NULL);

   // attach the source to the default main context so cmd_loop is called
   g_source_attach(source, NULL);
}
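
For problem 2, this is roughly how the recording bin gets attached to the already-running pipeline once pad_added() has built it (again only a sketch with the same assumed globals; the gst_element_sync_state_with_parent() call is my understanding of what a bin added to a PLAYING pipeline needs, and whether it interacts with the filesink's locked state is exactly what I am unsure about):

/* attach the recording bin built in pad_added() to the running pipeline */
static void
attach_recording_bin (void)
{
    gst_bin_add (GST_BIN (pipeline), records.recording);
    gst_pad_link (tee_src_pad, records.ghost_pad);

    /* elements added to an already-PLAYING pipeline stay in NULL unless
     * their state is synced with the parent */
    gst_element_sync_state_with_parent (records.recording);
}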


  • Have you tried sending the EOS event directly to your encoder? I had faced a similar issue with splitmuxsink, so I had to send an EOS event to the encoder, and the process exited without getting locked and my video was fine as well. A quick hack can be to change your splitmuxsink's muxer to `matroskamux`. With that your last file will still be readable. (This is not a solution but a workaround) – votelessbubble Apr 12 '22 at 09:05
  • @marmikshah thanks for your answer, will try it out and report on how it goes – user1538798 Apr 12 '22 at 09:12
  • @votelessbubble hi, just some results so far: matroskamux does not seem to work for me; it actually crashes the pipeline if I g_object_set(splitmuxsink, "muxer-factory", "matroskamux", NULL);. As for sending EOS to the encoder, I am still unable to see the element's EOS message on the bus, but somehow I can loop with the above code (meaning run a few record on/off cycles until the pipeline crashes, with the muxer, I think, complaining that it is unable to multiplex the stream) – user1538798 Apr 18 '22 at 13:01
  • Oh sorry I should've mentioned earlier. Please use the muxer property and not muxer-factory. https://gstreamer.freedesktop.org/documentation/multifile/splitmuxsink.html?gi-language=python#splitmuxsink:muxer – votelessbubble Apr 19 '22 at 01:58
  • If you use muxer-factory, then you will need to set async-finalize property to True. I do not have many observations with these property combinations. – votelessbubble Apr 19 '22 at 01:59
  • I think the muxer problem is due to buffers still sitting in the muxer that cannot be flushed out – user1538798 Apr 19 '22 at 02:05
  • Yes, you're right. matroskamux will simply make your video file playable even if your pipeline exits non-gracefully. I have a Python implementation of sending EOS to the pipeline in case you want to refer to it. – votelessbubble Apr 19 '22 at 04:07
  • So, firstly I send an EOS to the pipeline using `pipeline.send_event(Gst.Event.new_eos())`, then I explicitly send EOS to any encoder that I have using `encoder.send_event(Gst.Event.new_eos())`. This is especially useful when you're using CPU-based encoders. Finally, I wait for the EOS message on the bus using `bus_callback(bus, bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS))`; bus_callback here is my function to handle all bus messages. – votelessbubble Apr 19 '22 at 04:10
  • @votelessbubble I am a little puzzled over why you have to send EOS to the entire pipeline when an EOS on just the branch should suffice (I need to turn recording on and off dynamically). As such, is there a sequence for placing the probes (since I need two here, one on the dynamic branch and another on the splitmuxsink) so that the exit of one does not cause the other to exit as well? – user1538798 Apr 19 '22 at 04:47
  • So in my case, the user can actually configure the pipeline with DeepStream-specific elements and choose CPU/HW-based encoders. When all my encoders are GPU/HW-based, sending EOS to the pipeline exits gracefully. But in cases where I have CPU-based encoders, the pipeline simply freezes on EOS, so I need to explicitly send EOS to those elements. The user is also free to choose from 4-5 different kinds of sink, some of which do not have encoders in them, so sending EOS to the pipeline ensures that the pipeline will exit when those kinds of sinks are used. – votelessbubble Apr 19 '22 at 04:59
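
For reference, a C equivalent of the EOS sequence suggested in these comments would look roughly like this (a sketch only, not verified against the empty-file problem; pipeline and records.enc are assumed to be the objects from the question):

/* send EOS to the whole pipeline, then directly to the encoder, then wait
 * for the EOS (or an error) message on the bus before shutting down */
static void
send_eos_and_wait (void)
{
    GstBus *bus;
    GstMessage *msg;

    gst_element_send_event (pipeline, gst_event_new_eos ());
    gst_element_send_event (records.enc, gst_event_new_eos ());

    bus = gst_element_get_bus (pipeline);
    msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
                                      GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
    if (msg != NULL)
        gst_message_unref (msg);
    gst_object_unref (bus);
}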
