
I have a GStreamer pipeline with an appsink:

filesrc location=test.mp4 ! decodebin ! video/x-raw ! queue max-size-bytes=0 max-size-time=100000000 ! appsink name=appSink sync=false max-buffers=1 drop=false
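
For context, I build this pipeline with Gst.parse_launch and look the appsink up by name. A minimal sketch of that setup (all names except appSink are illustrative; in my class the sink is stored as self.__sink):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Build the input pipeline and look up the appsink by its name.
pipeline = Gst.parse_launch(
    "filesrc location=test.mp4 ! decodebin ! video/x-raw ! "
    "queue max-size-bytes=0 max-size-time=100000000 ! "
    "appsink name=appSink sync=false max-buffers=1 drop=false"
)
sink = pipeline.get_by_name("appSink")  # kept as self.__sink in my class
pipeline.set_state(Gst.State.PLAYING)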

I pull a sample from the appsink, then get the buffer from it, map it read-only, and store the map info (to access the raw memory later).

sample: Gst.Sample = self.__sink.pull_sample()
self.__buffer: Gst.Buffer = sample.get_buffer()
self.__buffer_map: Gst.MapInfo = self.__buffer.map(Gst.MapFlags.READ)

Then I would like to use the same data (without copying it) in an output pipeline, e.g.:

appsrc name=appSrc block=true ! video/x-raw,format=(string)NV12,width=1920,height=1080,framerate=30/1 ! videoconvert ! ximagesink
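
The output pipeline is set up the same way, continuing from the setup sketch above (again illustrative; the appsrc ends up as self.__src in my class):

# Build the output pipeline and look up the appsrc by its name.
out_pipeline = Gst.parse_launch(
    "appsrc name=appSrc block=true ! "
    "video/x-raw,format=(string)NV12,width=1920,height=1080,framerate=30/1 ! "
    "videoconvert ! ximagesink"
)
src = out_pipeline.get_by_name("appSrc")  # kept as self.__src in my class
out_pipeline.set_state(Gst.State.PLAYING)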

This is the best I could come up with to do that:

shared_buffer_memory = self.__buffer.get_all_memory()
buf = Gst.Buffer.new()
buf.insert_memory(-1, shared_buffer_memory)
self.__src.push_buffer(buf)
#sleep(0.05) 

If I do this, I see the first frame correctly but all of the other frames are green. That is expected, because the underlying memory of buf goes out of scope and is freed in the meantime. If I put sleep(0.05) after push_buffer, it displays all of the frames, but then I think the memory behind the buffer gets double-freed:

** (python3.9:6745): CRITICAL **: 09:23:54.645: gst_vaapi_image_unmap: assertion 'image != NULL' failed

I would like to get the frames from the appsink and then push them into output pipelines without copying. What would be the best approach to do this? (A single pipeline is not suitable in my case.)

Broothy

2 Answers


The answer is not mine; I got it on the #gstreamer IRC channel:

The documentation says the following:

AppSrc.push_buffer(buffer): Adds a buffer to the queue of buffers that the appsrc element will push to its source pad. This function takes ownership of the buffer.

This is what misled me: in the Python bindings, push_buffer does not actually take away your reference to the buffer, so one can simply push the same buffer to multiple AppSrc elements.
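
In practice this means the pulled buffer can be pushed as-is, without wrapping its memory in a new buffer. A minimal sketch of that (self.__src2 stands for a hypothetical second output appsrc):

sample: Gst.Sample = self.__sink.pull_sample()
buffer: Gst.Buffer = sample.get_buffer()

# push_buffer() does not invalidate the Python reference to the buffer,
# so the same buffer can be handed to several appsrc elements without
# copying the underlying data.
self.__src.push_buffer(buffer)
self.__src2.push_buffer(buffer)  # hypothetical second output pipeline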

Broothy

It seems like what you have is a thread that you want to complete before moving on with the rest of the code, for example:

from threading import Thread
from time import sleep

def a():
    print('start')
    sleep(1)
    print('middle')
    sleep(1)
    print('end')
c = Thread(target=a)
print('Hello')
c.start()
print('Bye')
sleep(3)

Output:

Hello
startBye

middle
end

Say c is the thread; you can add c.join() to your code at the right line so the program will wait for the thread to finish before continuing:

from threading import Thread
from time import sleep

def a():
    print('start')
    sleep(1)
    print('middle')
    sleep(1)
    print('end')
c = Thread(target=a)
print('Hello')
c.start()
c.join()
print('Bye')
sleep(3)

Output:

Hello
start
middle
end
Bye

UPDATE

Looking at the structure of your code:

shared_buffer_memory = self.__buffer.get_all_memory()
buf = Gst.Buffer.new()
buf.insert_memory(-1, shared_buffer_memory)
self.__src.push_buffer(buf)
#sleep(0.05) 

You say that it works with sleep(0.05), but it doesn't work properly without it. That can happen because of a thread that hasn't finished yet. In order for it to work, you'll need to wait for that thread by adding .join() somewhere in your code.

Red
  • Dear Ann, are you sure you posted your answer to the right thread? I do not see the connection between my question and your answer. – Broothy Nov 19 '20 at 14:15
  • @Broothy Edited to explain the connections. – Red Nov 19 '20 at 15:07
  • Dear Ann, thank you for the clarification. It is a thread synchronization issue under the hood, you are right. But that thread is a GStreamer internal, cannot be accessed outside the lib. The thread in question continues to run after I perform the `self.__src.push_buffer(buf)` call. A callback or other synchronization point would be helpful, but GStreamer does not offer such a mechanism. It is a very GStreamer specific problem, it cannot be solved with the mentioned method. – Broothy Nov 19 '20 at 15:25