Platform: Jetson Nano B01, OS: Ubuntu 18.04, Camera: Raspberry Pi Camera Module v2.1 (IMX219, CSI interface)
Problem overview: My team is developing a machine vision application that needs to record video at a high frame rate (>=120 fps) while running live inference on the same stream at a low rate (~2 fps). Is there a GStreamer element we could use to pull a frame out of the pipeline at set intervals and save it to disk?
Current GStreamer pipeline:
gst-launch-1.0 nvarguscamerasrc num-buffers=-1 gainrange="1 1" ispdigitalgainrange="2 2" ! 'video/x-raw(memory:NVMM), width=1280, height=720, framerate=120/1, format=NV12' ! omxh264enc ! qtmux ! filesink location=test1.mp4 -e
Additional info: The idea is to have a function loop continuously, checking a specific location for a new image file; when it detects one, it sends the image to the neural net for inference and then deletes the file. We had moderate success with this approach on x86 machines using multi-threaded recording in OpenCV, but as far as we can tell the Jetson Nano doesn't have enough CPU power to meet our requirements with OpenCV.
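For reference, the watcher loop we have in mind is roughly the following sketch. The file extensions, poll interval, and the handle_frame callback (our stand-in for the call into the neural net) are all placeholders, not part of any existing API:

```python
import os
import time

def watch_for_frames(watch_dir, handle_frame, poll_interval=0.1, max_polls=None):
    """Poll watch_dir for new image files; pass each to handle_frame, then delete it.

    handle_frame is a hypothetical callback standing in for the inference step.
    max_polls is only there so the loop can terminate in tests; in production it
    would run forever (max_polls=None).
    """
    polls = 0
    while max_polls is None or polls < max_polls:
        for name in sorted(os.listdir(watch_dir)):
            if name.endswith((".jpg", ".png")):
                path = os.path.join(watch_dir, name)
                handle_frame(path)  # run inference on the saved frame
                os.remove(path)     # delete so the same frame is not reprocessed
        time.sleep(poll_interval)
        polls += 1
```

On Linux, inotify (e.g. via the watchdog package) would avoid the polling overhead, but at ~2 fps a simple poll is cheap enough.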
The pipeline above records video that meets our required specs, but does not save still images to be used for inference.
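One approach we are considering, but have not yet tested on the Nano, is to split the stream with tee: one branch keeps the 120 fps H.264 recording, the other drops to ~2 fps with videorate and writes numbered JPEGs with multifilesink. All elements here are standard GStreamer/Jetson plugins, but the exact caps and queue settings are guesses on our part:

```shell
gst-launch-1.0 nvarguscamerasrc gainrange="1 1" ispdigitalgainrange="2 2" ! \
  'video/x-raw(memory:NVMM), width=1280, height=720, framerate=120/1, format=NV12' ! \
  tee name=t \
  t. ! queue ! omxh264enc ! qtmux ! filesink location=test1.mp4 \
  t. ! queue leaky=downstream ! nvvidconv ! 'video/x-raw, format=I420' ! \
       videorate drop-only=true ! 'video/x-raw, framerate=2/1' ! \
       jpegenc ! multifilesink location=frame_%05d.jpg \
  -e
```

The leaky queue on the snapshot branch is meant to keep a slow JPEG branch from stalling the recording branch, and videorate drop-only=true discards frames rather than duplicating them. On the Jetson, nvjpegenc could presumably replace jpegenc to use the hardware JPEG encoder.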