I am using GStreamer (gst-launch) to capture a camera and save the stream both as a video file and as image frames. The problem is that when the recording is stopped (by an interrupt), the resulting video does not support position tracking or seeking, so VLC plays it with an unknown length. I think the problem is in the pipeline itself. How can I make the recording support position tracking and seeking?
Here is the gstreamer pipeline:
gst-launch -v --gst-debug-level=0 \
v4l2src device=/dev/video0 \
! videorate \
! video/x-raw-yuv, width=320, height=240, framerate=5/1 \
! tee name=tp \
tp. \
! queue \
! videobalance saturation=0.0 \
! textoverlay halign=left valign=top text="(c)PARK ON OM " shaded-background=true \
! clockoverlay halign=right valign=top time-format="%D %T " text="Date:" shaded-background=true \
! queue \
! ffmpegcolorspace \
! ffenc_mpeg4 \
! avimux \
! filesink location=/ram/pmc/recordCAM1.mp4 \
tp. \
! queue \
! jpegenc \
! multifilesink location=/ram/pmc/webcam1.jpeg &
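One variant I am considering (not yet verified) is running the same pipeline with gst-launch's `-e` / `--eos-on-shutdown` flag, which should forward an EOS event downstream on Ctrl-C so that avimux can finalize the file and write its index. A simplified sketch with only the recording branch (overlays dropped for brevity):

```shell
# Untested sketch: -e (--eos-on-shutdown) makes gst-launch send EOS on
# interrupt, which should let avimux close the file with a proper index.
gst-launch -e -v --gst-debug-level=0 \
    v4l2src device=/dev/video0 \
    ! videorate \
    ! video/x-raw-yuv, width=320, height=240, framerate=5/1 \
    ! queue \
    ! videobalance saturation=0.0 \
    ! ffmpegcolorspace \
    ! ffenc_mpeg4 \
    ! avimux \
    ! filesink location=/ram/pmc/recordCAM1.mp4
```

Is this the right approach, or does the full pipeline with the tee need something more?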
The pipeline works as follows:
v4l2src (/dev/video0)
  -> videorate + caps (320x240 @ 5 fps)
  -> tee name=tp
       tp. -> queue -> videobalance (grayscale) -> textoverlay/clockoverlay
            -> ffmpegcolorspace -> ffenc_mpeg4 -> avimux
            -> filesink (recordCAM1.mp4)
       tp. -> queue -> jpegenc -> multifilesink (webcam1.jpeg)
At the end, both branches of the tee write their output to disk (only the video branch goes through a muxer; note also that avimux produces an AVI container, so a .avi extension would be more accurate than .mp4). What should I add to the pipeline so that the recorded file supports position tracking in any media player?
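For recordings that are already truncated, one workaround I am aware of (assuming ffmpeg is available on the box) is remuxing with a stream copy, which usually regenerates the container index without re-encoding:

```shell
# Stream copy (no re-encode): ffmpeg rewrites the container and rebuilds
# the AVI index while copying, which typically restores seeking.
ffmpeg -i /ram/pmc/recordCAM1.mp4 -c copy /ram/pmc/recordCAM1-fixed.avi
```

But I would prefer a fix in the pipeline itself so new recordings are seekable directly.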
Regards