Environment: Docker, Ubuntu 20.04, OpenCV 3.5.4, FFmpeg 4.2.4
I'm currently reading the output of a cv2.VideoCapture session using the cv2.CAP_FFMPEG backend and successfully writing it back out in real time to a file using cv2.VideoWriter. I'm doing this to draw bounding boxes on the input and save the result to a new output file.
The problem is that I'm doing this in a headless environment (a Docker container), and I'd like to view what's being written to cv2.VideoWriter in real time.
I know there are ways to pass my display through, using XQuartz for example, so I could use cv2.imshow. But what I really want is to write those frames to an RTSP server, so that not only my host but other hosts could "watch" the stream too.
After the video is released, I can easily stream it to my RTSP server using this command:
ffmpeg -re -stream_loop -1 -i output.mp4 -c copy -f rtsp rtsp://rtsp_server_host:8554/stream
Is there any way to pipe the frames into the above command as they come in? Can cv2.VideoWriter itself write frames to an RTSP server?
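One direction I've been considering is spawning ffmpeg as a subprocess and writing raw BGR frames to its stdin instead of (or alongside) the cv2.VideoWriter. A sketch of what I have in mind; the helper name ffmpeg_rtsp_cmd and the encoder settings (libx264, ultrafast/zerolatency) are my own assumptions, not something I've verified end to end:

```python
import subprocess

def ffmpeg_rtsp_cmd(width, height, fps, url):
    # Assumed ffmpeg invocation: raw BGR24 frames on stdin, H.264 out to RTSP.
    return [
        "ffmpeg",
        "-f", "rawvideo",
        "-pix_fmt", "bgr24",
        "-s", f"{width}x{height}",
        "-r", str(fps),
        "-i", "-",            # read frames from stdin
        "-c:v", "libx264",
        "-preset", "ultrafast",
        "-tune", "zerolatency",
        "-f", "rtsp",
        url,
    ]

# Intended usage inside the capture loop (not run here, since it needs
# ffmpeg and a reachable RTSP server):
#
# proc = subprocess.Popen(
#     ffmpeg_rtsp_cmd(w, h, fps, "rtsp://rtsp_server_host:8554/stream"),
#     stdin=subprocess.PIPE,
# )
# proc.stdin.write(frame.tobytes())  # for each BGR numpy frame
```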
Any ideas would be much appreciated! Thank you.