
I have Python client code that receives a video stream transmitted using VLC or OBS Studio.

Client code:

import cv2

target_url = 'udp://@0.0.0.0:1235'
stream = cv2.VideoCapture(target_url)
while True:
    r, f = stream.read()
    if r:
        cv2.imshow('IP Camera stream', f)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

It is able to read and display the video stream transmitted with VLC from another machine. Now I want to create the video server app myself instead of using VLC. I tried cv2.VideoWriter, but it only writes to local files, not to a udpsink. Browsing the net, I found a few Stack Overflow answers: one suggests pyzmq [Ref 1], which uses TCP, and another suggests manually creating and handling a socket [Ref 2], which won't work for me because the client should be able to receive from both VLC and the custom app.

Then I got to know about NetGear [Ref 3], which is a great tool, but it doesn't support UDP because it internally uses pyzmq [Ref 4].

Basically I am looking for something like cv2.VideoWriter('udp://192.168.1.2:5000', fourcc, ..).

Question: Is there a way in which the live camera feed can be encoded to H264 with a chosen bitrate and fps, then transmitted over UDP so that it can be received using cv2.VideoCapture('udp://@0.0.0.0:5000')?
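To make the goal concrete: the closest approach I can imagine is piping raw BGR frames from OpenCV into an ffmpeg subprocess that encodes to H.264 and muxes into MPEG-TS over UDP (the container VLC streams in), though I'm not sure it's the right way. A hypothetical sketch — ffmpeg_udp_command is my own helper name, not an existing API:

```python
import subprocess

def ffmpeg_udp_command(width, height, fps, target):
    """Build an ffmpeg command that reads raw BGR frames on stdin,
    encodes them with libx264 and muxes into MPEG-TS over UDP."""
    return [
        "ffmpeg",
        "-f", "rawvideo", "-pix_fmt", "bgr24",   # raw frames from OpenCV
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", "-",                               # read frames from stdin
        "-c:v", "libx264", "-preset", "ultrafast", "-tune", "zerolatency",
        "-b:v", "2M",                            # target bitrate
        "-f", "mpegts", target,                  # e.g. "udp://192.168.1.2:5000"
    ]

# Usage (untested): spawn ffmpeg and write each captured frame to its stdin
# proc = subprocess.Popen(
#     ffmpeg_udp_command(640, 480, 30, "udp://192.168.1.2:5000"),
#     stdin=subprocess.PIPE)
# proc.stdin.write(frame.tobytes())  # frame from cv2.VideoCapture
```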

[Ref 1] Python Opencv and Sockets - Streaming video encoded in h264

[Ref 2] https://stackoverflow.com/a/63717263/12455023

[Ref 3] https://stackoverflow.com/a/57204835/12455023

[Ref 4] https://github.com/abhiTronix/vidgear/issues/281

1 Answer

I'd suggest using GStreamer for this. You may try:

#!/usr/bin/env python

import cv2
print(cv2.__version__)

# Uncomment this to check whether your OpenCV build has GSTREAMER support
#print(cv2.getBuildInformation())


cap = cv2.VideoCapture("udpsrc port=5000 ! application/x-rtp,media=video,encoding-name=H264 ! queue ! rtpjitterbuffer latency=500 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! video/x-raw,format=BGR ! queue ! appsink drop=1", cv2.CAP_GSTREAMER)

# For NVIDIA using NVMM memory 
#cap = cv2.VideoCapture("udpsrc port=5000 ! application/x-rtp,media=video,encoding-name=H264 ! queue ! rtpjitterbuffer latency=500 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! queue ! appsink drop=1", cv2.CAP_GSTREAMER)

width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
#fps = cap.get(cv2.CAP_PROP_FPS) #doesn't work with python in my case so forcing below...you may have to adjust for your case
fps = 30

if not cap.isOpened():
    print('Failed to open camera')
    exit()

print('Source opened, framing %dx%d@%d' % (width,height,fps))


writer = cv2.VideoWriter("appsrc ! video/x-raw,format=BGR ! queue ! videoconvert ! x264enc insert-vui=1 ! h264parse ! rtph264pay ! udpsink port=5001", cv2.CAP_GSTREAMER, 0, float(fps), (int(width),int(height))) 

# For NVIDIA using NVMM memory 
#writer = cv2.VideoWriter("appsrc ! video/x-raw,format=BGR ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc insert-sps-pps=1 insert-vui=1 ! h264parse ! rtph264pay ! udpsink port=5001", cv2.CAP_GSTREAMER, 0, float(fps), (int(width),int(height))) 

if not writer.isOpened():
    print('Failed to open writer')
    cap.release()
    exit()


while True:
    ret_val, img = cap.read()
    if not ret_val:
        break

    writer.write(img)
    cv2.waitKey(1)

writer.release()
cap.release()
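As a side note, the GStreamer requirement hinted at by the commented print(cv2.getBuildInformation()) line can be checked programmatically. A small helper (my own naming, not an OpenCV API) that scans the build-info text for the GStreamer entry:

```python
def has_gstreamer(build_info):
    """Return True if the cv2.getBuildInformation() text reports
    GStreamer support as enabled (a 'GStreamer: YES' line)."""
    for line in build_info.splitlines():
        stripped = line.strip()
        if stripped.startswith("GStreamer") and "YES" in stripped:
            return True
    return False

# Usage:
# import cv2
# print(has_gstreamer(cv2.getBuildInformation()))
```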

This should stream to localhost on port 5001, and you should be able to receive on a Linux host running X (expect up to 10 seconds for setup) with:

gst-launch-1.0 udpsrc port=5001 ! application/x-rtp,media=video,encoding-name=H264 ! queue ! rtpjitterbuffer latency=500 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! xvimagesink

If you want to stream to a given host, set the host property of udpsink while disabling auto-multicast:

writer = cv2.VideoWriter("appsrc ! video/x-raw,format=BGR ! queue ! videoconvert ! x264enc insert-vui=1 ! h264parse ! rtph264pay ! udpsink port=5001 host=<target_IP> auto-multicast=0", cv2.CAP_GSTREAMER, 0, float(fps), (int(width),int(height)))

If you want to use multicast (better avoided over Wi-Fi):

writer = cv2.VideoWriter("appsrc ! video/x-raw,format=BGR ! queue ! videoconvert ! x264enc insert-vui=1 ! h264parse ! rtph264pay ! udpsink port=5001 host=224.1.1.1", cv2.CAP_GSTREAMER, 0, float(fps), (int(width),int(height)))

And you may receive on any Linux host on the LAN with:

gst-launch-1.0 udpsrc multicast-group=224.1.1.1 port=5001 ! application/x-rtp,media=video,encoding-name=H264 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! xvimagesink
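The same multicast stream can also be read back into OpenCV (assuming a GStreamer-enabled build). A sketch that builds the receive pipeline string mirroring the gst-launch line — multicast_receive_pipeline is my own helper name:

```python
def multicast_receive_pipeline(group="224.1.1.1", port=5001, latency=500):
    """Build a GStreamer pipeline string mirroring the gst-launch
    receiver: RTP/H264 depay, decode, convert to BGR for appsink."""
    return (
        f"udpsrc multicast-group={group} port={port} ! "
        "application/x-rtp,media=video,encoding-name=H264 ! "
        f"queue ! rtpjitterbuffer latency={latency} ! "
        "rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! "
        "video/x-raw,format=BGR ! appsink drop=1"
    )

# Usage:
# import cv2
# cap = cv2.VideoCapture(multicast_receive_pipeline(), cv2.CAP_GSTREAMER)
```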
SeB
  • I am actually using a Windows machine, and GStreamer is not on this host. FFMPEG is available. Now I am trying to build OpenCV with GStreamer on Windows. Is it possible to take the local camera feed in H264 format and stream it to a remote system using UDP and FFMPEG? – ariharan vijayarangam Feb 21 '22 at 11:55
  • Installing GStreamer on Windows should be quite easy. Install the binaries and dev packages for your architecture and it should be ok. Then you may use the cmake GUI to configure the OpenCV build and enable GStreamer support. You may also better explain your case for better advice. ffmpeg should be able to read an H264 stream and forward it, but if you want to process that stream with OpenCV in between, this is not trivial. I have never been able to stream RTP/UDP with the OpenCV VideoWriter and FFMPEG backend. It might be possible, I haven't dug into it that much, but I think GStreamer would give better options. – SeB Feb 21 '22 at 21:16
  • I am using Windows 10 with an Anaconda development environment, so my packages are managed by Anaconda. After learning that GStreamer is not shipped with OpenCV directly, I followed the instructions given in [opencv+gstreamer install instruction](https://galaktyk.medium.com/how-to-build-opencv-with-gstreamer-b11668fa09c). When I configure the build, it always says NO for the GStreamer option. – ariharan vijayarangam Feb 28 '22 at 12:42