
I have a USB webcam that streams MJPEG video. I'm trying to read the stream into OpenCV on a laptop, do some processing on the frames, and send the stream out over UDP to a Raspberry Pi.

Using the GStreamer command-line interface, I can send the webcam feed directly from the laptop to the Raspberry Pi. I run this command on the laptop:

gst-launch-1.0 v4l2src device=/dev/video1 ! image/jpeg,width=640,height=480,framerate=30/1 ! jpegparse ! rtpjpegpay ! udpsink host=10.1.10.77 port=8090

And this on the Pi:

gst-launch-1.0 udpsrc address=10.1.10.77 port=8090 ! application/x-rtp, encoding-name=JPEG,payload=96 ! rtpjpegdepay ! jpegdec ! videoconvert ! fbdevsink device=/dev/fb0
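(For testing without the Pi, I believe the same stream can be viewed locally by pointing udpsink at 127.0.0.1 and swapping fbdevsink for autovideosink, like so:)

gst-launch-1.0 udpsrc address=127.0.0.1 port=8090 ! application/x-rtp,encoding-name=JPEG,payload=96 ! rtpjpegdepay ! jpegdec ! videoconvert ! autovideosink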

The video shows up in the middle of the Pi's screen and all is well. But when I try to bring OpenCV into this process, I get confused. The following code sends the video successfully...

import cv2

# Object that pulls frames from webcam
cap_fetch = cv2.VideoCapture(1)
cap_fetch.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap_fetch.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

# Object that sends frames out over a GStreamer pipeline
cap_send = cv2.VideoWriter('appsrc ! videoconvert ! video/x-raw,format=YUY2 ! jpegenc ! rtpjpegpay ! udpsink host=10.1.10.77 port=9000', 0, 0, 30, (640,480))

if not cap_fetch.isOpened() or not cap_send.isOpened():
    print('VideoCapture or VideoWriter not opened')
    exit(0)

while True:
    ret, frame = cap_fetch.read()

    if not ret:
        print('empty frame')
        break

    # do stuff to frame

    cap_send.write(frame)

    cv2.imshow('send', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap_fetch.release()
cap_send.release()
cv2.destroyAllWindows()
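For what it's worth, I think the two bare 0s in the VideoWriter call are the apiPreference and fourcc arguments, so the same writer can also be spelled out with the backend named explicitly (this is just my reading of the API, and it assumes my OpenCV build has GStreamer support):

# Same writer as above, but naming the GStreamer backend instead of passing
# 0 positionally; fourcc stays 0 because the pipeline string does the encoding
cap_send = cv2.VideoWriter(
    'appsrc ! videoconvert ! video/x-raw,format=YUY2 ! jpegenc '
    '! rtpjpegpay ! udpsink host=10.1.10.77 port=9000',
    cv2.CAP_GSTREAMER, 0, 30, (640, 480))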

... But it doesn't work if I select any format other than YUY2 in that caps filter, nor if I try something like this:

cap_send = cv2.VideoWriter('appsrc ! image/jpeg ! jpegenc ! rtpjpegpay ! udpsink host=10.1.10.77 port=9000', 0, cv2.VideoWriter_fourcc('M','J','P','G'), 30, (640,480))

Any idea why this might be? I'm very new to GStreamer, but I think the working pipeline from OpenCV to the Pi is converting the raw BGR image matrices from OpenCV to YUY2 video, then converting that to MJPEG video, then sending it. That doesn't seem efficient, or am I missing something? Is there a cleaner way to do this?
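For reference, here's how I'm reading the working writer pipeline, stage by stage (my own annotation, so I may have some of this wrong):

# My understanding of what the working VideoWriter pipeline does per frame
pipeline = (
    'appsrc '                     # raw BGR frames pushed in by cap_send.write()
    '! videoconvert '             # convert BGR to YUY2
    '! video/x-raw,format=YUY2 '  # caps filter forcing the YUY2 format
    '! jpegenc '                  # re-encode each frame as JPEG
    '! rtpjpegpay '               # wrap the JPEG data in RTP packets
    '! udpsink host=10.1.10.77 port=9000'  # send the packets to the Pi
)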

Anna Svagzdys