I see a ton of info about piping a raspivid stream directly to FFMPEG for encoding, muxing, and restreaming but these use cases are mostly from bash; similar to:
raspivid -n -w 480 -h 320 -b 300000 -fps 15 -t 0 -o - | ffmpeg -i - -f mpegts udp://192.168.1.2:8090ffmpeg
I'm hoping to utilize the functionality of the Picamera library so I can do concurrent processing with OpenCV and similar while still streaming with FFMPEG. But I can't figure out how to properly open FFMPEG as subprocess and pipe video data to it. I have seen plenty of attempts, unanswered posts, and people claiming to have done it, but none of it seems to work on my Pi.
Should I create a video buffer with Picamera and pipe that raw video to FFMPEG? Can I use camera.capture_continuous() and pass FFMPEG the bgr24 images I'm using for my OpenCV calculation?
I've tried all sorts of variations and I'm not sure if I'm just misunderstanding how to use the subprocess module, FFMPEG, or I'm simply missing a few settings. I understand the raw stream won't have any metadata, but I'm not completely sure what settings I need to give FFMPEG for it to understand what I'm giving it.
I have a Wowza server I'll eventually be streaming to, but I'm currently testing by streaming to a VLC server on my laptop. I've currently tried this:
import subprocess as sp
import picamera
import picamera.array
import numpy as np
npimage = np.empty(
(480, 640, 3),
dtype=np.uint8)
with picamera.PiCamera() as camera:
camera.resolution = (640, 480)
camera.framerate = 24
camera.start_recording('/dev/null', format='h264')
command = [
'ffmpeg',
'-y',
'-f', 'rawvideo',
'-video_size', '640x480',
'-pix_fmt', 'bgr24',
'-framerate', '24',
'-an',
'-i', '-',
'-f', 'mpegts', 'udp://192.168.1.54:1234']
pipe = sp.Popen(command, stdin=sp.PIPE,
stdout=sp.PIPE, stderr=sp.PIPE, bufsize=10**8)
if pipe.returncode != 0:
output, error = pipe.communicate()
print('Pipe failed: %d %s %s' % (pipe.returncode, output, error))
raise sp.CalledProcessError(pipe.returncode, command)
while True:
camera.wait_recording(0)
for i, image in enumerate(
camera.capture_continuous(
npimage,
format='bgr24',
use_video_port=True)):
pipe.stdout.write(npimage.tostring())
camera.stop_recording()
I've also tried writing the stream to a file-like object that simply creates the FFMPEG subprocess and writes to the stdin of it (camera.start_recording() can be given an object like this when you initialize the picam):
class PipeClass():
"""Start pipes and load ffmpeg."""
def __init__(self):
"""Create FFMPEG subprocess."""
self.size = 0
command = [
'ffmpeg',
'-f', 'rawvideo',
'-s', '640x480',
'-r', '24',
'-i', '-',
'-an',
'-f', 'mpegts', 'udp://192.168.1.54:1234']
self.pipe = sp.Popen(command, stdin=sp.PIPE,
stdout=sp.PIPE, stderr=sp.PIPE)
if self.pipe.returncode != 0:
raise sp.CalledProcessError(self.pipe.returncode, command)
def write(self, s):
"""Write to the pipe."""
self.pipe.stdin.write(s)
def flush(self):
"""Flush pipe."""
print("Flushed")
usage:
(...)
with picamera.PiCamera() as camera:
p = PipeClass()
camera.start_recording(p, format='h264')
(...)
Any assistance with this would be amazing!