I have a Python program running on an embedded Linux board (like a Raspberry Pi), and another Python program running on my PC. I am trying to use the imageio library to capture frames from the webcam (e.g., Pi camera) on the Linux board, send them to my PC over a TCP connection, and read the frames back as numpy arrays on the PC, all with the lowest possible latency.
(Note: my previous approach used opencv to compress frames into low-quality JPEGs on the board before sending them over TCP, and had a latency of ~0.25-0.5 s on my local network; I think that is suboptimal, and I do not want to use opencv.)
Thus far I have been doing something like the following, but I think my approach is dumb (and it has a latency > 1s):
# Linux board:
import pickle as pkl
import struct

import imageio.v3 as iio

# 'connection' is a writable file object wrapping the TCP socket,
# i.e. connection = sock.makefile('wb')
for idx, frame in enumerate(iio.imiter("<video0>", size=(resolution[0], resolution[1]), fps=fps)):
    data = pkl.dumps(frame, 0)  # protocol 0 is text-based and inflates the payload
    framelen = len(data)
    connection.write(struct.pack('<L', framelen))  # 4-byte little-endian length prefix
    connection.flush()
    connection.write(data)
# PC:
import pickle as pkl
import struct

# 'connection' is a readable file object wrapping the TCP socket,
# i.e. connection = sock.makefile('rb')
while True:
    header = connection.read(struct.calcsize('<L'))
    if not header:  # peer closed the connection
        break
    framelen = struct.unpack('<L', header)[0]
    data = connection.read(framelen)  # raw undecoded pickle bytestring
    frame = pkl.loads(data, fix_imports=True, encoding="bytes")
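Since the receiver already knows the resolution and dtype, one thing I have considered is skipping pickle entirely and sending the raw frame buffer as a fixed number of bytes, reconstructing it with np.frombuffer on the PC. A sketch (using a socketpair to stand in for my real TCP connection; the sizes are made up):

```python
import socket

import numpy as np

HEIGHT, WIDTH, CHANNELS = 120, 160, 3
FRAME_BYTES = HEIGHT * WIDTH * CHANNELS  # fixed frame size: no length prefix needed

# socketpair stands in for the real TCP connection between board and PC.
board_sock, pc_sock = socket.socketpair()

# --- board side: send the raw buffer of one frame ---
frame = np.random.randint(0, 255, (HEIGHT, WIDTH, CHANNELS), dtype=np.uint8)
board_sock.sendall(frame.tobytes())

# --- PC side: read exactly one frame's worth of bytes ---
buf = b""
while len(buf) < FRAME_BYTES:
    chunk = pc_sock.recv(FRAME_BYTES - len(buf))
    if not chunk:
        raise ConnectionError("socket closed mid-frame")
    buf += chunk

received = np.frombuffer(buf, dtype=np.uint8).reshape(HEIGHT, WIDTH, CHANNELS)
assert np.array_equal(received, frame)
```

This avoids the pickle overhead entirely, but of course it sends uncompressed frames, so it trades CPU time for bandwidth.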
Is there a more clever/efficient way of doing this, e.g., using imageio to encode/compress the video at the desired resolution and framerate on the board, stream it through my Python socket, and decompress the frames on my PC, please?
Thanks.
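Also, independent of the codec question, I wonder whether part of my latency comes from Nagle's algorithm coalescing my small writes (the 4-byte length prefix and the frame go out as separate writes). Disabling it on the sending socket is one line; sketch below, where sock is a hypothetical board-side socket:

```python
import socket

# Hypothetical sending socket on the board.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Disable Nagle's algorithm so small writes (length prefix + frame data)
# are sent immediately instead of being buffered and coalesced.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# The option can be read back to confirm it took effect.
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
sock.close()
```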