It seems like the best option is to read the images using OpenCV, draw the data over each image, and write the result to FFmpeg as a raw video frame.
FFmpeg supports an overlay filter that you may use for placing one image on top of another, but it is difficult to drive the overlay filter through pipes, because it requires two input streams.
Using two input streams with pipes requires "named pipes", and that solution becomes complicated quickly.
I suggest the following:
- Read the first image to get the video size (resolution).
- Configure FFmpeg for rawvideo input format.
When using raw video, we also need to configure the video size and the pixel format.
The video size is cols x rows.
The pixel format is bgr24 (matching OpenCV's default pixel format).
- Iterate over the list of JPEG files.
- Read each image file using OpenCV.
The image is a NumPy array with shape (rows, cols, 3).
- Draw the data on the image.
The code sample draws the name of the file.
(Drawing nice telemetry data exceeds the scope of this answer.)
- Write the image (with the data) to the stdin pipe of the FFmpeg sub-process.
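Since rawvideo has no per-frame headers, FFmpeg splits the incoming byte stream purely by size: each bgr24 frame must be exactly rows x cols x 3 bytes. A quick sanity check of that arithmetic (plain Python, using the 192x108 resolution of the synthetic test images below):

```python
# bgr24 packs 3 bytes (blue, green, red) per pixel, with no padding or header.
rows, cols = 108, 192            # resolution of the synthetic test images
bytes_per_frame = rows * cols * 3

print(bytes_per_frame)           # every stdin write must supply exactly this many bytes
```

If a write supplies more or fewer bytes than this, FFmpeg silently misaligns all subsequent frames.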
Here is a complete code sample:
import ffmpeg
import cv2
#import pandas as pd

# Build 10 synthetic images for testing: im001.jpg, im002.jpg, im003.jpg...
################################################################################
ffmpeg.input('testsrc=duration=10:size=192x108:rate=1', f='lavfi').output('im%03d.jpg').run()
################################################################################

# List of JPEG files
#jpeg_files = df.zed2.to_list()

# List of 10 images for testing
jpeg_files = ['im001.jpg', 'im002.jpg', 'im003.jpg', 'im004.jpg', 'im005.jpg',
              'im006.jpg', 'im007.jpg', 'im008.jpg', 'im009.jpg', 'im010.jpg']

# Read the first image - just for getting the resolution.
img = cv2.imread(jpeg_files[0])
rows, cols = img.shape[0], img.shape[1]

# Concatenate images to create a video - set the input format to raw video.
# Set the input pixel format to bgr24, and the video size to cols x rows.
process = ffmpeg.input('pipe:', framerate='20', f='rawvideo', pixel_format='bgr24', s=f'{cols}x{rows}')\
                .output('/tmp/video.mp4', vcodec='libx264', crf='17', pix_fmt='yuv420p')\
                .overwrite_output().run_async(pipe_stdin=True)

for in_file in jpeg_files:
    img = cv2.imread(in_file)  # Read image using OpenCV

    # Draw something (for testing)
    img[(rows-30)//2:(rows+30)//2, 10:-10, :] = 60
    cv2.putText(img, str(in_file), (cols//2-80, rows//2+10), cv2.FONT_HERSHEY_PLAIN, 2, (255, 30, 30), 2)

    # Display the image (for testing)
    cv2.imshow('img', img)
    cv2.waitKey(100)

    # Write the raw video frame to the stdin pipe of the FFmpeg sub-process.
    process.stdin.write(img.tobytes())

process.stdin.close()
process.wait()

cv2.destroyAllWindows()
- The code sample starts by creating 10 JPEG images - for testing the code.
- The sample displays the images for testing; remove the cv2.imshow and cv2.waitKey calls when running in batch.
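One pitfall worth guarding against: rawvideo assumes every frame has the same size and dtype, so a JPEG with a different resolution would silently corrupt the output video. A minimal validating sketch (to_raw_frame is a hypothetical helper name, not part of ffmpeg-python or OpenCV):

```python
import numpy as np

def to_raw_frame(img, rows, cols):
    # Hypothetical helper: validate the image before piping it as bgr24.
    if img.shape != (rows, cols, 3):
        raise ValueError(f'frame shape {img.shape} does not match ({rows}, {cols}, 3)')
    if img.dtype != np.uint8:
        raise ValueError('bgr24 expects uint8 pixels')
    return img.tobytes()  # C-contiguous copy, exactly rows*cols*3 bytes

# Example: a black 108x192 BGR frame serializes to the expected byte count.
frame = np.zeros((108, 192, 3), np.uint8)
raw = to_raw_frame(frame, 108, 192)
print(len(raw))
```

Calling process.stdin.write(to_raw_frame(img, rows, cols)) inside the loop would fail fast with a clear error instead of producing a scrambled video.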
Sample video frame:
