
I am using ffmpeg to read an rtmp stream, add a filter such as a blur box, and create a different rtmp stream.

The command looks, for example, like:

ffmpeg -i <rtmp_source_url> -filter_complex "split=2[a][b];[a]crop=w=300:h=300:x=0:y=0[c];[c]boxblur=luma_radius=10:luma_power=1[blur];[b][blur]overlay=x=0:y=0[output]" -map [output] -acodec aac -vcodec libx264 -tune zerolatency -f flv <rtmp_output_url>

where rtmp_source_url is where the camera/drone is sending its feed and rtmp_output_url is the resulting video with the blur box.

The blur box needs to move, either because the target moved or the camera did. I want to do this without interrupting the output stream.

I am using fluent-ffmpeg to create the ffmpeg process, while a different part of the program computes where the blur box should be.

Thanks for your help and time!

Carlo Capuano
    Some filters' parameters can be changed at runtime: https://ffmpeg.org/ffmpeg-filters.html#Changing-options-at-runtime-with-a-command - for example the crop filter, but not the overlay filter. – pszemus Sep 24 '21 at 14:32
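The runtime-command route mentioned in the comment above can be sketched like this. This is only a sketch, and it assumes an ffmpeg build with `--enable-libzmq`; the instance name `dynblur` is illustrative, and `zmqsend` is the helper shipped in the `tools/` directory of the ffmpeg source tree:

```shell
# Insert a zmq filter into the graph and give the crop instance a name
# (crop@dynblur) so it can be addressed at runtime.
ffmpeg -i "$RTMP_SOURCE_URL" -filter_complex \
  "split=2[a][b];[a]zmq,crop@dynblur=w=300:h=300:x=0:y=0,boxblur=luma_radius=10:luma_power=1[blur];[b][blur]overlay=x=0:y=0[output]" \
  -map "[output]" -acodec aac -vcodec libx264 -tune zerolatency -f flv "$RTMP_OUTPUT_URL" &

# From another process, move the blur box without restarting ffmpeg:
echo "crop@dynblur x 120" | ./zmqsend
echo "crop@dynblur y 80"  | ./zmqsend
```

Since only the crop position changes, the overlay filter's fixed x=0:y=0 would also have to track the box for this to look right; that limitation is exactly what the comment points out.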

1 Answer


Consider using a pipe to split up the processing. See here - https://ffmpeg.org/ffmpeg-protocols.html#pipe

The accepted syntax is:

pipe:[number]
number is the number corresponding to the file descriptor of the pipe (e.g. 0 for stdin, 1 for stdout, 2 for stderr). If number is not specified, by default the stdout file descriptor will be used for writing, stdin for reading.

For example to read from stdin with ffmpeg:

cat test.wav | ffmpeg -i pipe:0
# ...this is the same as...
cat test.wav | ffmpeg -i pipe:
For writing to stdout with ffmpeg:

ffmpeg -i test.wav -f avi pipe:1 | cat > test.avi
# ...this is the same as...
ffmpeg -i test.wav -f avi pipe: | cat > test.avi

For example, you read an rtmp stream, add a filter such as a blur box, and create a different rtmp stream. So the first step is to separate the incoming and outgoing streams -

ffmpeg -i <rtmp_source_url> -s 1920x1080 -f rawvideo pipe: | ffmpeg -s 1920x1080 -f rawvideo -y -i pipe: -filter_complex "split=2[a][b];[a]crop=w=300:h=300:x=0:y=0[c];[c]boxblur=luma_radius=10:luma_power=1[blur];[b][blur]overlay=x=0:y=0[output]" -map [output] -acodec aac -vcodec libx264 -tune zerolatency -f flv <rtmp_output_url>

I do not know what criteria you use to vary the blur box, but now you can process the incoming frames in the second ffmpeg. Also, I used 1920x1080 as the video size - replace it with the actual size.
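The size matters because rawvideo over a pipe carries no frame headers: the second ffmpeg slices the byte stream into frames purely from `-s` and the pixel format. For yuv420p (the rawvideo demuxer's default), each frame is width × height × 1.5 bytes:

```shell
# Bytes per raw yuv420p frame at 1920x1080: one full-size luma plane plus
# two quarter-size chroma planes = w*h*3/2.
W=1920
H=1080
echo $(( W * H * 3 / 2 ))   # prints 3110400
```

If the declared size does not match the actual stream, frame boundaries drift and the output turns to garbage, so keep both sides of the pipe in agreement.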

For the first iteration, do not worry about the audio; just do your blur operation. As we are feeding rawvideo, the audio is dropped in this example.
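Putting it together, the custom stage would sit between the two ffmpeg processes. A sketch, where `my_blur` is a hypothetical program you would write that reads raw frames on stdin, draws the moving blur box, and writes raw frames to stdout:

```shell
# Hypothetical three-stage pipeline: decode to raw frames, run your own
# frame processor in the middle, then re-encode to rtmp.
ffmpeg -i "$RTMP_SOURCE_URL" -s 1920x1080 -f rawvideo pipe:1 \
  | ./my_blur --width 1920 --height 1080 \
  | ffmpeg -s 1920x1080 -f rawvideo -i pipe:0 \
      -vcodec libx264 -tune zerolatency -f flv "$RTMP_OUTPUT_URL"
```

Because `my_blur` only rewrites pixel bytes and never stops reading stdin or writing stdout, the output stream keeps flowing while the box position changes between frames.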

moi
  • Hi, thanks for taking the time to show me this option in ffmpeg, but I fail to see how I would update the blur box definition. Is it possible to change one part of this without interrupting the final output stream? Or do you mean I should pipe through a different tool (or write one) as part of the pipe - basically, not use ffmpeg for the blur box? – Carlo Capuano Sep 27 '21 at 15:26
  • I do not know what the blur box depends on. It sounds like you need to do some custom processing. In that case, build a program using libav to access the ffmpeg API. This custom filter will be the second stage of your pipe. – moi Sep 27 '21 at 22:06