
Here's my setup:

- A local PC running FFmpeg with output configured to H.264 and AAC
- An S3 bucket created at AWS

What I need to do is use the local FFmpeg output to upload files directly to the S3 bucket. PS: I'm planning to use that S3 bucket with CloudFront to allow one user to stream a live event with the above setup.

I could not find a way to specify the output location as an S3 bucket [with key]. Any ideas as to how to do it? Thanks.

eric
  • If you weren't segmenting, you could probably pipe ffmpeg to curl (a sketch follows these comments). Just a thought, depending on your specific needs... – Brad Jul 21 '15 at 15:03
  • Actually, I am using this setup for uploading content from the source to a distribution point, so I think the piping approach can work. I've never used curl before, though; I need to do some research on it. Thanks for the insight. – eric Jul 22 '15 at 06:35
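A minimal sketch of that piping idea, assuming a presigned PUT URL has been generated out of band (the URL below is a placeholder). Note that S3 generally expects a Content-Length up front, so this suits a finite file better than an open-ended live stream:

```
# Encode to a single MPEG-TS stream on stdout and hand it to curl,
# which PUTs it to a presigned S3 URL (placeholder value below)
ffmpeg -i input.ts -c:v libx264 -c:a aac -f mpegts - | \
    curl --upload-file - "https://mybucket.s3.amazonaws.com/live.ts?X-Amz-Signature=..."
```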

2 Answers


You can:

  1. Mount the S3 bucket using s3fs (FUSE) and then output directly to it (a sketch follows below).

    How to Mount S3 Bucket on CentOS/RHEL and Ubuntu using S3FS

  2. Segment the media for HTTP streaming and upload each segment and playlist update using the S3 API and a script of your choice.

I'd go with 1 for a live stream.
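For option 1, a minimal sketch, assuming s3fs is installed, credentials live in ~/.passwd-s3fs, and mybucket and /mnt/s3 are placeholder names:

```
# Mount the bucket with s3fs (bucket name and mount point are placeholders)
s3fs mybucket /mnt/s3 -o passwd_file=${HOME}/.passwd-s3fs

# Encode the live input to HLS and write the playlist and segments
# straight onto the mount (input and encoder settings are examples)
ffmpeg -i input.ts -c:v libx264 -c:a aac -f hls \
    -hls_time 6 -hls_list_size 0 /mnt/s3/live/out.m3u8
```

The hls muxer writes each segment sequentially, which is the friendliest pattern for a FUSE mount, though as a comment below points out, s3fs is not fully POSIX, so test this before relying on it.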

aergistal
  • Thank you @aergistal. Yes, I'll go with option 1. – eric Jul 21 '15 at 08:39
  • Just a side question: will ffmpeg and/or s3fs take care of uploads if my internet connection suddenly goes down during the live event, or do I have to add a third step to handle it? Thanks – eric Jul 21 '15 at 08:42
  • I'm not sure what you're asking. If the connection goes down you cannot upload anything. – aergistal Jul 21 '15 at 08:49
  • Yes. Let's say my internet connection went down from 1:30 to 2:00 of the event. Once the connection is OK, I assume it'll start uploading from 2:00. What I am asking is: can we store the video between 1:30 and 2:00 on the local machine and have it upload from the 1:30 mark once the internet connection is back? – eric Jul 21 '15 at 08:57
  • The command will fail if it cannot output to the destination directory. I guess one solution is to use HLS Event playlists, which keep all media segments since the beginning of the event. Instead of writing to the mount point, you tell FFmpeg to output to a local directory and then `rsync` that directory with the S3 mount. This way, if the connection goes down, the files will still be written, and when it comes back they will be synced. The clients will be able to seek back to the point where the connection stopped. (A sketch follows these comments.) – aergistal Jul 21 '15 at 09:05
  • see: https://developer.apple.com/library/ios/technotes/tn2288/_index.html#//apple_ref/doc/uid/DTS40012238-CH1-EVENT_PLAYLIST – aergistal Jul 21 '15 at 09:06
  • great... :) Thank you. – eric Jul 21 '15 at 09:18
  • s3fs and similar object storage file systems do not provide full POSIX semantics, so they do NOT work for ffmpeg output, which requires seeking and maybe random updates. – jamshid Feb 11 '20 at 04:28
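A minimal sketch of that local-directory-plus-rsync idea from the comments (paths, encoder settings, and the 5-second interval are all placeholders):

```
# Write an HLS Event playlist locally so segments keep accumulating
# even while the network is down (paths are placeholders)
ffmpeg -i input.ts -c:v libx264 -c:a aac -f hls \
    -hls_playlist_type event /var/live/out.m3u8 &

# Keep syncing the local directory to the S3 mount; after an outage,
# rsync catches up on every segment written in the meantime
while true; do
    rsync -av /var/live/ /mnt/s3/live/
    sleep 5
done
```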

It may be late to answer this question, but I think this may be useful for others.

You can use this method to write the output files to your S3 server:

ffmpeg -re -i in.ts -f hls -method PUT http://example.com/live/out.m3u8

Read more at https://ffmpeg.org/ffmpeg-all.html#hls-2
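A slightly fuller sketch along those lines (example.com as in the answer above; the segment-naming option and encoder settings are assumptions): the hls muxer sends the playlist and every segment with HTTP PUT, and you can name the uploaded segments explicitly:

```
# HLS output where the playlist and each segment are uploaded via HTTP PUT;
# example.com stands in for an endpoint that accepts these unsigned PUTs
# (for real S3 that usually means presigned URLs or a signing proxy)
ffmpeg -re -i in.ts -c:v libx264 -c:a aac -f hls \
    -method PUT -hls_time 6 \
    -hls_segment_filename http://example.com/live/out%03d.ts \
    http://example.com/live/out.m3u8
```

Note this only helps with segmented formats; as a comment below shows, the MP4 muxer needs seekable output and fails over plain HTTP.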

Keijack
  • This doesn't seem to allow for video output files? `ffmpeg -loglevel warning -i 'http://storage.example.com/mybucket/ElephantsDream.mp4' -movflags +faststart -f mp4 -ss 0 -to 100 -vcodec copy -method POST http://storage.example.com/output/out.mp4` fails with `[mp4 @ 0x563e30efa460] muxer does not support non seekable output`, `Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument`, `Error initializing output stream 0:1`. – jamshid Jan 22 '20 at 01:58
  • S3 requires PUT, not POST. – PRMan Nov 02 '21 at 19:31