
I am completely stuck. I need to be walked through the process for using the AWS SDK for Ruby to transcode video. I'm not sure where to even begin. I'm uploading files using CarrierWave-Direct to an S3 bucket. The records are uploaded and recalled fine. The uploading is done in the background using Sidekiq. Where do I go from here? How do I kick off the transcoding job? How do I maintain a record of the files for later streaming in my database? Can I transcode along with my uploading Sidekiq process? I'm ripping my hair out trying to find a solution for this.

asked by T. Cole, edited by John Rotenstein

1 Answer


The transcoding actions are defined in the AWS::ElasticTranscoder class.

The transcoding process can be initiated once your upload to S3 has completed. For a simple transcoder to work, you need:

  1. A pipeline on which the transcoding will be carried out.
  2. Presets, which determine the output video properties (you can either create a preset or use the system presets provided by AWS).
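To see which preset ids are available before creating a job, you can call list_presets on the client. A minimal sketch, assuming `transcoder` is an AWS::ElasticTranscoder::Client instance (SDK v1) and `find_preset_id` is a hypothetical helper over the symbol-keyed response data:

```ruby
# Sketch: fetch the presets visible to your account (system presets
# plus any custom ones). Each entry includes :id and :name.
def list_preset_choices(transcoder)
  transcoder.list_presets[:presets]
end

# Pure helper (hypothetical): look up a preset id by its display name.
def find_preset_id(presets, name)
  match = presets.find { |p| p[:name] == name }
  match && match[:id]
end
```

You would pass the returned id as :preset_id when creating a job later on.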

Now instantiate the AWS::ElasticTranscoder::Client class:

transcoder = AWS::ElasticTranscoder::Client.new(
  :access_key_id     => AwsKeyId,
  :secret_access_key => AwsAccessKey,
  :region            => TranscoderRegion
)

Create pipeline:

# Note: specify either :output_bucket or :content_config/:thumbnail_config,
# not both.
transcoder.create_pipeline(
  :name         => "test_pipeline",
  :input_bucket => "bucket_name",
  :role         => "iam_role_arn",   # ARN of e.g. the Elastic-Transcoder-Default-Role
  :content_config => {
    :bucket        => "bucket_name",
    :storage_class => "Standard"
  },
  :thumbnail_config => {
    :bucket        => "bucket_name",
    :storage_class => "Standard"
  }
)

This will return a pipelineId that can be used for creating jobs.
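A small sketch of capturing that id from the response. `extract_pipeline_id` is a hypothetical helper over the symbol-keyed response data of the v1 SDK; since pipeline creation is a one-time setup step, you would persist the id (in a settings table or config) rather than recreate the pipeline per upload:

```ruby
# Pure helper (hypothetical): pull the pipeline id out of the
# create_pipeline response data.
def extract_pipeline_id(response_data)
  response_data[:pipeline][:id]
end

# Sketch: create the pipeline once and return its id for later jobs.
def create_pipeline_once(transcoder, options)
  resp = transcoder.create_pipeline(options)
  extract_pipeline_id(resp)
end
```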

Now you can create a job as follows:

transcoder.create_job(
  :pipeline_id => pipeline_id,
  :input => {
    :key          => "video_path",
    :frame_rate   => "auto",
    :resolution   => "auto",
    :aspect_ratio => "auto",
    :container    => "auto"
  },
  :outputs => [{
    :key               => "output_file_location",
    :preset_id         => "1351620000001-000010",   # a system preset id
    :thumbnail_pattern => "thumbnails/thumb_{count}"
  }]
)

This will start the transcoding process. You can check the status of the job using the read_job method. Once the status changes from 'Progressing' to 'Complete', the output files will be available in the specified output bucket.
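A polling sketch, assuming `transcoder` is the client created earlier and `job_id` came from the create_job response (resp[:job][:id]); the method names here are hypothetical, and in practice SNS notifications (below in the answer's own suggestion) are preferable to polling:

```ruby
# Terminal states for an Elastic Transcoder job.
def job_finished?(status)
  %w[Complete Canceled Error].include?(status)
end

# Sketch: poll read_job until the job reaches a terminal state.
# Suitable for a Sidekiq worker; not for a web request cycle.
def wait_for_job(transcoder, job_id, interval = 30)
  loop do
    status = transcoder.read_job(:id => job_id)[:job][:status]
    return status if job_finished?(status)
    sleep interval
  end
end
```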

Please go through these links:

http://docs.aws.amazon.com/AWSRubySDK/latest/AWS/ElasticTranscoder/Client.html
http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/system-presets.html

AWS::SNS provides a better way of getting notified about the transcoding job status. You can subscribe to an SNS topic to receive job status updates. The notification for a completed job contains the necessary details about the output files, so you can store them in your database for future streaming.
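A sketch of handling such a notification, assuming the field names of the Elastic Transcoder job-state JSON carried in the SNS message body ("state", "outputs", each output's "key"); `output_keys` is a hypothetical helper your SNS endpoint (or a Sidekiq job it enqueues) could use to pull out the output S3 keys and save them on your video record:

```ruby
require "json"

# Sketch: extract the transcoded files' S3 keys from the JSON body of
# an Elastic Transcoder "job complete" notification. Returns [] for
# non-completed states so callers can ignore progress notifications.
def output_keys(sns_message_json)
  payload = JSON.parse(sns_message_json)
  return [] unless payload["state"] == "COMPLETED"
  Array(payload["outputs"]).map { |o| o["key"] }
end
```

With the key in hand, the streaming URL is just the output bucket plus that key, which is what you would persist alongside your existing upload record.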

For more info, refer to this link

answered by Vijith mv
  • The code samples provided may not follow the exact syntax. Please refer to the documentation :) – Vijith mv Apr 13 '16 at 06:19
  • In relation to CarrierWave, how do I do this? I would put this code in my Upload model folder after the "upload to s3" code, no? How would I set a reference to the new url that would be needed to retrieve the video from s3? – T. Cole Apr 13 '16 at 12:11
  • I'm not at my machine right now, but I will update my question later with some code. – T. Cole Apr 13 '16 at 12:13
  • The transcoding actions can be written in a model method and invoked after your input video's details have been saved to the database. Please note that pipeline creation is a one-time process and should be done before any other actions are carried out. – Vijith mv Apr 13 '16 at 13:13
  • The input bucket specified at the time of pipeline creation must be the same bucket to which you are uploading the video. Also, I would suggest using the s3_direct_upload gem instead of CarrierWave for uploading larger files: https://github.com/waynehoover/s3_direct_upload – Vijith mv Apr 13 '16 at 13:13
  • The actual uploading process works fine. I'm getting the files to S3 and I am able to recall the link for them. However, where I'm stuck is using the Elastic Transcoder after the file is uploaded. Once the file is up, how do I relate the video to the info in the database? Does the URL from before (the input bucket) persist somehow? I understand how to kick off the transcoder (after a lot of reading). I need to know how to relate the video file from the OUTPUT BUCKET to a URL and my DB info so I can stream them later. – T. Cole Apr 13 '16 at 13:30
  • You can subscribe to SNS notifications for that. Kindly go through that also. You have to specify the notification details while creating the pipeline itself. – Vijith mv Apr 14 '16 at 04:29
  • How would the SNS system allow me to connect and maintain records in my database? I thought that the SNS system was for notifying success after transcoding. – T. Cole Apr 14 '16 at 04:37
  • SNS will notify the output video filepath and other details upon completing the process. You have to utilize it wisely :) – Vijith mv Apr 14 '16 at 05:02