
There is a Windows service which ingests video files delivered by content providers. The service then tries to create renditions for each video file using Amazon Elastic Transcoder.

For each video file, around 15 renditions are created by creating one job and then adding 15 outputs to it.
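Roughly, the job-creation code looks like this (a minimal sketch using the AWS SDK for .NET; the pipeline ID and preset IDs below are just placeholders):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.ElasticTranscoder;
using Amazon.ElasticTranscoder.Model;

public static class RenditionJobs
{
    // Placeholder preset IDs for the ~15 renditions.
    private static readonly string[] RenditionPresetIds = { "preset-id-1", "preset-id-2" /* ... */ };

    public static async Task CreateRenditionsAsync(IAmazonElasticTranscoder client, string inputKey)
    {
        var request = new CreateJobRequest
        {
            PipelineId = "1111111111111-abcde1",        // placeholder pipeline ID
            Input = new JobInput { Key = inputKey },
            Outputs = new List<CreateJobOutput>()
        };

        // One output per rendition, each with a different preset.
        foreach (var presetId in RenditionPresetIds)
        {
            request.Outputs.Add(new CreateJobOutput
            {
                Key = $"{inputKey}/{presetId}.mp4",
                PresetId = presetId
            });
        }

        await client.CreateJobAsync(request);           // one job, ~15 outputs
    }
}
```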

This works perfectly until I run my test project a few times in a row; then I get the error message "Your application is submitting requests to Amazon Elastic Transcoder faster than the maximum request rate".

I get this error just from testing the logic of my Windows service, yet at production capacity the service will ingest around 50,000 video files every day, which means I will be creating 50,000 jobs every day as well. For such a high volume of requests, Elastic Transcoder seems too weak.

Is there a configuration setting to raise the throttling limit on Elastic Transcoder? If there isn't, what is the actual limit on creating jobs per minute?


1 Answer


Here's some documentation I found.

In short:

  • For each region, 4 pipelines per AWS account
  • Maximum number of queued jobs: 100,000 per pipeline
  • You can submit two Create Job requests per second per AWS account at a sustained rate; brief bursts of 100 requests per second are allowed.

In other words, 50,000 jobs per day averages out to well under one job per second, so you shouldn't have a problem, as long as you don't submit them at a rate higher than 2 jobs/second for any sustained period of time.
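If you want to enforce that rate on the client side, a simple throttle in front of the CreateJob call is enough. Here's a minimal sketch (the RateLimiter class below is illustrative, not part of the SDK; it just spaces calls out so a 2-per-second budget is never exceeded):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Client-side rate limiter: allows at most maxPerSecond calls per second
// by enforcing a minimum interval between call starts.
public sealed class RateLimiter
{
    private readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);
    private readonly TimeSpan _minInterval;
    private DateTime _lastCall = DateTime.MinValue;

    public RateLimiter(int maxPerSecond) =>
        _minInterval = TimeSpan.FromSeconds(1.0 / maxPerSecond);

    public async Task<T> RunAsync<T>(Func<Task<T>> action)
    {
        await _gate.WaitAsync();
        try
        {
            var wait = _lastCall + _minInterval - DateTime.UtcNow;
            if (wait > TimeSpan.Zero)
                await Task.Delay(wait);     // space calls out evenly
            _lastCall = DateTime.UtcNow;
        }
        finally
        {
            _gate.Release();
        }
        return await action();              // e.g. () => client.CreateJobAsync(request)
    }
}
```

Each job submission would then go through the limiter, e.g. `await limiter.RunAsync(() => client.CreateJobAsync(request));`. Note this only covers a single process; sharing the budget across several servers needs a central queue or some other coordination on top of it.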

  • Thank you. I will deploy this Windows service on five different servers, so I have to make sure those servers together will not submit more than two (or one) requests per second. Is creating a queue, adding the job requests to that queue, and then pulling jobs from it on a time interval the best way to make sure I will not go above the limit? Is there an easier way? – Aref Karimi Sep 10 '15 at 00:02
  • If you have five different services working independently of one another, you'd have to introduce a 2500 ms delay between each job request to make sure that you are (statistically) submitting at most 2 jobs per second, though you'd still occasionally hit 5 jobs per second. Another alternative is to let the services work in shifts, i.e. one service only submits during the first ten seconds of every minute, the next one during the following ten seconds, and so on, with a 500 ms gap between each job creation (see the sketch after these comments). – Andreas Eriksson Sep 10 '15 at 12:42
  • I've been working with ET for nearly a year now, but using the web UI. It has been a horrible experience so I finally just built up the data/code needed to use the API and instantly hit these limits. I'm nearly convinced that ET isn't for scale/enterprise use. – rainabba Feb 17 '16 at 11:05
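For reference, the "work in shifts" idea from the comment above could look roughly like this (a sketch only; serverIndex is a hypothetical per-server setting, and the ten-second windows and 500 ms spacing are just the numbers from the comment):

```csharp
using System;
using System.Threading.Tasks;

// Each of five servers only submits jobs during its own ten-second window
// of every minute, spacing submissions 500 ms apart.
public static class ShiftScheduler
{
    public static async Task WaitForMyShiftAsync(int serverIndex)
    {
        // Window for server N: seconds [N * 10, N * 10 + 10) of each minute.
        int windowStart = serverIndex * 10;
        while (true)
        {
            int second = DateTime.UtcNow.Second;
            if (second >= windowStart && second < windowStart + 10)
                return;                                     // inside our shift
            await Task.Delay(TimeSpan.FromMilliseconds(250)); // poll until the shift starts
        }
    }
}

// Usage in the submission loop (hypothetical):
//   await ShiftScheduler.WaitForMyShiftAsync(serverIndex);
//   await client.CreateJobAsync(request);
//   await Task.Delay(500);   // 500 ms between job creations
```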