
I have an application using DynamoDB, and I noticed they just implemented auto scaling, which is awesome. I love the concept, and the timing is pretty perfect for my app. However, I am still getting some issues that I wonder whether I can tweak settings to remove.

My application gets definite spikes in usage, so I think this is an ideal feature to use. However, with auto scaling on I am still getting some throttling. Here are my read graphs for the last 12 hours: (screenshot of the read capacity graph, showing consumed reads spiking past the provisioned capacity and being throttled until the scale-up happens)

As you can see, when usage spikes the provisioned capacity is still set low, so requests are throttled for a minute or two until the scaling update kicks in, and then it works. That's OK, I guess, and better than not scaling at all, but I would like it not to throttle at all...

Is there any way to tell DynamoDB to never throttle unless it goes over 100 (or 200 or whatever I set as the top limit)? In other words, if it gets a surge, just turn up the throughput for 15 minutes or so until the surge is over?

sfaust
  • Sure -- set your minimum capacity higher. :) Speaking of which, what values are you using for that and for your target threshold? Seriously, of course, you can't use capacity that doesn't actually exist. Also, it looks like you are doing a lot of scanning, which is relatively costly in terms of capacity consumption. – Michael - sqlbot Jun 21 '17 at 02:42
  • ha, well true I suppose but the point was to avoid extra costs :). I was set to 5 minimum, 200 max, and 70% utilization target. I bumped to 50% utilization to see what that does... I will look at my code and see what excessive scanning I may be doing as well, thanks for the tip... – sfaust Jun 21 '17 at 14:51

1 Answer


Auto scaling uses CloudWatch alarms. You can see them by going to the CloudWatch dashboard and looking for the alarms that include your table name and "DO NOT EDIT OR DELETE" in the description.
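For example, here is a minimal sketch of listing those alarms with the AWS CLI. The table name "MyTable" is a placeholder, and the "TargetTracking-table/..." name prefix is the convention these auto-generated alarms appear to use, so treat it as an assumption:

```bash
# List the CloudWatch alarms that DynamoDB auto scaling created for a table.
# "MyTable" is a placeholder table name; the alarm-name prefix below is an
# assumption based on the naming convention these generated alarms seem to use.
aws cloudwatch describe-alarms \
    --alarm-name-prefix "TargetTracking-table/MyTable"
```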

Why am I telling you this?

Well, CloudWatch has a minimum period granularity, which is currently 1 minute. That means it will wait at least 1 minute before firing any event to its listeners, so it will take at least one minute after the load starts before the capacity is increased. In practice it will take even longer, since increasing the capacity also takes time. Bottom line: if you have a very large spike, some requests may be throttled, because auto scaling has not yet taken effect and burst capacity can be exhausted.

The simple, but costly, solution is to increase the initial (minimum) provisioned capacity.
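Since the table is managed by auto scaling, the clean way to do that is to raise the scalable target's minimum capacity rather than editing the table's throughput directly. A minimal sketch with the AWS CLI, assuming the scalable target already exists because auto scaling was enabled from the console; "MyTable", 50, and 200 are placeholder values for illustration:

```bash
# Raise the floor that auto scaling is allowed to scale down to, so the table
# always has at least 50 read capacity units available for sudden spikes.
# "MyTable", 50 and 200 are illustrative placeholders.
aws application-autoscaling register-scalable-target \
    --service-namespace dynamodb \
    --resource-id "table/MyTable" \
    --scalable-dimension "dynamodb:table:ReadCapacityUnits" \
    --min-capacity 50 \
    --max-capacity 200
```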

If you know about an upcoming spike in advance (e.g. you have some job running periodically, or customers peak at certain times), you can use the Application Auto Scaling API to modify the auto scaling configuration programmatically.
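For example, a scheduled script (cron, or whatever scheduler you already use) could bump the minimum capacity shortly before the expected surge and lower it again afterwards. A sketch with the AWS CLI, again using the placeholder table "MyTable" and illustrative capacity numbers:

```bash
# Before the expected surge: pre-warm by raising the minimum read capacity.
aws application-autoscaling register-scalable-target \
    --service-namespace dynamodb \
    --resource-id "table/MyTable" \
    --scalable-dimension "dynamodb:table:ReadCapacityUnits" \
    --min-capacity 150 \
    --max-capacity 200

# After the surge has passed: drop the minimum back down so auto scaling
# can return the table to its normal, cheaper capacity.
aws application-autoscaling register-scalable-target \
    --service-namespace dynamodb \
    --resource-id "table/MyTable" \
    --scalable-dimension "dynamodb:table:ReadCapacityUnits" \
    --min-capacity 5 \
    --max-capacity 200
```

The same calls are available through the AWS SDKs if you prefer to do this from your application code rather than the CLI.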

Tarlog
  • Hm. The API may be ideal, since I had a previous script that changed the throughput on specific timers; it's possible I could just change auto scaling on the same timeframes. However, the examples there are all Java. Do you know if it's Java-only right now, or can you do this with PHP? I don't see any of the autoscaling classes in the PHP API... – sfaust Jul 17 '17 at 19:25
  • I'm pretty sure that there is no direct PHP support. You can use AWS CLI. I'm not really into PHP, but can you invoke CLI from PHP? I've used this approach for my scripts. – Tarlog Jul 17 '17 at 23:30
  • Hmm, not sure. I also have a web host so I don't know if I can install CLI on my host... Thanks for the idea, though, checking into it... – sfaust Jul 18 '17 at 15:44
  • If you are running your hosts in AWS (and I bet you are) and you are using the Amazon Linux AMI [https://aws.amazon.com/amazon-linux-ami/], you have the AWS CLI preinstalled. – Tarlog Jul 18 '17 at 17:30
  • No, it's another hosting company (InMotion hosting...) – sfaust Jul 18 '17 at 17:33