
I am using DynamoDB and I am getting read and write ProvisionedThroughputExceededException errors.

How can I solve this? Can using DAX ensure that I do not get this error?

Abdeali Chandanwala

1 Answer


DAX is a write-through cache, not a write-back cache. This means that if a request is a cache miss, DAX makes the call to DynamoDB on your behalf to fetch the data. In this model, you are still responsible for managing DynamoDB table capacity. You may want to consider using auto scaling together with DAX, but whether that helps depends on your access patterns.
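Since the throttling is on both reads and writes, scaling the table's provisioned capacity is the part DAX cannot replace. Below is a minimal sketch of registering a target-tracking read scaling policy with boto3's Application Auto Scaling API; the table name MyTable, the capacity limits, and the 70% utilization target are placeholder assumptions, and writes would use the analogous dynamodb:table:WriteCapacityUnits dimension.

```python
import boto3

# DynamoDB auto scaling is configured through the Application Auto Scaling service.
autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target (placeholder limits).
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/MyTable",                       # placeholder table name
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Scale so that consumed read capacity stays around 70% of provisioned capacity.
autoscaling.put_scaling_policy(
    PolicyName="MyTableReadScaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/MyTable",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```

For writes, repeat the two calls with the dynamodb:table:WriteCapacityUnits dimension and the DynamoDBWriteCapacityUtilization metric. Keep in mind that auto scaling reacts over minutes, so sudden spikes can still throttle briefly.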

  • Thanks for confirming. I did consider the auto scaling feature and it is currently enabled, but I still get those exceptions. However, I believe DAX will completely solve the issue; the docs you linked say, and I quote: "DynamoDB Auto Scaling is designed to accommodate request rates that vary in a somewhat predictable, generally periodic fashion. If you need to accommodate unpredictable bursts of read activity, you should use Auto Scaling in combination with DAX (read Amazon DynamoDB Accelerator (DAX) – In-Memory Caching for Read-Intensive Workloads)." – Abdeali Chandanwala Jan 10 '18 at 05:40
  • That is true for cache hits. But you still need to plan for cache misses. – Abdelrahman Elhaddad Jan 10 '18 at 05:45
  • Sorry, what is a cache miss? I do not understand it; please explain or refer me to a link. – Abdeali Chandanwala Jan 10 '18 at 07:02
  • A cache miss is when you make a request and the item is not in the cache. See the documentation for more info: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.consistency.html#DAX.consistency.item-cache – Abdelrahman Elhaddad Jan 10 '18 at 07:32
  • Thanks for the doc link. As per the docs, "If a write to DynamoDB fails for any reason, including throttling, then the item will not be cached in DAX and the exception for the failure will be returned to the requester. This ensures that data is not written to the DAX cache unless it is first written successfully to DynamoDB." This means spikes won't be handled as we require, which defeats the whole purpose of putting DAX in front; it will just improve response time on read operations. – Abdeali Chandanwala Jan 11 '18 at 06:35
  • DAX's purpose is not to handle write spikes, not with the current write-through model. Spikes in reads, however, can be handled if they are cache hits, i.e. when the item is already in the cache (not fetched from DynamoDB) and its TTL hasn't expired yet. It really depends on your workload and on the access patterns your application requires to determine whether DAX is a good fit for your use case. – Abdelrahman Elhaddad Jan 11 '18 at 20:33