
I have seen a lot of people mention that a way to deal with limited WCUs in DynamoDB is to send your requests to a queue and let the queue insert the data into DynamoDB at a rate that will not exceed your allocated WCUs.

Does anyone have any examples of this? I am currently working with AWS Lambda in Python and Node.js.

What I understand:

If a Lambda function wants to put 2,000 items into DynamoDB and we only have 100 WCUs, then instead of having the Lambda retry or wait between requests, we can send the items to an SQS queue, which then writes them to DynamoDB at a rate of 100 WCUs per second.

Is this the right workflow?
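
For illustration, here is a minimal sketch of the consumer side of that workflow in Python with boto3. The queue URL and table name are placeholders, and it assumes each item is under 1 KB, i.e. costs 1 WCU, so 100 WCUs is roughly 100 items per second:

```python
# Sketch only: queue URL, table name, and the 1-item-per-WCU assumption
# (items under 1 KB) are placeholders, not from the question.
import json
import time

import boto3

sqs = boto3.client("sqs")
table = boto3.resource("dynamodb").Table("my-table")  # hypothetical table
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/items"  # hypothetical
WCU_BUDGET = 100  # provisioned WCUs; ~100 items/sec at <1 KB per item

def drain_queue():
    """Read batches from SQS and write them to DynamoDB at <= WCU_BUDGET items/sec."""
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=5
        )
        messages = resp.get("Messages", [])
        if not messages:
            break  # queue is empty
        start = time.monotonic()
        with table.batch_writer() as batch:  # batches into BatchWriteItem calls
            for msg in messages:
                batch.put_item(Item=json.loads(msg["Body"]))
        sqs.delete_message_batch(
            QueueUrl=QUEUE_URL,
            Entries=[
                {"Id": m["MessageId"], "ReceiptHandle": m["ReceiptHandle"]}
                for m in messages
            ],
        )
        # Throttle: these N writes must take at least N / WCU_BUDGET seconds.
        min_duration = len(messages) / WCU_BUDGET
        elapsed = time.monotonic() - start
        if elapsed < min_duration:
            time.sleep(min_duration - elapsed)
```

In practice this loop could run in a Lambda triggered by the queue or in a small worker process; the point is that the consumer, not the producer, controls the write rate.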

WeCanBeFriends

1 Answer


You might use Redis, especially if you are inserting into the same partition over and over again (a so-called 'hot partition'), because Redis provides random selection from a set. You simply queue (insert) the items into Redis, then have another process constantly read from it and insert into the database. Redis has thousands of examples for Node.js and Python and is pretty handy to use :)

There is an AWS blog entry on caching; have a look at it. It also recommends this pattern:

Although the number of use cases for caching is growing, especially for Redis, the predominant workload is to support read-heavy workloads for repeat reads. Additionally, developers also use Redis to better absorb spikes in writes. One of the more popular patterns is to write directly to Redis and then asynchronously invoke a separate workflow to de-stage the data to a separate data store (for example, DynamoDB).
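
A minimal sketch of that write-and-de-stage pattern, assuming redis-py and boto3; the Redis key, table name, and the 100-writes-per-second limit are illustrative placeholders:

```python
# Sketch of the write-to-Redis / de-stage-to-DynamoDB pattern.
# Key name, table name, and the rate limit are assumptions.
import json
import time

import boto3
import redis

r = redis.Redis(host="localhost", port=6379)
table = boto3.resource("dynamodb").Table("my-table")  # hypothetical table
QUEUE_KEY = "pending-writes"  # hypothetical Redis set key
WRITES_PER_SEC = 100          # keep under the table's provisioned WCUs

def enqueue(item: dict) -> None:
    # Producer (e.g. your Lambda): absorb the write spike into a Redis set.
    r.sadd(QUEUE_KEY, json.dumps(item))

def drain() -> None:
    # Separate de-staging process: SPOP pops a *random* member, which
    # spreads writes across partitions instead of replaying them in
    # insertion order against one hot partition.
    while True:
        raw = r.spop(QUEUE_KEY)
        if raw is None:
            break  # set is empty
        table.put_item(Item=json.loads(raw))
        time.sleep(1.0 / WRITES_PER_SEC)  # crude throttle
```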

There are probably more ways of achieving what you want, maybe even simpler ones, but Redis goes with literally everything; maybe you are even using it already ;)

Can Sahin