I'm sorry that I can't post my code snippets. I have a Go script that scans through a DynamoDB table and modifies its items. Everything runs sequentially (no goroutines are involved). However, when I ran it against a large table, I got a ProvisionedThroughputExceededException. I'm running the script locally.
I'm using aws-sdk-go-v2, whose standard retryer is supposed to apply exponential backoff (capped at 20 seconds) when this error occurs. Since provisioned write capacity is allocated on a per-second basis, shouldn't the SDK automatically make the script wait once capacity is exhausted, until fresh capacity is allocated in the next second? I'm using the UpdateItem, PutItem, and DeleteItem operations.
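For reference, this is roughly how I understand the retry behavior can be tuned in aws-sdk-go-v2 (a configuration sketch, not my actual code; the attempt count and backoff cap shown here are placeholder values):

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/retry"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb"
)

func main() {
	// The standard retryer defaults to 3 attempts with backoff capped
	// at 20 seconds; both can be raised when building the config.
	cfg, err := config.LoadDefaultConfig(context.TODO(),
		config.WithRetryer(func() aws.Retryer {
			return retry.AddWithMaxAttempts(
				retry.AddWithMaxBackoffDelay(retry.NewStandard(), 30*time.Second),
				10, // placeholder attempt count
			)
		}),
	)
	if err != nil {
		log.Fatal(err)
	}
	_ = dynamodb.NewFromConfig(cfg)
}
```

Even with this, my expectation was that the built-in retryer alone would be enough to ride out per-second throttling.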
One guess I have is that when I make many requests in a short amount of time, they consume future capacity while the table is still busy processing earlier requests. However, the exception appeared after only a few seconds of execution, well under 20 seconds.
What's the proper way to handle this exception? Catching it, waiting a few seconds, and retrying feels arbitrary. I don't understand why the SDK isn't already taking care of this.