I am using a DynamoDB Stream (non-Kinesis version) and I've mapped the stream to a Lambda to process events.

Two things I understand about this stream are:

  1. If the Lambda fails, it will automatically retry with the stream event.
  2. A DynamoDB Stream only keeps records for up to 24 hours.

My concern is that I want to make sure my Lambda never misses a DynamoDB event, even if the Lambda keeps failing for more than 24 hours.

How can I ensure that the stream records are not lost forever if my Lambda fails for an extended period of time?

My initial thought is to treat this like I would a Lambda that reads from an SQS queue: add a retry policy and a DLQ to the Lambda, so that failed events land somewhere I can reprocess them later.
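Something like this is what I had in mind, roughly sketched with boto3 (the ARNs and function name are placeholders, and I'm not sure whether these event source mapping options behave the same for a DynamoDB Streams source as they do for SQS, or whether the on-failure destination gets the full record):

```python
import boto3

lambda_client = boto3.client("lambda")

# Map the DynamoDB Stream to the Lambda with bounded retries and an
# on-failure destination, the way I'd set up a DLQ for an SQS-triggered Lambda.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:dynamodb:us-east-1:123456789012:table/MyTable/stream/2024-01-01T00:00:00.000",
    FunctionName="my-processor",  # placeholder
    StartingPosition="TRIM_HORIZON",
    MaximumRetryAttempts=5,
    MaximumRecordAgeInSeconds=3600,
    BisectBatchOnFunctionError=True,
    DestinationConfig={
        "OnFailure": {
            "Destination": "arn:aws:sqs:us-east-1:123456789012:ddb-stream-dlq"  # placeholder
        }
    },
)
```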

Is this all that needs to be done to achieve what I want? I am struggling to find documentation on how to do this with a DynamoDB Stream. Is DDB Stream behavior any different from an SQS queue?

Duke Silver

1 Answer

Why would the Lambda fail for 24 hours?

My guess is your Lambda relies on something downstream that you're anticipating might be down for a long duration. In that case I'd suggest the Lambda decide when to "give up" and send its work items to your own SQS queue for later processing. You can't keep items in the DynamoDB Stream for longer than 24 hours, nor does the Stream have a DLQ.
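A minimal sketch of that "give up and park it" pattern, assuming Python/boto3; the downstream call and the queue URL environment variable are placeholders you'd supply yourself:

```python
import json
import os

import boto3

sqs = boto3.client("sqs")
# Placeholder: a queue you create yourself to hold records you couldn't process.
PARKING_QUEUE_URL = os.environ["PARKING_QUEUE_URL"]


def call_downstream(record):
    """Placeholder for the downstream dependency that might be unavailable."""
    raise NotImplementedError


def handler(event, context):
    for record in event["Records"]:
        try:
            call_downstream(record)
        except Exception:
            # Downstream is down: give up on automatic retries and park the
            # full stream record in our own queue so it outlives the 24 hours.
            sqs.send_message(
                QueueUrl=PARKING_QUEUE_URL,
                MessageBody=json.dumps(record),
            )
    # Returning without raising marks the batch as processed, so the event
    # source mapping moves on instead of retrying records we've already parked.
```

Once a record is parked durably, let the batch succeed; if the handler keeps raising, the event source mapping retries the same batch and blocks the shard until the records expire.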

Another option: DynamoDB can also stream its changes into a Kinesis Data Stream, which supports much longer retention (configurable up to 365 days). Lambda can still consume that stream, but through a separate Kinesis event source mapping you set up yourself; the built-in DynamoDB trigger only applies to DynamoDB Streams.
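Roughly, that wiring looks like this with boto3 (table, stream, and function names are placeholders; check current limits and pricing before relying on it):

```python
import boto3

dynamodb = boto3.client("dynamodb")
kinesis = boto3.client("kinesis")
lambda_client = boto3.client("lambda")

# Route the table's change records into an existing Kinesis Data Stream.
dynamodb.enable_kinesis_streaming_destination(
    TableName="MyTable",  # placeholder
    StreamArn="arn:aws:kinesis:us-east-1:123456789012:stream/my-ddb-changes",
)

# Kinesis retention is configurable well beyond 24 hours (up to 365 days).
kinesis.increase_stream_retention_period(
    StreamName="my-ddb-changes",
    RetentionPeriodHours=168,  # e.g. one week
)

# Invoke the Lambda from the Kinesis stream via its own event source mapping.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-east-1:123456789012:stream/my-ddb-changes",
    FunctionName="my-processor",  # placeholder
    StartingPosition="TRIM_HORIZON",
    BatchSize=100,
)
```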

hunterhacker