I am using change streams from DocumentDB to read time-sequenced events with a Lambda function: an EventBridge rule triggers the Lambda every 10 minutes to poll the change stream and archive the data to S3. Is there a way to scale the change stream reads using a resume token and a polling model? With a single Lambda reading the change stream, my archival process falls way behind: our application writes a couple of million records during peak periods, but the process archives at most about 500k records to S3. Is there a way to scale this? Running Lambdas in parallel might not work, since that would lead to race conditions.
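For reference, here is a minimal sketch of the single-Lambda resume-token polling model described above, assuming pymongo for the DocumentDB connection and boto3 for S3 and a DynamoDB checkpoint table; the connection string, collection, bucket, and table names are placeholders:

```python
import json
import boto3
from pymongo import MongoClient

s3 = boto3.client("s3")
checkpoints = boto3.resource("dynamodb").Table("archival-checkpoints")  # placeholder table

def lambda_handler(event, context):
    client = MongoClient("mongodb://docdb-cluster:27017/?tls=true&replicaSet=rs0")
    coll = client["appdb"]["events"]

    # Resume from the last saved token, if any
    saved = checkpoints.get_item(Key={"pk": "events-archiver"}).get("Item")
    resume_after = json.loads(saved["token"]) if saved else None

    batch = []
    with coll.watch(resume_after=resume_after, max_await_time_ms=1000) as stream:
        while context.get_remaining_time_in_millis() > 30_000:
            change = stream.try_next()
            if change is None:
                break  # caught up with the change stream for now
            batch.append(change)

        if batch:
            s3.put_object(
                Bucket="my-archive-bucket",  # placeholder bucket
                Key=f"archive/{context.aws_request_id}.json",
                Body=json.dumps(batch, default=str),
            )
            # Persist the resume token so the next invocation continues from here
            checkpoints.put_item(Item={"pk": "events-archiver",
                                       "token": json.dumps(stream.resume_token)})
```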
3 Answers
Can't you use Step Functions? Your EventBridge rule fires a Step Functions state machine (rather than the Lambda directly), and the state machine can keep the state while archiving the records.
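For example, a rough sketch of what the Lambda task inside such a state machine loop might look like, assuming pymongo; the connection string and batch limit are placeholders. The resume token travels in the state machine's input/output, so each iteration resumes where the previous one stopped:

```python
from pymongo import MongoClient

def lambda_handler(event, context):
    coll = MongoClient("mongodb://docdb-cluster:27017/?tls=true&replicaSet=rs0")["appdb"]["events"]
    resume_after = event.get("resumeToken")  # None on the first iteration

    batch = []
    with coll.watch(resume_after=resume_after, max_await_time_ms=1000) as stream:
        while len(batch) < 10_000 and context.get_remaining_time_in_millis() > 30_000:
            change = stream.try_next()
            if change is None:
                break  # no more buffered changes right now
            batch.append(change)
        token = stream.resume_token if batch else resume_after

    # ... write `batch` to S3 here, as in the existing Lambda ...

    # A Choice state in the state machine can loop back to this task while "done" is false
    return {"resumeToken": token, "done": not batch}
```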

Chathula Sampath Perera
I am not certain about DocumentDB, but I believe in MongoDB you can create a change stream with a filter. That way you can open multiple change streams, each acting on a portion (filter) of the data, so several of them can work concurrently on one cluster.
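For example (untested against DocumentDB), here is a pymongo sketch of a filtered change stream: each worker watches only documents whose (hypothetical) "region" field matches its assigned value, so several workers can consume changes concurrently without overlapping. The connection string, collection, and field values are placeholders.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://docdb-cluster:27017/?tls=true&replicaSet=rs0")
coll = client["appdb"]["events"]

# This worker only receives changes for documents in the "us-east" partition
pipeline = [{"$match": {"fullDocument.region": "us-east"}}]

with coll.watch(pipeline, full_document="updateLookup") as stream:
    for change in stream:
        print(change["operationType"], change["documentKey"])
```

Other workers would run the same code with a different filter value (or a hash/modulo expression on a key field) so the partitions together cover the whole collection.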

barrypicker
Thank you barrypicker for your answer. I'd appreciate it if you could provide an example. – Mr9 Mar 03 '23 at 02:51