
We are incorporating a FIFO SQS queue that will receive messages from a Lambda function when an exception occurs. Due to some constraints, I am not able to set the MessageGroupId to a grouping identifier such as SessionId or CustomerId, which would give strict ordering among related messages. So I am planning to use a UUID as the MessageGroupId. I also considered using a static MessageGroupId for all messages pushed to that queue, but there is a limit on inflight messages with the same MessageGroupId, i.e., 20k messages. So, I want to know: is any ordering maintained, based on timestamp, among messages with unique MessageGroupIds? If not, how can I achieve it?
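For context, here is a minimal sketch of the approach described above. The queue URL and event shape are hypothetical, and the actual AWS call (`sqs.send_message(**params)` via boto3) is omitted so the snippet stays self-contained. The key point it illustrates: a fresh UUID per message puts every message in its own group, so FIFO ordering applies only within each single-message group, i.e. there is effectively no ordering guarantee across messages.

```python
import json
import uuid

# Hypothetical queue URL for illustration only.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/failed-events.fifo"

def build_send_params(event: dict) -> dict:
    """Build SQS send_message parameters for a failed DDB stream event.

    A fresh UUID per message means every message lives in its own
    message group, so FIFO ordering applies only within that single
    message -- SQS makes no ordering guarantee ACROSS groups.
    """
    return {
        "QueueUrl": QUEUE_URL,
        "MessageBody": json.dumps(event),
        "MessageGroupId": str(uuid.uuid4()),          # unique per message
        "MessageDeduplicationId": str(uuid.uuid4()),  # or use content-based dedup
    }

params = build_send_params({"eventID": "abc123", "eventName": "MODIFY"})
print(params["MessageGroupId"])
```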

  • What attribute associated with your messages do you actually want to use as the group ID to ensure FIFO among those grouped messages? If it's customer ID, why can't you use customer ID? – jarmod May 20 '22 at 16:00
  • I am pushing a batch of DDB Events to SQS. So, when the Lambda fails to process the batch of DDB Events via the DDB Stream, I push the whole batch to SQS. So, I would have multiple CustomerIds in that batch. – Sumit Prasad May 20 '22 at 16:50
  • If you want the SQS messages to be processed in the same order as they were going to be processed by the Lambda function then you can use the Lambda request ID as the SQS FIFO group ID (and send the messages to SQS in the order you want). – jarmod May 20 '22 at 22:13
  • Got it. But how is it ensured that order is maintained among multiple messages? Is it a feature that the sequence is maintained when the Lambda pushes messages to SQS? – Sumit Prasad May 21 '22 at 03:06
  • See SQS FIFO [delivery logic](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues-understanding-logic.html). Messages are ordered based on time received by SQS within each message group (as indicated by message group ID). – jarmod May 21 '22 at 11:02
  • But the groupId / requestId will change for every message. How does this delivery logic matter here? The documentation only covers the case of the same group Id. – Sumit Prasad May 21 '22 at 12:22
  • Ah OK, I was mistakenly under the impression that a single Lambda function was populating the SQS FIFO queue with multiple messages. Be aware of how DynamoDB Streams [Lambda error handling](https://docs.aws.amazon.com/lambda/latest/dg/with-ddb.html#services-dynamodb-errors) works. It's going to do ordered retries per shard for you, for up to 24 hours. – jarmod May 21 '22 at 17:15
  • Let me reiterate: there is a DDB table, a Lambda, and a FIFO SQS queue. The DDB Stream triggers the Lambda whenever there is an update. The Lambda reads the update and syncs the data to Elasticsearch. I am adding an SQS queue in the code so that whenever there is an exception from ES, I gracefully push the DDB update to SQS, i.e., without raising an exception. Separately, whenever there is a message in SQS, we will have the same Lambda consume it once we are sure that ES is green for updates. I will not raise any exception when DDB updates fail to process; I will raise exceptions only for SQS. – Sumit Prasad May 22 '22 at 15:10
  • I'm saying that may be a problematic architecture so you should first decide whether or not the existing, native DynamoDB Streams & Lambda's error handling behavior will suffice for you. If you move failed messages to a second SQS queue (which is effectively acting as a DLQ) then new messages that arrive from DynamoDB Streams could presumably be processed out of sequence (e.g. if Elasticsearch recovers before you have emptied your DLQ). – jarmod May 22 '22 at 15:46
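To make the delivery logic discussed in the comments concrete, here is a toy, stdlib-only model of SQS FIFO semantics (not the real service): within a message group, messages come out in arrival order; across groups there is no ordering guarantee. If a whole failed batch is sent under one group ID, e.g. the Lambda request ID as suggested above, its internal order is preserved; the group ID `"req-42"` below is hypothetical.

```python
from collections import OrderedDict, deque

class ToyFifoQueue:
    """Toy model of SQS FIFO ordering: strict arrival order is
    guaranteed only among messages sharing a MessageGroupId."""

    def __init__(self):
        self._groups = OrderedDict()  # group_id -> deque of message bodies

    def send(self, group_id: str, body: str) -> None:
        self._groups.setdefault(group_id, deque()).append(body)

    def receive(self, group_id: str) -> str:
        # Within a group, the oldest message is always delivered first.
        return self._groups[group_id].popleft()

q = ToyFifoQueue()
# Whole failed batch sent under one group ID (e.g. the Lambda request ID):
for body in ["update-1", "update-2", "update-3"]:
    q.send("req-42", body)

print([q.receive("req-42") for _ in range(3)])  # arrival order preserved
```

With one UUID group per message, each group holds a single message, so the within-group guarantee buys nothing across messages.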

0 Answers