I have an Amazon SQS queue and a dead letter queue.
My Python program receives a message from the SQS queue and, if processing raises an exception, sends the message to the dead letter queue.
Now I have a second program that checks the dead letter queue to see whether those messages can still be processed. If they can, they are sent back to the main SQS queue. What I expect here in my testing is an infinite loop of sorts, but apparently the message disappears after two tries. Why is that?
When I add an extra field with a random value to the message, it behaves as I expect (an infinite loop of sending back and forth). Is there a mechanism in SQS that prevents what I am doing when the message content is identical?
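For illustration, here is a minimal sketch of the hypothesis above, assuming a FIFO queue with content-based deduplication (which, per the SQS API, derives the deduplication ID from a SHA-256 hash of the message body). Two identical bodies hash to the same deduplication ID, while a random extra field makes each send unique. The payload fields here are made up for the example:

```python
import hashlib
import json
import uuid

def content_dedup_id(body: str) -> str:
    # SQS FIFO content-based deduplication uses a SHA-256 hash of the body
    return hashlib.sha256(body.encode("utf-8")).hexdigest()

base = {"order_id": 123, "reprocess": True}

# Same body twice -> same deduplication ID (a second send within the
# 5-minute deduplication window is accepted by the API but dropped)
a = json.dumps(base, sort_keys=True)
b = json.dumps(base, sort_keys=True)
print(content_dedup_id(a) == content_dedup_id(b))  # True

# Adding a random field changes the body, hence the deduplication ID
c = json.dumps({**base, "nonce": str(uuid.uuid4())}, sort_keys=True)
print(content_dedup_id(a) == content_dedup_id(c))  # False
```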
def handle_retrieved_messages(self):
    if not self._messages:
        return None
    for message in self._messages:
        # Initialized so the except branch cannot hit a NameError when
        # JSON parsing fails before message_body is assigned
        message_body = None
        try:
            logger.info(
                "Processing Dead Letter message: {}".format(
                    message.get("Body")
                )
            )
            message_body = self._convert_json_to_dict(message.get("Body"))
            reprocessed = self._process_message(
                message_body, None, message_body
            )
        except Exception as e:
            logger.exception(
                "Failed to process the following SQS message:\n"
                "Message Body: {}\n"
                "Error: {}".format(message.get("Body", "<empty body>"), e)
            )
            # Send to error queue
            self._delete_message(message)
            self._sqs_sender.send_message(message_body)
        else:
            self._delete_message(message)
            if not reprocessed:
                # Send to error queue
                self._sqs_sender.send_message(message_body)
self._process_message checks whether message_body has the reprocess flag set to true. If it does, the message is sent back to the main queue.
Now I craft the message contents to contain an error, so every time it is processed in the main queue it goes to the dead letter queue. I expected this to keep looping, but SQS appears to have a mechanism that stops it (which is good).
The question is: what setting is that?
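One way to picture the suspected behaviour, assuming the queues are FIFO queues with the ContentBasedDeduplication attribute enabled (an assumption; the attribute name is from the SQS API), is a toy in-memory model: a resend of an identical body within the 5-minute deduplication window is accepted by the API but never enqueued again, so the ping-pong between the two queues stops on the second try.

```python
import hashlib

DEDUP_WINDOW_SECONDS = 300  # SQS FIFO deduplication interval: 5 minutes

class ToyFifoQueue:
    """In-memory stand-in for an SQS FIFO queue with content-based dedup."""

    def __init__(self):
        self._messages = []
        self._seen = {}  # dedup_id -> time the body was first seen

    def send_message(self, body: str, now: float) -> bool:
        dedup_id = hashlib.sha256(body.encode("utf-8")).hexdigest()
        first_seen = self._seen.get(dedup_id)
        if first_seen is not None and now - first_seen < DEDUP_WINDOW_SECONDS:
            return False  # duplicate: accepted by the API but not enqueued
        self._seen[dedup_id] = now
        self._messages.append(body)
        return True

q = ToyFifoQueue()
print(q.send_message('{"reprocess": true}', now=0.0))    # True: first send
print(q.send_message('{"reprocess": true}', now=10.0))   # False: deduplicated
print(q.send_message('{"reprocess": true}', now=400.0))  # True: window expired
```

This is only a sketch of one candidate explanation; a standard queue with a redrive policy (maxReceiveCount) could also remove messages after a fixed number of receives, so it is worth checking both queue attributes.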