4

Is there a way to dynamically stop consuming events when using AWS Lambda's built-in event source mapping? In the example diagram I would rely on the Big Service's healthcheck to make that decision.

So far I know that if Big Service is down, I could retry processing and eventually put the message in a DLQ. I would prefer to keep the messages in the original queue and thus preserve their order, without having to manage processing from both the DLQ and the FIFO queue once Big Service is back.

The red X signifies a failing healthcheck.

John Rotenstein
nnedoklanov

3 Answers

1

I didn't try this, but one option could be:

  • Create another Lambda function that makes healthcheck requests to Big Service.
  • Create an EventBridge rule to trigger the healthcheck Lambda periodically (every 1 minute).
  • If the service is down, use UpdateEventSourceMapping's Enabled option to disable the event source mapping between Lambda and SQS.
  • When the service is up again, call UpdateEventSourceMapping once more to re-enable the mapping (a sketch of the healthcheck function follows below).
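
A minimal sketch of what that healthcheck Lambda could look like, using the AWS SDK for JavaScript v2. The environment variable names and the health endpoint are assumptions; you would supply the UUID of your own SQS-to-Lambda event source mapping.

    import { Lambda } from 'aws-sdk';
    import * as https from 'https';

    const lambda = new Lambda();

    // Assumptions: the UUID of your SQS -> Lambda event source mapping and
    // the URL of Big Service's healthcheck endpoint, passed via env vars.
    const MAPPING_UUID = process.env.EVENT_SOURCE_MAPPING_UUID!;
    const HEALTH_URL = process.env.BIG_SERVICE_HEALTH_URL!;

    // Resolves to true when the healthcheck answers with an HTTP status below 300.
    const isHealthy = (): Promise<boolean> =>
      new Promise((resolve) => {
        https
          .get(HEALTH_URL, (res) => {
            res.resume();
            resolve((res.statusCode ?? 500) < 300);
          })
          .on('error', () => resolve(false));
      });

    export const handler = async (): Promise<void> => {
      const healthy = await isHealthy();

      // Enabled: false pauses polling of the queue; Enabled: true resumes it.
      // A production version would first read the mapping's current state,
      // since updating a mapping that is still transitioning is rejected.
      await lambda
        .updateEventSourceMapping({ UUID: MAPPING_UUID, Enabled: healthy })
        .promise();
    };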

One of the drawbacks is that:

EventBridge does not provide second-level precision in schedule expressions. The finest resolution using a cron expression is a minute.

Ersoy
0

There is no function to temporarily/dynamically stop Lambda from consuming the events.

The only option would be to remove the trigger to prevent Lambda from being activated when messages arrive in the Amazon SQS queue.

Then, when things are okay again, add the trigger back. I haven't tried attaching a Lambda trigger when there are already messages in the queue, but hopefully they will be processed.
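
For reference, removing and re-adding the trigger programmatically would look roughly like this with the AWS SDK for JavaScript v2. The queue ARN and function name are placeholders, not values from the question.

    import { Lambda } from 'aws-sdk';

    const lambda = new Lambda();

    // Remove the trigger. The mapping is identified by its UUID, which you can
    // look up with listEventSourceMappings for the consumer function.
    export const removeTrigger = (uuid: string) =>
      lambda.deleteEventSourceMapping({ UUID: uuid }).promise();

    // Re-create the trigger once the downstream service is healthy again.
    // The ARN and function name below are placeholders for your own resources.
    export const addTrigger = () =>
      lambda
        .createEventSourceMapping({
          EventSourceArn: 'arn:aws:sqs:eu-west-1:123456789012:big-service-queue.fifo',
          FunctionName: 'process-big-service-messages',
          BatchSize: 10,
          Enabled: true,
        })
        .promise();

Messages that accumulated in the queue while the trigger was removed should be polled once the new mapping becomes active.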

John Rotenstein
0

I found a way to achieve this via Lambda's reserved concurrency.

As stated in the docs:

To throttle a function, set the reserved concurrency to zero. This stops any events from being processed until you remove the limit.

The Lambda SDK has a handy method to set the concurrency:

putFunctionConcurrency(params = {}, callback) ⇒ AWS.Request 

And when the downstream service is back, I could delete that setting and resume at the previous pace:

 deleteFunctionConcurrency(params = {}, callback) ⇒ AWS.Request 

My design now is to have a second Lambda function monitoring the downstream service's health. When the downstream service is down, I will set the reserved concurrency to 0, and when it is back up, I will delete the concurrency setting. I am still deciding whether to trigger that function on a CloudWatch event or on a time interval, but that is a different question.
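
A sketch of that monitoring function, using the AWS SDK for JavaScript v2. The consumer function name and the isDownstreamHealthy check are assumptions, not part of the original design.

    import { Lambda } from 'aws-sdk';

    const lambda = new Lambda();

    // Name of the SQS consumer Lambda whose concurrency is being toggled (assumption).
    const CONSUMER_FUNCTION = process.env.CONSUMER_FUNCTION_NAME!;

    // Stub: replace with a real healthcheck against the downstream service.
    const isDownstreamHealthy = async (): Promise<boolean> => {
      return true; // placeholder
    };

    export const handler = async (): Promise<void> => {
      if (await isDownstreamHealthy()) {
        // Downstream is back: remove the limit so the consumer resumes at its previous pace.
        await lambda
          .deleteFunctionConcurrency({ FunctionName: CONSUMER_FUNCTION })
          .promise();
      } else {
        // Downstream is down: reserved concurrency 0 throttles the consumer,
        // so the messages stay in the FIFO queue in their original order.
        await lambda
          .putFunctionConcurrency({
            FunctionName: CONSUMER_FUNCTION,
            ReservedConcurrentExecutions: 0,
          })
          .promise();
      }
    };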

nnedoklanov
  • I am going forward with this solution, as configuring the concurrency seems easier to achieve; there is less information required in the payload. I will take the EventBridge idea from Ersoy's answer. – nnedoklanov Feb 08 '21 at 09:41