
I've got two AWS Lambda functions connected together via an SQS queue. They'd been working correctly for a while, but recently the size of the payload has increased and it now breaks the 256 KB limit.

After reading Question 43738341 I'd like to use the AmazonSQSExtendedClient. I've looked at the code on GitHub and have made my first Lambda function send messages correctly (small payloads go via SQS, larger payloads are written to S3).

What I'm struggling with is receiving messages: The entry point for my second lambda looks like:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;

public class SqsHandler implements RequestHandler<SQSEvent, Void> {

  @Override
  public Void handleRequest(SQSEvent event, Context context) {
    SQSEvent.SQSMessage record = event.getRecords().get(0);

    System.out.println("0. record " + record.toString());
    System.out.println("1. eventSource " + record.getEventSource());
    System.out.println("2. eventSourceARN " + record.getEventSourceArn());
    System.out.println("3. MessageId " + record.getMessageId());
    System.out.println("4. ReceiptHandle " + record.getReceiptHandle());
    System.out.println("5. Body " + record.getBody());

    return null;
  }
}

When my entry point is called I've already received my SQS event. I don't (and shouldn't?) know the queue it's come from.

In the code sample on GitHub (which is copied almost line by line onto all of the other sites), the sender and receiver are the same Lambda. Consequently it's able to create a ReceiveMessageRequest object because it knows the queue URL.

In a real system the sender and receiver are never the same. I could even be receiving data from multiple Lambdas over multiple queues.

What I don't understand is how the receiving Lambda should be written. The sample code on the AWS Website says:

final ReceiveMessageRequest receiveMessageRequest =
        new ReceiveMessageRequest(myQueueUrl);
List<Message> messages = sqsExtended
        .receiveMessage(receiveMessageRequest).getMessages();

But this requires me to know the URL of the queue. It also doesn't tie into the SQSEvent, which will need to be consumed.

Stormcloud

2 Answers


The message body of an SQSEvent record where the payload exceeds 256KB should contain a JSON string representing an S3 pointer that consists of the s3BucketName and s3Key attributes for where the actual payload is stored. See the MessageS3Pointer class definition and storeMessageInS3 method of the AmazonSQSExtendedClient class for reference. With this information you should then be able to fetch the message content directly from S3 without relying on the SQS Extended Client Library within your Lambda event handler.
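Putting that into practice, here is a minimal sketch of the detect-and-extract step. The pointer format assumed below (a JSON array whose second element carries `s3BucketName` and `s3Key`) matches what the `MessageS3Pointer` class describes, but verify it against the library version you use; the `LargePayloadResolver` class and `extractS3Pointer` helper are illustrative names, not part of any AWS API.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LargePayloadResolver {

  // Hypothetical helper: returns {bucket, key} if the body looks like an
  // extended-client S3 pointer, or null if it is an ordinary inline payload.
  static String[] extractS3Pointer(String body) {
    if (body == null || !body.contains("s3BucketName")) {
      return null;
    }
    Matcher bucket = Pattern.compile("\"s3BucketName\"\\s*:\\s*\"([^\"]+)\"").matcher(body);
    Matcher key = Pattern.compile("\"s3Key\"\\s*:\\s*\"([^\"]+)\"").matcher(body);
    if (bucket.find() && key.find()) {
      return new String[] { bucket.group(1), key.group(1) };
    }
    return null;
  }

  public static void main(String[] args) {
    // Example body shapes: one offloaded pointer (shape assumed; check
    // MessageS3Pointer / storeMessageInS3 for the exact form) and one inline payload.
    String offloaded = "[\"com.amazon.sqs.javamessaging.MessageS3Pointer\","
        + "{\"s3BucketName\":\"my-bucket\",\"s3Key\":\"payloads/abc123\"}]";
    String inline = "{\"orderId\":42}";

    String[] pointer = extractS3Pointer(offloaded);
    System.out.println(pointer[0] + " / " + pointer[1]); // my-bucket / payloads/abc123
    System.out.println(extractS3Pointer(inline) == null); // true

    // With the pointer in hand the payload can be fetched directly, e.g.:
    // String payload = AmazonS3ClientBuilder.defaultClient()
    //     .getObjectAsString(pointer[0], pointer[1]);
  }
}
```

Inside the `handleRequest` above, you would run each `record.getBody()` through a check like this and fetch from S3 only when a pointer is found, so small inline payloads keep working unchanged.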

Robert Lysik
    Thanks. If I'm understanding this right then what I need to do is write code to read the body, decide if the payload is stored on S3 and read it. I guess it's not rocket science, but I was expecting that to be part of the AmazonSQSExtendedClient API. – Stormcloud Jul 17 '19 at 11:19

In the receiver, replace

for (const rec in event.Records) {
    const msg = JSON.parse(event.Records[rec].body);

with

for (const rec in event.Records) {
    const s = await s3.getObject({
        Bucket: process.env.SQS_BODY_BUCKET,
        Key: event.Records[rec].body
    }).promise();
    const msg = JSON.parse(s.Body);
Tom Chiverton