
I am using a Lambda function that triggers when a new file is added to an S3 bucket. The Lambda function transforms the contents of the file and places it in another S3 bucket.
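For context, a minimal sketch of what such a handler might look like (Python with boto3; the bucket names, the 'emp/' prefix, and the `transform` helper are illustrative assumptions, not my exact code):

```python
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")

# Assumption: the name of the destination bucket.
TARGET_BUCKET = "targetbucket"


def lambda_handler(event, context):
    # Each S3 event record carries the bucket and object key that fired the trigger.
    for record in event["Records"]:
        source_bucket = record["s3"]["bucket"]["name"]
        # Object keys in S3 events are URL-encoded (e.g. spaces arrive as '+').
        source_key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        obj = s3.get_object(Bucket=source_bucket, Key=source_key)
        data = json.loads(obj["Body"].read())

        transformed = transform(data)  # hypothetical transformation step

        # Deriving the target key from the source file name keeps
        # 'emp.json', 'emp1.json', etc. as distinct objects in the target
        # bucket. Writing to a fixed key instead would overwrite the
        # previous result on every invocation.
        target_key = "emp/" + source_key.split("/")[-1]
        s3.put_object(
            Bucket=TARGET_BUCKET,
            Key=target_key,
            Body=json.dumps(transformed).encode("utf-8"),
        )


def transform(data):
    # Placeholder for the actual transformation logic.
    return data
```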

  1. When a new file 'emp.json' is added to the 'sourcebucket/test' folder in the S3 bucket, a new log stream is created in CloudWatch Logs and the file is added to the 'targetbucket/emp' folder. This is as expected.

  2. When I add another file 'emp1.json' to the 'sourcebucket/test' folder within 5 minutes, the logs are appended to the same log stream and the existing file gets replaced in the 'targetbucket/emp' folder. Instead of appending logs to the existing log stream, can we create a new log stream?

  3. When I add another file 'emp2.json' to the 'sourcebucket/test' folder after 5 minutes, a new log stream is created in CloudWatch Logs and the file is added to the 'targetbucket/emp' folder. This also works as expected.

The problem occurs only when I add a new file to the same folder in less than 5 minutes: the existing file gets overwritten or replaced. I am new to AWS Lambda. Let me know if this can be fixed.


Usha

1 Answer


The issue you're describing with the CloudWatch log streams is related to how the underlying Lambda containers are used.

  • When a new container is used to serve an invocation of the function, the service creates a new log stream and pushes all of the Lambda function's logs to it.
  • If an existing container is reused, the log stream created when the container was first spun up is used, and the function's logs are appended to that same stream.

Thus, each underlying container has a single log stream associated with it; you can observe this with the snippet below.
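To see this in action, log the stream name from inside your handler; `log_stream_name` and `aws_request_id` are standard attributes of the Lambda context object in Python:

```python
def lambda_handler(event, context):
    # log_stream_name stays the same for every invocation served by one
    # container, while aws_request_id is unique per invocation. If two
    # uploads arrive close together, you will see the same stream name
    # logged with two different request IDs.
    print(f"log stream: {context.log_stream_name}")
    print(f"request id: {context.aws_request_id}")
```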

Now, it is unclear why you say the Lambda function fails (or works unexpectedly) when logs are appended to an existing log stream, that is, when a container is reused. Please provide more insight into that.

Paradigm