I have inherited an application running on AWS after the original developer left. I don't understand how part of it is set up.
The application is a Lambda function (written in Python, though I don't think that's significant here) that accepts events from an SQS queue and writes to a Kinesis Data Firehose delivery stream, which delivers the data to an OpenSearch domain (AWS's fork of Elasticsearch). That part I understand.
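For context, the part I do understand looks roughly like this. This is a simplified sketch, not the actual code: the stream name and the exact record shaping are hypothetical, and it assumes the standard boto3 `put_record_batch` call.

```python
def handler(event, context, firehose_client=None):
    """Lambda entry point: forward each SQS record body to Firehose.

    The firehose_client parameter is only here so the function can be
    exercised without AWS credentials; in Lambda it defaults to boto3.
    """
    if firehose_client is None:
        import boto3  # real client only when running in AWS
        firehose_client = boto3.client("firehose")

    # Each SQS message body becomes one newline-terminated Firehose record.
    records = [
        {"Data": (record["body"] + "\n").encode("utf-8")}
        for record in event["Records"]
    ]
    resp = firehose_client.put_record_batch(
        DeliveryStreamName="my-delivery-stream",  # hypothetical name
        Records=records,
    )
    return {"failed": resp.get("FailedPutCount", 0)}
```

Nothing in here touches S3 directly, which is why the bucket writes puzzle me.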
The application also somehow writes all handled events to an S3 bucket. If delivery to OpenSearch succeeds, the object is written under the key prefix Data/%Y/%m/%d/%H/, where %Y, %m, %d, and %H are the year, month, day, and hour of the request, and the object name under that prefix is a guid (a globally unique ID string). If delivery to OpenSearch fails, the event is written to the same bucket under the prefix Data/elasticsearch-fail/%Y/%m/%d/%H/. None of this appears in the Lambda's Python code.
I can't figure out where this write to S3 is configured. It isn't part of the Lambda, and I don't see anything in the configuration of either the OpenSearch domain or the Firehose delivery stream that would do it. But I am certain I am missing something.
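To rule the delivery stream in or out, I tried dumping its S3 backup settings from the `DescribeDeliveryStream` output. A sketch of what I ran, assuming the response shape documented for boto3's `firehose.describe_delivery_stream` (the destination key and field names such as `S3BackupMode` come from those docs and may differ between the Elasticsearch- and OpenSearch-flavoured destination descriptions; the sample response below is invented):

```python
def s3_backup_settings(description):
    """Pull any S3 backup configuration out of a DescribeDeliveryStream
    response. The destination description key varies with the destination
    type, so scan every dict-valued entry for an S3BackupMode field."""
    settings = []
    for dest in description["DeliveryStreamDescription"]["Destinations"]:
        for key, cfg in dest.items():
            if not isinstance(cfg, dict):
                continue  # skip scalar fields like DestinationId
            mode = cfg.get("S3BackupMode")
            if mode is not None:
                s3 = cfg.get("S3DestinationDescription", {})
                settings.append({
                    "destination": key,
                    "mode": mode,
                    "bucket": s3.get("BucketARN"),
                    "prefix": s3.get("Prefix"),
                })
    return settings

# Invented sample response, trimmed to the fields the helper reads:
sample = {
    "DeliveryStreamDescription": {
        "Destinations": [{
            "DestinationId": "destinationId-000000000001",
            "ElasticsearchDestinationDescription": {
                "S3BackupMode": "AllDocuments",
                "S3DestinationDescription": {
                    "BucketARN": "arn:aws:s3:::my-bucket",  # hypothetical
                    "Prefix": "Data/",
                },
            },
        }]
    }
}
```

If the real stream reported a backup mode like `AllDocuments` here, that would explain the success-path writes, but I'm not sure I'm looking in the right place.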