How can I write the results of a file processing step in an AWS Lambda function with Python back to a file? I'm reading a file from S3 and looking for a special expression in each line. If this expression is included, I manipulate the line. Since Lambda does not seem to be able to write to a file (or S3 does not allow this), how can I collect the results of the line transformation and write all transformed lines into a file? The code looks like this:
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # bucket and key of the object that triggered the event
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    obj = s3.get_object(Bucket=bucket, Key=key)

    for line in obj['Body'].read().decode('utf-8').splitlines():
        if 'PCSI' in line:
            # strip or convert the special characters in matching lines
            newLine = line \
                .replace('E', '') \
                .replace('--', '') \
                .replace('<', ';') \
                .replace('>', '') \
                .replace('9_PCSI', '') \
                .replace('[', '') \
                .replace('|', ';') \
                .replace(']', ';') \
                .replace(' ', '')
When I print the results, it works fine and gives me the format I want for each line.
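My guess is that I could collect every manipulated line in a list and then write the joined result back to S3 in one call. A minimal sketch of just that part, where put_object is my assumption and the bucket and key names are made up:

import boto3

s3 = boto3.client('s3')

# dummy data standing in for the transformed lines collected in the loop above
transformed = ['A;B;C', 'D;E;F']

# join the collected lines and write them as one new S3 object
s3.put_object(
    Bucket='my-output-bucket',                     # placeholder bucket
    Key='processed/pcsi.txt',                      # placeholder key
    Body='\n'.join(transformed).encode('utf-8')
)

I'm not sure this is the right approach, though.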
One idea I had was to write into a file in the Lambda /tmp folder:
newFile = open('/tmp/pcsi.txt','a')
and modify the code like this:
...
if 'PCSI' in line:
    newFile.write(line \
        .replace(.....
but I do not know if this works, as I cannot "see" the file in /tmp. I also struggled with uploading the file back to S3. Is there a way to write each line into a file and store that file in S3?
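In case the /tmp route is the way to go, this is roughly how I imagine the end of the handler: write the collected lines to /tmp/pcsi.txt and then upload that file. Again, the target bucket and key are placeholders I made up:

import boto3

s3 = boto3.client('s3')

# dummy data standing in for the transformed lines built in the loop above
transformed = ['A;B;C', 'D;E;F']

# /tmp is the only writable directory in the Lambda execution environment
local_path = '/tmp/pcsi.txt'
with open(local_path, 'w') as f:
    f.write('\n'.join(transformed))

# upload the temporary file back to S3 (placeholder bucket and key)
s3.upload_file(local_path, 'my-output-bucket', 'processed/pcsi.txt')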