
I have an AWS Lambda function that takes in multipart form data, parses it for a document (which can be .pdf, .doc, or .docx), and then uploads that document to an S3 bucket. I'm receiving the form data, parsing it, and seemingly uploading it just fine. However, when I download the file, it cannot be opened if it's a .doc or .docx; if it's a .pdf, it opens as a blank page. Essentially the files are being corrupted somewhere in the pipeline, and at this point I don't know what I'm doing wrong. The data transfer steps are as follows:

  1. Form is uploaded client-side and base64-encoded into a FormData object (JS)
  2. Form is sent via jQuery ajax

form.js

$.ajax({
    type: 'POST',
    processData: false,
    url: `${API_BASE}/applications`,
    contentType: false,
    data: formData,
    success: (data) => {
        isFormValid = true;
        callback();
    },
    error: (err) => {
        console.log(err);
    }
});
  3. The corresponding Python API (built with Chalice) route handles it

route.py

import arrow
import boto3
import cgi
from io import BytesIO
from app import app, verify_token
from chalice.app import Request
from chalicelib.core.constants import aws_credentials

s3_path: str = 'tmp/'
s3_metrics_file: str = 'metrics.json'
s3_metrics_key: str = s3_path + s3_metrics_file

# Just testing different ways to instantiate client
s3_client = boto3.client("s3", **aws_credentials)
s3_resource_client = boto3.resource("s3", **aws_credentials)

company_name = 'company'

def _get_parts(current_request) -> dict:
    """Parse multipart form data"""
    raw_file: BytesIO = BytesIO(current_request.raw_body)
    content_type = current_request.headers['content-type']
    _, parameters = cgi.parse_header(content_type)
    parameters['boundary'] = parameters['boundary'].encode('utf-8')
    parsed: dict = cgi.parse_multipart(raw_file, parameters)

    return parsed


@app.route('/applications', cors=True, content_types=['multipart/form-data'], methods=['POST'])
def create_application() -> dict:
    """Creates an application object, stores it and sends an email with the info"""
    current_request: Request = app.current_request

    # Resume has to stay as bytes
    body: dict = {k: v[0].decode() if k != 'resume' else v[0] for (k, v) in _get_parts(current_request).items()}
    resume: bytes = body.get('resume', None)
    file_name: str = body.get('file_name')
    portfolio: str = body.get('portfolio', None)
    first_name: str = body.get('first_name')
    last_name: str = body.get('last_name')
    file_name_new: str = f'{first_name}_{last_name}_{arrow.utcnow().format("YYYY-MM-DD")}.{file_name.split(".")[-1]}'
    file_location: str = f'https://s3.amazonaws.com/{company_name}-resumes/{file_name_new}' if resume else None

    s3_client.put_object(Body=resume, Bucket=f'{company_name}-resumes', Key=file_name_new)
    # Different way to do the same thing
    # s3_resource_client.Bucket('52inc-resumes').put_object(Key='test.jpg', Body=resume)

There are no errors on either the client or the server side. It seems like an encoding-translation issue somewhere in the path from base64 to bytes to a file on S3. How can I correct this?

DjH

1 Answer


I ended up solving this by sending JSON with the base64 string instead of using multipart/form-data and the JavaScript FormData object. From there I could simply parse out the base64 string, decode it, and send the bytes up to S3. I'd still be interested to know whether there is a way to make multipart/form-data work for this, though.
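For reference, a minimal sketch of the server side of this JSON-plus-base64 approach, under my own assumptions about the payload shape: the field names `resume_b64` and `file_name` are hypothetical, and `s3_client` can be any object exposing `put_object` (such as a `boto3` client):

```python
import base64
import json


def handle_application_json(raw_body: bytes, s3_client, bucket: str) -> str:
    """Parse a JSON body carrying a base64-encoded document and upload it to S3.

    Returns the public-style URL of the uploaded object.
    """
    body = json.loads(raw_body)
    # Decode the base64 string back into the original file bytes;
    # uploading the still-encoded string is exactly what corrupts the file.
    resume: bytes = base64.b64decode(body["resume_b64"])
    key: str = body["file_name"]
    s3_client.put_object(Body=resume, Bucket=bucket, Key=key)
    return f"https://s3.amazonaws.com/{bucket}/{key}"
```

In a Chalice route this would be called with `app.current_request.raw_body`; since the content type is plain `application/json`, nothing in API Gateway or the parsing layer touches the binary payload, which sidesteps the multipart problem entirely.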

DjH