I have a full-stack Remix application that uses an S3 bucket to store and retrieve images. When I run the application locally, uploading images to S3 works as expected. However, after deploying the application to Fly.io, any attempt to upload an image times out, eventually producing a 502 Bad Gateway error and the following line in the Fly monitoring logs: "could not make HTTP request to instance: connection error: timed out". I have tried altering permissions and other settings in the S3 console, but nothing seems to help.
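The only difference I can think of between running locally and running on Fly is how the AWS credentials and region reach the app, so one simple check is to log at startup whether the expected environment variables are present inside the Fly machine (without printing their values). This is only a sketch; the variable names are an assumption about what my S3 helper actually reads:

// Sketch: confirm the AWS-related env vars exist in the deployed container
// without logging their values. The names are assumptions about my setup.
for (const name of ['AWS_REGION', 'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY']) {
  console.log(`${name}: ${process.env[name] ? 'set' : 'MISSING'}`);
}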
This is my bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:PutObjectAcl",
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::BUCKETNAME/*"
    }
  ]
}
This is my CORS configuration on the S3 bucket:
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "GET",
      "HEAD",
      "PUT",
      "POST"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 3000
  }
]
The bucket currently has public access enabled ("Block all public access" is turned off), and Object Ownership is set to "Bucket owner enforced". Any help would be appreciated.
My action has a bunch of other stuff in it as well, but here's the part that calls the upload function:
const uploadHandler = unstable_composeUploadHandlers(
  async ({ name, contentType, data, filename }) => {
    if (name !== 'image') {
      return undefined;
    }
    // Collect the incoming chunks and re-wrap them as a Node Readable stream
    const dataArray1 = [];
    for await (const x of data) {
      dataArray1.push(x);
    }
    const stream = Readable.from(dataArray1);
    const uploadImageService = new UploadImageService(ctx);
    try {
      return await uploadImageService.uploadImage(
        stream,
        folder,
        `${id}-${itemName}.jpg`
      );
    } catch (error) {
      throw serverError({ code: 'image_upload_failed' });
    }
  },
  unstable_createFileUploadHandler()
);

let image: FormDataEntryValue | null = null;
const multipartFormData = await unstable_parseMultipartFormData(request, uploadHandler);
image = multipartFormData.get('image');
This is the service method:
async uploadImage(file: Readable, folder?: string, tag?: string): Promise<string> {
  const { s3 } = getS3();
  return await uploadImageToS3(s3, 'BUCKETNAME', file, folder, tag);
}
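I haven't included getS3() here; it is just a thin wrapper that builds an aws-sdk v2 client from environment variables. A simplified sketch of what it does (the option and env var names are assumptions, not copied from the real helper):

import AWS from 'aws-sdk';

// Simplified sketch of getS3(): build a v2 S3 client from environment variables.
// The env var names are assumptions about my setup, not the real helper's code.
export const getS3 = (): { s3: AWS.S3 } => {
  const s3 = new AWS.S3({
    region: process.env.AWS_REGION,
    credentials: {
      accessKeyId: process.env.AWS_ACCESS_KEY_ID ?? '',
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY ?? '',
    },
  });
  return { s3 };
};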
And this is where it's actually uploaded:
export const uploadImageToS3 = async (
  s3: AWS.S3,
  s3Bucket: string,
  buffer: any,
  folder?: string,
  tag?: string
): Promise<string> => {
  const filename = folder ? `${folder}/${tag || uuid()}` : tag || uuid();
  const params = {
    Bucket: s3Bucket,
    Key: filename,
    Body: buffer,
    ContentType: 'image/jpeg',
  };
  try {
    // This is where it times out; it never reaches the console.log() success message below when the app is deployed
    const s3UploadResponse = await s3.upload(params).promise();
    console.log(`File uploaded successfully to ${s3UploadResponse.Location}`);
    // Return the object key (the URL path without the leading slash)
    return new URL(s3UploadResponse.Location).pathname.substring(1);
  } catch (error) {
    console.error(error);
    throw new Error('File upload failed.');
  }
};
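If it helps with debugging, my understanding is that the v2 client can be given explicit HTTP timeouts so a stalled connection surfaces an error quickly instead of hanging until Fly's proxy returns the 502. A sketch of that (connectTimeout and timeout come from the aws-sdk v2 httpOptions config; the values are arbitrary):

import AWS from 'aws-sdk';

// Sketch: the same kind of client, but with explicit timeouts so a stalled
// connection fails fast with a visible error instead of hanging behind the proxy.
const s3WithTimeouts = new AWS.S3({
  region: process.env.AWS_REGION,
  httpOptions: {
    connectTimeout: 5000, // ms allowed to establish the connection
    timeout: 30000,       // ms of socket inactivity before the request is aborted
  },
});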