I've been dealing with very slow upload speeds from my application to S3. I have a Rails application running in a single-container Docker environment on Elastic Beanstalk, and a specific bucket that stores files created by users. Both are in the same region and availability zone. The files being uploaded are very small (< 1 KB) text files, yet they take 40 seconds on average to upload. This seems ludicrous to me considering the transfer never leaves the datacenter. Reading the files is near instant, as is moving and deleting them. Furthermore, 40 seconds seems to be a fixed baseline rather than a function of size: a 10-byte document and a 29 KB document both took the same amount of time to upload.
I'm using the Ruby aws-sdk to perform the upload, which looks like this:
file = Tempfile.new(file_name)
file.write(@content)
key = "resources/#{file_name}"
s3 = Aws::S3::Resource.new(region: ENV["AWS_REGION"])
obj = s3.bucket(bucket_name).object(key)
logger.info "** Uploading file #{file_name} to S3"
logger.info " - File size is #{file.size} bytes"
start_time = Time.now.to_i
obj.upload_file(file)
end_time = Time.now.to_i
seconds = end_time - start_time
elapse = Time.at(seconds).utc.strftime("%H:%M:%S")
logger.info "** File upload took #{elapse} to complete"
and I'm seeing output like this:
** Uploading file untitled-NUB3eAURYspbpdaBqu.md to S3
- File size is 23 bytes
** File upload took 00:00:41 to complete
I've exhausted my research ability on this issue after reading hundreds of other posts on SO, the AWS forums, and elsewhere. Any insight into how I can improve this would be greatly appreciated.
Update: I'm passing a Tempfile object to upload_file, not a file path string. That wasn't clear from my previous code example.
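Since upload_file is being given the Tempfile object itself, one thing worth ruling out is the IO position: after the write call, the file's position sits at end-of-file, so anything that reads from the current position gets zero bytes, even though the file still reports its full size. This is plain Ruby behavior, no AWS involved; a minimal sketch:

```ruby
require "tempfile"

file = Tempfile.new("example")
file.write("hello world")   # position is now at end-of-file

at_end = file.read          # reading from the current position returns ""
size   = file.size          # => 11; Tempfile#size flushes before stat-ing

file.rewind                 # move the position back to the start
contents = file.read        # now the full content is readable

file.close
file.unlink
```

If the SDK were to derive Content-Length from the file's size but stream the body from the current position, S3 would wait for bytes that never arrive and eventually fail the request, which would fit a fixed ~40-second floor regardless of file size (and a RequestTimeout error). Calling file.rewind before obj.upload_file(file), or passing file.path instead of the Tempfile, would be a cheap way to test this theory.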
RequestTimeout