
I'm trying to upload a PNG/JPG image smaller than 10 kilobytes to Amazon S3. The upload succeeds, but the file is stored with 0 bytes.

When I try to view the image at the link provided by S3, I get a blank page.

If I upload an image larger than 10 kilobytes, it works fine.

Does anyone have an idea what the problem is?

file_name = account[:img_token] + File.extname(img.original_filename)
file = Tempfile.new(file_name, encoding: 'ascii-8bit')
file.write(img.read)
path = file.path
bucket_name = 'bucket'
s3 = AWS::S3.new(access_key_id: ENV['S3_ACCESS_KEY'], secret_access_key: ENV['S3_SECRET_ACCESS_KEY'])
link = 'https://s3-eu-west-1.amazonaws.com/' + bucket_name + '/' + file_name
key = file_name
object = s3.buckets[bucket_name].objects[key].write(file: path, acl: 'public-read')
John Rotenstein
Zvi
  • I doubt size is the reason. I've been uploading (programmatically) all sorts of sizes (including less than 10K) and never had any issues. I think sample code would be useful. – Seva May 30 '16 at 20:40
  • @seva Thanks, I added the code. – Zvi May 31 '16 at 06:29
  • @seva When I upload manually inside S3, it's ok – Zvi May 31 '16 at 08:01
  • I've been uploading both manually and by running some code I wrote, but I used the Go language. What I'm saying is that the AWS interface is definitely fine. It must be an issue with the uploading code, though not necessarily with your code – it might be the library you use. I'd suggest adding the language to the list of tags. Is it Ruby? If so, it seems officially supported, so it must be working correctly. Unfortunately out of my league, but I'm sure there are plenty of people on Stack Overflow using Ruby with AWS. It's rather about providing as many relevant details as possible. – Seva May 31 '16 at 11:27
  • Again, I'm not strong in Ruby, but don't you need to close() the file after writing? It may be that your file is not yet written to disk (still buffered in memory) when you're already trying to upload it. – Seva May 31 '16 at 11:30
  • @seva Thanks for your answer. I just moved the file.close() before the upload to S3. But it's strange, because it only occurs with images smaller than 10 kilobytes. Anyway, it works! Thanks! – Zvi May 31 '16 at 15:03
  • 1
    It's likely happening with all sizes - you may lose last chunk of data in the buffer. It's just much more noticeable on small ones. For example, with buffer size of 8K, you lose 100% on all images smaller then 8K. But on large files, you lose (last) 4K on average. So, given that everyone counts Ks differently these days, if your file is few 100s K, you won't notice the difference. To be fair, some libraries do write all data at once if it received in one chunk and exceeds buffer size. But that's rarely the case. Most write in buffer sized chunks. – Seva May 31 '16 at 19:39
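The buffering behavior discussed in the comments above can be demonstrated with Ruby's Tempfile: bytes written to the IO sit in an in-process buffer, and another reader opening the same path (as the S3 SDK does when given a file path) may see 0 bytes until the file is flushed or closed. A minimal sketch (the sizes here are illustrative):

```ruby
require 'tempfile'

tmp = Tempfile.new('upload', encoding: 'ascii-8bit')
tmp.write('x' * 1024) # 1 KB: small enough to stay in the IO buffer

# A separate reader checking the path now will likely see 0 bytes,
# because the data has not yet been flushed to disk.
before = File.size(tmp.path)

tmp.flush # or tmp.close; either pushes the buffered bytes to disk
after = File.size(tmp.path)

puts "before flush: #{before} bytes, after flush: #{after} bytes"
```

This matches the reported symptom: small images fit entirely in the buffer, so the SDK reads a 0-byte file, while larger images have most of their bytes already on disk.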

2 Answers


It looks like you are passing the SDK a file whose position is at the end of the file. I suspect the SDK is calling #read and getting nothing back. Try rewinding the file first.
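A minimal sketch of the rewind fix, assuming a Tempfile like the one in the question (the contents here are placeholder bytes, not a real image):

```ruby
require 'tempfile'

file = Tempfile.new('img', encoding: 'ascii-8bit')
file.write('fake image bytes')
file.flush # push buffered data to disk

# After writing, the file position sits at end-of-file, so a plain
# #read returns an empty string, and an upload from this handle
# would send 0 bytes.
at_eof = file.read

file.rewind # reset the position to the start before handing the IO to the SDK
contents = file.read

puts "read at EOF: #{at_eof.inspect}, after rewind: #{contents.inspect}"
```

With the v1 SDK call shown in the question, the same idea applies whether you pass the IO object itself or write from the path: make sure the data is flushed and, if you pass the open handle, rewind it first.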

Trevor Rowe

According to this answer:

Write, read, and delete objects containing from 1 byte to 5 terabytes of data each. The number of objects you can store is unlimited.

The scion