
I have a Rails 3 app that I develop locally and deploy to Amazon's Elastic Beanstalk for production. There are several places in the app where images can be uploaded through HTML forms; after upload, I send the files to S3 for storage. This workflow works fine locally, but in production I get a 500 Internal Server Error response during the upload (I'm fairly sure it happens before any communication with S3).

I SSH'ed into my EC2 instance and found traces of the error in /var/app/support/logs/passenger.log. Here's the line generated during the upload:

2013/03/30 00:58:52 [crit] 1723#0: *196227 open() "/tmp/passenger-standalone.1645/client_body_temp/0000000014" failed (2: No such file or directory), client: ip_address, server: _, request: "POST /admin/users/1 HTTP/1.1", host: "www.my_domain.com", referrer: "https://www.my_domain.com/admin/users/1/edit"

Does anyone have any words of wisdom as to why I'm not able to upload a file to Elastic Beanstalk from my Rails app?

Thanks in advance for your help!

ajporterfield
  • For me this was a terrible rookie mistake. I needed to bundle install locally. On heroku, I'm used to getting an error message for this kind of thing. – colllin Jan 09 '14 at 19:30

3 Answers


After some research, I believe the problem is that a daily cron job (/etc/cron.daily/tmpwatch) is removing the passenger-standalone.* directory that's critical for file uploads.

I was able to get uploads working again by restarting the app server. As a longer-term fix, I updated the tmpwatch script to exclude the pattern '/tmp/passenger*' (see below).

#! /bin/sh
flags=-umc
/usr/sbin/tmpwatch "$flags" -x /tmp/.X11-unix -x /tmp/.XIM-unix \
        -x /tmp/.font-unix -x /tmp/.ICE-unix -x /tmp/.Test-unix \
        -X '/tmp/hsperfdata_*' -X '/tmp/passenger*' 10d /tmp
/usr/sbin/tmpwatch "$flags" 30d /var/tmp
for d in /var/{cache/man,catman}/{cat?,X11R6/cat?,local/cat?}; do
    if [ -d "$d" ]; then
        /usr/sbin/tmpwatch "$flags" -f 30d "$d"
    fi
done

Is there another solution that anyone else has found for this issue? I'm not a sysadmin (which is a big reason why I chose Elastic Beanstalk), so I would prefer not to hand-tweak the EC2 instance if at all possible, especially since more instances get spawned as my app scales.

ajporterfield
  • I've taken your code and added it to `.ebextensions/server-update.config` in my app like this: http://pastie.org/8463846, so new instances get it as well. – Chris Danek Nov 07 '13 at 21:50
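Such a config might look something like the sketch below. This is not the contents of the pastie above, just an illustration: the filename `tmpwatch.config` is arbitrary, and the `files:` key is standard `.ebextensions` syntax for writing a file onto each new instance. The script body mirrors the answer's modified tmpwatch script.

```yaml
# .ebextensions/tmpwatch.config -- a sketch, assuming an Amazon Linux AMI
# where /etc/cron.daily/tmpwatch is the script shown in the answer above.
files:
  "/etc/cron.daily/tmpwatch":
    mode: "000755"
    owner: root
    group: root
    content: |
      #! /bin/sh
      flags=-umc
      /usr/sbin/tmpwatch "$flags" -x /tmp/.X11-unix -x /tmp/.XIM-unix \
              -x /tmp/.font-unix -x /tmp/.ICE-unix -x /tmp/.Test-unix \
              -X '/tmp/hsperfdata_*' -X '/tmp/passenger*' 10d /tmp
      /usr/sbin/tmpwatch "$flags" 30d /var/tmp
```

Because it ships with the app, the exclusion is applied automatically to every instance the environment launches.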

Hopefully fixed in the next version :)

http://code.google.com/p/phusion-passenger/issues/detail?id=654

Christian

Have you considered uploading images directly to S3 instead? Uploads to the server kind of go against the spirit of Elastic Beanstalk (the file can be deleted if the instance vanishes, the next request could be received by a different instance, etc.). I'm not a sysadmin either, and I'm using Elastic Beanstalk for the same reason.

Basically, I'm trying to say that by moving to uploading directly into S3 you can leave your servers serving, your database storing your data, and your file store storing your files. Then hopefully you can be immune from this nonsense :)

Will
  • This happens even before Rails gets any action. Apparently Passenger saves uploaded files in this tmp directory before calling Rails. – Lasse Skindstad Ebert Sep 04 '13 at 13:18
  • Downvote because of above comment. Passenger allocates a temporary directory for the uploaded file to go. This doesn't answer the question. – joslinm Dec 17 '13 at 16:33
  • The file doesn't have to go to your web server *at all*, you can create a form which uploads the file *directly* to s3. This means your server has less to do and you're not passing the file through the load balancer etc. – Will Mar 13 '14 at 11:00