
I'm using Amazon's Elastic Beanstalk and Django 1.8.2. Here are my container commands:

container_commands:
  01_wsgipass:
    command: 'echo "WSGIPassAuthorization On" >> ../wsgi.conf'
  02_makemigrations:
    command: "source /opt/python/run/venv/bin/activate && python manage.py makemigrations --merge --noinput"
    leader_only: true
  03_migrate:
    command: "source /opt/python/run/venv/bin/activate && python manage.py migrate --noinput"
    leader_only: true

For some reason the migrate command is being killed. All migrations work fine locally, even against a fresh database. This is the error that appears in eb-activity.log:

Synchronizing apps without migrations:
  Creating tables...
  Running deferred SQL...
  Installing custom SQL...
  Running migrations:
  Rendering model states.../bin/sh: line 1: 21228 Killed                  python manage.py migrate --noinput
   (ElasticBeanstalk::ExternalInvocationError)

Note: The same container commands were working fine without any issues on Elastic Beanstalk earlier. I tried the migrate command with --verbose 3 but didn't get any additional debug messages.

Any solutions? Thanks in advance.

Gobi Dasu
Babu
  • Two thoughts: do you get any more info in [cfn-init.log](http://qpleple.com/install-python-packages-on-elastic-beanstalk/) and have you looked at changing your [command timeouts](http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/events.common.commandtimeout.html)? – Peter Brittain Jul 05 '15 at 22:32
  • Yes, my timeout is already 1000 seconds. It doesn't look like a timeout error. I checked the error from /var/log/cfn-init-cmd.log, it shows the same error. No detailed debug logs available. – Babu Jul 06 '15 at 09:01
  • If you're getting no errors or other useful diagnostics from EBS, maybe something else is doing it? Have you considered that it might be the OS - e.g. are you a victim of [OOM killer](http://stackoverflow.com/questions/726690/who-killed-my-process-and-why)? – Peter Brittain Jul 07 '15 at 23:08
  • Okay. There were some problems with my migrations and I fixed it manually by ssh. But after that I ended up with this issue. http://stackoverflow.com/questions/31262031/aws-ebs-deploy-update-environment-operation-is-complete-but-with-errors-for-m?noredirect=1#comment50540545_31262031 – Babu Jul 08 '15 at 05:21
  • So my point is, AWS is not developer friendly when it comes to **troubleshooting** with the poor logging mechanism. Or if there is one log file to log the *ExternalInvocationError* command errors, it is not documented anywhere. – Babu Jul 08 '15 at 05:22
  • @Babu check your database state. Most probably there is some blocking, the database table might be locked. Try restarting the database server. – Aamir Rind Jul 08 '15 at 10:00
  • @Babu Since everything is working fine on the local machine, the chance that EBS itself is having problems is very low. So, as Peter Brittain suggests, check your RAM usage while executing these commands (it might be the OOM killer doing it). As a quick fix, increase your instance's RAM and re-run your process. – Haridas N Jul 10 '15 at 07:50
  • **Note**: You should never run `makemigrations` as part of the deployment. Migrations should be part of version control and should be committed and only `migrate` should be run on deployment. You simply risk breaking subsequent deployments (The migration history no longer matches with the newly generated migration files) if you do it this way. – Abdul Aziz Barkat Sep 08 '22 at 17:31
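Following that advice, a safer variant of the container commands from the question drops the `makemigrations` step entirely: migration files are generated locally, committed to version control, and only applied on deploy. A sketch, reusing the same virtualenv path as in the question:

```yaml
container_commands:
  01_wsgipass:
    command: 'echo "WSGIPassAuthorization On" >> ../wsgi.conf'
  02_migrate:
    # makemigrations is run locally and the resulting files committed;
    # deployment only applies migrations that already exist in the repo
    command: "source /opt/python/run/venv/bin/activate && python manage.py migrate --noinput"
    leader_only: true
```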

2 Answers


AWS is not developer friendly when it comes to troubleshooting with the poor logging mechanism.

As an avid AWS user who recently evaluated EBS for a Django project, I totally agree with this, for the same reasons. I ended up going with Heroku for this and other reasons I won't go into, but I think the following pattern helps either way.

The steps to prepare your prod environment can go in different places; they don't have to happen on your target web-server environment.

I ended up pulling my make/migrate tasks out of my deployment automation and into tasks that happen just before it. The only things that happen in my target web-server environment are directly related to the code on that server.

In other words: if you have a CI tool for builds/tests, I recommend pulling your make/migrate and any other environment prep outside your web server into your deployment pipeline. Something like:

  • Checkout code
  • Run tests (including make/migrate on an ephemeral database to exercise it, if possible)
  • Put app in maintenance mode (or similar, if required)
  • Snapshot database
  • Make/Migrate on production
  • Deploy
  • If the deploy fails, roll back the DB and roll back the app.
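The pipeline above could be sketched as a deploy script. Everything here is a placeholder (hypothetical step names; you would swap in your real checkout, test, snapshot, and deploy tooling):

```shell
#!/bin/sh
# Sketch of the deploy pipeline; each step is a stub that would be
# replaced with the real CI/DB/deploy command for that stage.
set -u

step() {
    echo "==> $1"
    # the real command for this step would run here;
    # a non-zero exit status would signal failure
}

step "checkout"        # fetch the code
step "test"            # run tests, incl. make/migrate on a throwaway DB
step "maintenance-on"  # put the app in maintenance mode if required
step "db-snapshot"     # snapshot the production database
if step "migrate" && step "deploy"; then
    step "maintenance-off"
else
    step "rollback-db"   # restore the snapshot
    step "rollback-app"  # redeploy the previous release
fi
```

The point of the `if … else` is that the database rollback and app rollback are tied together: a failed deploy never leaves a migrated schema under old code.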

Then you are separating the concerns of automating your app server from automating the rest of your prod environment, and letting your CI handle the latter. You could handle them in the same place, but clearly it's a bit clunky to do that using EBS's facilities.

alph486

My migrations were being killed because the memory reserved in the Dockerrun.aws.json file was too low. The example provided in the documentation gave "128" as a sample value, and I had just used that. Increasing the value for "memory" resolved the problem.

e.g. Dockerrun.aws.json excerpt:

  "containerDefinitions": [
    {
      "name": "php-app",
      "image": "php:fpm",
      "essential": true,
      "memory": 512,
      // ... etc. ...
    }
  ]
kaapstorm
  • I have been hit by that too. Usually while testing we use small or micro instances, but they are not enough to run the migrate scripts. The same applies to running Node.js apps. So it's recommended to try a medium instance to see if that fixes the issue. – Babu Dec 20 '18 at 14:51
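If you suspect the same cause, the OOM killer mentioned in the comments leaves a trace in the kernel log, which you can check over SSH. A sketch; the exact message text varies by kernel version, and the syslog path is an assumption (Amazon Linux typically uses /var/log/messages):

```shell
# Check whether the kernel's OOM killer terminated the process.
# "|| true" keeps the script going when nothing matches or the
# command is unavailable in the current environment.
dmesg 2>/dev/null | grep -i -E 'killed process|out of memory' || true
grep -i 'oom' /var/log/messages 2>/dev/null || true
```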