
I am aware of some potential solutions, but they all feel awful to me.

  1. In the pipeline (GitHub Actions), run a one-off task on Fargate to migrate the DB before the deployments.
  2. Publish some kind of CloudFormation event as a deploy hook and use it as a Lambda trigger; the Lambda will run the migration.
  3. Leverage Laravel's scheduler with onOneServer() to continually check whether a migration is necessary.
  4. [problem, no good] Docker entrypoint command to run DB migrations on task startup. (Bad: all instances will probably try to migrate the DB in quick succession.)

Each of these has various things I dislike.

  1. This one migrates the DB and then deploys. If the deploy fails, the DB is already migrated, and to fix it I would have to somehow run a DB migration rollback in the pipeline after the failure. It also feels bad to rely on one-off tasks run through the pipeline in general.
  2. This one has more moving parts than should be necessary, and multiple points of failure: the CloudFormation event and the Lambda function. Also, the deploy seems like it would be the event trigger, which means the deploy could succeed, the Lambda DB migration could fail, and the pipeline would be unaware of it, requiring a manual rollback of the deployment.
  3. This one feels hacky, and yet it seems to have the fewest moving parts and the least entropy. The major downside is that it would essentially require a 1/min cron spamming php artisan migrate (with nothing to migrate) so that it catches deploys that include migrations. The benefit is that onOneServer() should actually solve the core concern: we don't want multiple instances to all try to migrate the database on a deploy, just one. It also links the deploy and the migrations: if the deploy fails, there is no migration yet, and if the migration fails, it is at least easy to roll the service back to the older task version. Fewer moving parts are involved. The resource overhead of running php artisan migrate every minute with nothing to do should be very small / not noticeable, but it still bothers me how inefficient it is resource-wise. (A sketch of this scheduler approach follows this list.)
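
For illustration, here is a minimal sketch of what solution 3 might look like in app/Console/Kernel.php. This is an assumption-laden sketch, not something from my running setup: the every-minute frequency is just the polling interval described above, and onOneServer() needs a cache store that supports atomic locks (e.g. Redis or Memcached) shared by all instances in order to take its lock.

    // app/Console/Kernel.php -- illustrative only
    protected function schedule(Schedule $schedule)
    {
        // Apply any pending migrations, at most once per minute, on a single instance.
        // --force skips the interactive confirmation prompt in production.
        $schedule->command('migrate --force')
                 ->everyMinute()
                 ->onOneServer()          // requires a shared cache driver with atomic locks (e.g. Redis)
                 ->withoutOverlapping();  // don't start a new run while a migration is still in progress
    }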

Is there another solution out there? I anticipate someone may suggest controlling instances with env variables, but I don't want to do that either. If we deploy and have 3 instances running, they should all be updated and all be in 'the same' state. Otherwise, I'd have to make a 2nd service that also runs 24/7 to check for migrations as its own special job. I guess that is solution 5:

  5. Have a separate service task, apart from the request-handling instances, that runs 24/7 and whose only job is to run crons and migrate the DB after deploys. This also sucks, though, because you have a task running 24/7 to check for deploys, which are not that frequent.

I think solution 3 is my preferred one, despite its resource overhead. I would love to hear some insight from others on this problem. I am in a situation where this pipeline really should be easy for non-ops people to deal with if I get hit by a bus. Keeping it simple inside the Laravel app code seems to fit that requirement. I know there are scheduled task / CloudFormation event solutions, but keep in mind I have a big goal of as few moving parts and as little entropy as possible, within reason.

I have read every blog post and every Google hit I can find on this subject, and have not found a clear, obvious answer. I came up with solution 3 myself and don't see it suggested anywhere.

Possibly automating DB migrations in all circumstances is too ambitious, and a manual process should be developed and followed instead, especially if a DB migration contains a change that won't work on the old instances: migrating it before the deploy would temporarily break those.

Comments:

  • Hey, were you able to come up with something better in the end? I'd love to start a bounty on this question, maybe it will get a bit more exposure. – Gilbert Nwaiwu Apr 02 '22 at 16:26
  • I lean towards solution `1` now. I would tell developers: "If at all possible, do everything in your power to make migrations non-breaking and backwards compatible." This is annoying, but it is what it is. This way they won't ship changes that break old versions of the app, so even if the migration works but the deploy fails later, the app still runs. The migration could be run either as I said in idea `1` above, or possibly in a deploy step of the actual container booting up (but that means figuring out how to make it run on only 1 instance when multiple identical containers are deploying). – Base Desire Apr 03 '22 at 01:19
  • (I ran out of characters) I'm not sure; it is possible the DB could lock itself when 1 of the containers is deploying and a migration is running, if multiple containers all try to migrate the DB. If some containers could just skip the migration when the DB is locked, that might work too. It would need to be done in such a way that 5 identical containers being deployed don't break if they cannot migrate the DB and gracefully skip, and also so that migrations are not run more than once if the DB is not locked when the 1st container tries to migrate (in quick succession). – Base Desire Apr 03 '22 at 01:21
  • This is all speculation/theory; the one I went with was `1`. I speculate about the option of running it during the deployment of the containers (like in the Docker container startup) but haven't tried it myself. – Base Desire Apr 03 '22 at 01:23
  • It means we have to just accept and live with the problem of a migration finishing and the deploy itself failing, hence the importance of avoiding breaking changes in migrations where possible. – Base Desire Apr 03 '22 at 01:25
  • I think the 1st idea is also the least troublesome one. It's preferable to update the DB first, and in such a way that your old code is still compatible with the new structure. You do get the side effect of not really knowing what your current DB looks like just by looking at your source, but it's better than the other 2 ideas IMO. I think I will still raise the bounty; it's an interesting problem. – Gilbert Nwaiwu Apr 03 '22 at 18:27

2 Answers


Running database migrations before deployment (option 1) is the industry standard & what you should be doing, regardless of your cloud platform, database engine or application language.

The short and long answer is that DB migrations exist for fault tolerance: if for whatever reason you need to reverse your deployment, you know exactly what has happened and can roll back.

Most (if not all) migration tools, e.g. Entity Framework for .NET or Liquibase for Java, allow you to roll back a migration with a simple command. Laravel's migration system likewise lets you roll back migrations using php artisan migrate:rollback.
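
For instance, a rollback only works if the migration defines a down() step. A minimal, made-up example (Laravel 9-style anonymous migration; the orders table and tracking_number column are placeholders, not from your app):

    use Illuminate\Database\Migrations\Migration;
    use Illuminate\Database\Schema\Blueprint;
    use Illuminate\Support\Facades\Schema;

    return new class extends Migration
    {
        public function up(): void
        {
            Schema::table('orders', function (Blueprint $table) {
                $table->string('tracking_number')->nullable(); // additive, so old app code keeps working
            });
        }

        // This is what `php artisan migrate:rollback` executes to reverse the change.
        public function down(): void
        {
            Schema::table('orders', function (Blueprint $table) {
                $table->dropColumn('tracking_number');
            });
        }
    };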

A step in your pipelines before deployment should apply the database migrations. If deployment then fails for any reason, you should manually roll back.
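
As a rough sketch of what that step could look like in GitHub Actions (the cluster, task definition, subnet, security group and container names below are placeholders, not values from your setup): run the migration as a one-off Fargate task, wait for it to stop, and fail the job if the container exited non-zero.

    - name: Run database migrations
      run: |
        # All names and IDs below are placeholders; the container name must match the task definition.
        TASK_ARN=$(aws ecs run-task \
          --cluster my-cluster \
          --launch-type FARGATE \
          --task-definition my-app \
          --network-configuration "awsvpcConfiguration={subnets=[subnet-0abc],securityGroups=[sg-0abc],assignPublicIp=DISABLED}" \
          --overrides '{"containerOverrides":[{"name":"app","command":["php","artisan","migrate","--force"]}]}' \
          --query 'tasks[0].taskArn' --output text)
        aws ecs wait tasks-stopped --cluster my-cluster --tasks "$TASK_ARN"
        EXIT_CODE=$(aws ecs describe-tasks --cluster my-cluster --tasks "$TASK_ARN" \
          --query 'tasks[0].containers[0].exitCode' --output text)
        test "$EXIT_CODE" = "0"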

This is the intersection of your application & the database at an infrastructure level - unfortunately, expect some manual work to be needed if something fails.

– Ermiya Eskandary

Use database migrations:

php artisan migrate:fresh

this drops all tables and re-runs every migration from scratch

php artisan migrate:refresh

this rolls back all migrations and then re-runs them

php artisan migrate:rollback

this rolls back the last batch of migrations