
A pet example to demonstrate what I want.

Say I have a table User that looks like this:

from datetime import datetime
from sqlalchemy import Column, DateTime, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    created_on = Column(DateTime, default=datetime.utcnow)

I'm working on a feature and decide it is no longer relevant to track the created_on datetime, so I remove that column and generate an auto revision. I run alembic upgrade head and voilà. A while later I realise that we actually still need the created_on column, so I add it back, create another revision and upgrade again.
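For concreteness, the two autogenerated revision scripts would contain roughly the following (the revision identifiers, file names and down_revision values here are invented for illustration):

# versions/aaaa_remove_created_on.py -- first autogenerated revision (ids invented)
from alembic import op
import sqlalchemy as sa

revision = 'aaaa'
down_revision = '0000'  # whatever the previous revision was

def upgrade():
    op.drop_column('user', 'created_on')

def downgrade():
    op.add_column('user', sa.Column('created_on', sa.DateTime(), nullable=True))

# versions/bbbb_readd_created_on.py -- second autogenerated revision (imports as above)
revision = 'bbbb'
down_revision = 'aaaa'

def upgrade():
    op.add_column('user', sa.Column('created_on', sa.DateTime(), nullable=True))

def downgrade():
    op.drop_column('user', 'created_on')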

The feature is done and I want to move it to production. I replace the dev database URL stored in sqlalchemy.url with my production database URL. If I now run alembic upgrade head, it will 'replay' the last two revisions: drop the column (including all of its data) and then add it back, so the column ends up filled with NULLs. What I want instead is for no changes to be made to the created_on column at all.
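The URL switch itself is just the one setting in alembic.ini (an illustrative fragment; both URLs are placeholders):

[alembic]
script_location = alembic
# dev was:  sqlalchemy.url = sqlite:///dev.db
sqlalchemy.url = postgresql://app:secret@prod-host/app_db

Against that URL, alembic upgrade head walks production through both revisions sketched above in order, which is exactly the drop-and-recreate I want to avoid.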

I hope this example demonstrates why this is not a viable setup for a dev/prod split using Alembic. How can I achieve what I want (i.e. not replaying all changes, just the changes necessary to reflect the latest data model)? Alternatively, how can I improve this dev/prod split setup in Alembic?

RogerKint
  • Edit the revision scripts to remove the `op.drop_table()` from the earlier revision and the `op.create_table()` from the later revision. – Gord Thompson Jan 20 '23 at 12:41
  • @GordThompson Thanks for the response. Is this really the best way? Then it couldn't be used in an automated CI/CD pipeline for example. – RogerKint Jan 20 '23 at 12:48

0 Answers