
I've already created a few migrations using Alembic; it's a really interesting tool.

Using it on a new DB would be straightforward enough. However, our use case is to apply these migrations to existing DBs, not only to bring them "up to speed" but also to correct drift: for example, if a table or column has been altered through the phpMyAdmin front-end, or if a table already exists, the migration should correct or create it as needed.

This would be perfect, as the revision ID created for that script could then be applied to any DB we have, correcting any inconsistencies without creating duplicates by skipping over whatever it deems to be correct.

As mentioned, a fresh database built using Alembic can be fixed via downgrade and reapplication, but I'm wondering whether the same can be done to an existing DB where Alembic is applied after the fact.

For reference, here's my first migration script, in case it's useful.

"""create account table

Revision ID: bd4ec9e8afe8
Revises: 
Create Date: 2019-10-29 15:25:39.736234

"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects.mysql import TINYINT


# revision identifiers, used by Alembic.
revision = 'bd4ec9e8afe8'
down_revision = None
branch_labels = None
depends_on = None


def upgrade():

    op.create_table(
        'account',
        sa.Column('id', sa.Integer, primary_key=True, nullable=False),
        sa.Column('title', sa.String(150), nullable=False),
        sa.Column('description', sa.Text, nullable=False),
        sa.Column('uploads_allowed', TINYINT(), nullable=True),
        sa.Column('viewers_contact_details', TINYINT(), default=0),
    )

    op.alter_column('account', 'id', existing_type=sa.Integer(), nullable=False, autoincrement=True)



    # op.execute("""
    # CREATE TABLE `account` (
    # `id` int(11) NOT NULL,
    # `title` varchar(150) NOT NULL,
    # `description` text,
    # `uploads_allowed` tinyint(1) DEFAULT '0',
    # `viewers_contact_details` tinyint(1) DEFAULT '0'
    # );
    # """)




def downgrade():
    op.drop_table('account')
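The "check before create" behaviour I'm after could be expressed with a guard like the following sketch, which uses SQLAlchemy's inspector (`table_exists` is a helper name I've made up here, not an Alembic API):

```python
import sqlalchemy as sa


def table_exists(bind, name):
    """True if `name` is already a table in the connected database."""
    return name in sa.inspect(bind).get_table_names()


# Inside a migration this could guard the create, e.g.:
#     if not table_exists(op.get_bind(), 'account'):
#         op.create_table('account', ...)
```

The downside, as the accepted approach below shows, is that every migration then needs its own hand-written checks.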
  • Some relevant info can be found here: https://groups.google.com/forum/m/#!topic/sqlalchemy-alembic/2HJ9J6PiQsk – SuperShoot Oct 31 '19 at 10:36
  • Look here https://stackoverflow.com/a/55058566/3185459 – RomanPerekhrest Oct 31 '19 at 10:43
  • @Roman I'd come across that thread before, though I was unable to get it to work. Inspector.from_engine(conn) is not recognised as a valid module and does not run. This is why I'm trying to find an alternate path. – Lukabratzee Oct 31 '19 at 10:59
  • @Supershoot thank you for the link. It seems that this would be a great solution, though I'm unsure of this line: "A straightforward way to do this would be to take the MetaData object from your application, after all the Table metadata is present in it for your app." Is this saying that it will read the entire DB and create its own schema? – Lukabratzee Oct 31 '19 at 11:07
    Alembic compares your metadata with the database, if table not in metadata but in database, it will try to drop the table, if table defined in metadata but not in database, it will generate a create table statement. So you need to get the existing state of your DB in metadata - either build models to represent all existing tables ( something like https://pypi.org/project/sqlacodegen/ can help) or metadata reflection as discussed in my previous link. – SuperShoot Oct 31 '19 at 11:51
  • @Supershoot wow! I used sqlacodegen, that was fantastic. I outputted the file and called it *schema.py*. Using autogenerate, how can I now use that to detect changes and/or generate automatic migrations? I read the autogenerate tutorial, but this gave me pause: from myapp import mymodel target_metadata = mymodel.Base.metadata https://alembic.sqlalchemy.org/en/latest/autogenerate.html – Lukabratzee Oct 31 '19 at 15:25
  • @supershoot I've finally got it all working. I figured I'd select you as best answer but I can only updoot your comment. sqlacodegen was the answer to getting a model; generating a .py file from it, placing that inside a folder, and sys.path-including it in env.py fixed everything. Thank you! – Lukabratzee Nov 01 '19 at 13:51

2 Answers


So my problem was that I couldn't detect DB changes without using if/else and try/except blocks in every single migration. Whilst this worked, it could get confused and break entirely if the query I was using to check the DB wasn't correct or didn't return exactly what was expected.

The solution was to create a model of my DB and use that as the metadata from which Alembic's --autogenerate works.

@Supershoot pointed me toward sqlacodegen. It creates a model of whatever DB you point it at. Using this, I was able to modify my env.py like so:

from logging.config import fileConfig

from sqlalchemy import engine_from_config
from sqlalchemy import pool

from alembic import context

import sys
from os.path import abspath, dirname
sys.path.insert(0, "/home/user/gitprojects/myproject/alembic/alembic/models")
import base


# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)

# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = base.Base.metadata
# target_metadata = None

My dir structure looks like this:

├── alembic
│   ├── env_manualmig.py
│   ├── env.py
│   ├── models
│   │   ├── base.py
│   │   └── __pycache__
│   │       └── base.cpython-37.pyc
│   ├── __pycache__
│   │   └── env.cpython-37.pyc
│   ├── README
│   ├── script.py.mako
│   └── versions
│       ├── bd4ec9e8afe8_create_account_table.py
│       ├── f05b82b25f44_base.py
│       ├── fada1a35f790_add_a_column.py
│       └── __pycache__
│           ├── bd4ec9e8afe8_create_account_table.cpython-37.pyc
│           ├── f05b82b25f44_base.cpython-37.pyc
│           └── fada1a35f790_add_a_column.cpython-37.pyc
├── alembic.ini
├── Pipfile
├── Pipfile.lock
└── sa

Using the command sqlacodegen mysql://u:p@host/name --outfile name_of_file.py generated the model.

And then, once I had checked the model was correct, alembic revision --autogenerate -m "base" generated an automatic migration script.

This compared the metadata it had to the DB it was pointed at and correctly detected missing or altered columns. You are still greatly encouraged to check that the autogenerated migration is correct, however, as there may be discrepancies that Alembic does not pick up on.
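As SuperShoot mentioned in the comments, an alternative to generating models with sqlacodegen is to reflect the existing schema into a MetaData object directly. A minimal sketch (`reflect_metadata` is my own helper name, and the engine URL shown in the comment is a placeholder):

```python
import sqlalchemy as sa


def reflect_metadata(engine):
    """Load the live database schema into a fresh MetaData object."""
    metadata = sa.MetaData()
    metadata.reflect(bind=engine)
    return metadata


# In env.py you could then point Alembic at it:
#     engine = sa.create_engine("mysql://u:p@host/name")
#     target_metadata = reflect_metadata(engine)
```

The trade-off is that reflection describes the database as it is now, whereas a generated model file gives you a fixed target to diff against.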


You can create a new revision using:

$ alembic revision -m "some description for the revision"

Then a revision file with a new revision ID will be created.

Populate the upgrade and downgrade functions as per your changes, then run the command below:

$ alembic upgrade head

This will make changes to the existing database.

    Thank you for your reply. I do that to make my revisions and use head to upgrade to the latest revision. However, my problem stems from issuing a create table command for example, when a table may already exist. I'd like Alembic to check "is table there already? Yes? Skip" and I'm not sure it does that by default. – Lukabratzee Oct 31 '19 at 10:46