I have a Flask application with a PostgreSQL database, Alembic migrations, and SQLAlchemy.
Recently I started writing an integration test against the database. I import a model, say Item, that is mapped to the table "item"; doing "from models import Item" triggers construction of the SQLAlchemy metadata for my tables.
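For context, a minimal sketch of what I mean (the model is illustrative, not my actual work code): importing the module executes the class definition, which registers the table on the Base's shared MetaData.

```python
# models.py -- illustrative sketch only
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()  # creates a MetaData instance internally

class Item(Base):
    __tablename__ = 'item'  # added to Base.metadata.tables at import time
    id = Column(Integer, primary_key=True)
    name = Column(String)
```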
In my test setup, I have

    import os

    from alembic import command
    from alembic.config import Config

    @classmethod
    def setUpClass(cls):
        # Start each run from a clean test database file
        try:
            os.remove('testdb.db')
        except FileNotFoundError:
            pass
        # Run db migrations
        global db_manager
        db_manager = DatabaseManager()
        alembic_cfg = Config("./alembic.ini")
        alembic_cfg.attributes['db_manager'] = db_manager
        command.upgrade(alembic_cfg, "head")
This results in

    sqlalchemy.exc.InvalidRequestError: Table 'item' is already defined for this MetaData instance. Specify 'extend_existing=True' to redefine options and columns on an existing Table object.
I have debugged this far: the metadata object is the same one between the two calls, so the "item" table gets accumulated into its tables collection twice.
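The error itself is easy to reproduce in isolation by defining the same table twice against one MetaData instance, which I believe is exactly what happens when both the model import and the migration run against the same object:

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.exc import InvalidRequestError

metadata = MetaData()
Table('item', metadata, Column('id', Integer, primary_key=True))

try:
    # Second definition on the same, already-populated MetaData
    Table('item', metadata, Column('id', Integer, primary_key=True))
except InvalidRequestError as exc:
    print(exc)  # Table 'item' is already defined for this MetaData instance...
```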
I have another, pretty much identical application where this setup works, so I know it should work in theory. In that application, the metadata objects in the import phase and in the upgrade phase differ, so the tables collection is empty when Alembic runs the upgrade, and hence there is no error.
Sorry I can't provide actual code, work project. Might be able to construct a minimal toy example if I find the time.
If I understood where the metadata actually gets created inside SQLAlchemy, I might be able to track down why alembic gets a clean metadata instance in the working app, and not in the problem app.
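This is how I checked it, roughly (a self-contained sketch; in the real apps the second id() would be printed from Alembic's env.py against whatever it uses as target_metadata):

```python
from sqlalchemy import Column, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = 'item'
    id = Column(Integer, primary_key=True)

# The MetaData the model import populated; printing id() here and in
# env.py shows whether both phases share the same instance.
print(id(Base.metadata), sorted(Base.metadata.tables))

# A mapped class always points back at its Base's MetaData, so if env.py
# ends up with this same object, re-executing the model definitions
# raises the "already defined" error.
assert Item.__table__.metadata is Base.metadata
```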
In the working application, "extend_existing" is not set, and I'd rather not resort to a hack that masks the underlying issue.