We have a Postgres database with two schemas, one for dev and one for prod. We would like to migrate changes from our SQLAlchemy ORM to dev first, test them, and then migrate them to prod. But Alembic doesn't really support this workflow out of the box.
I started by just changing the SCHEMA constant in my SQLAlchemy ORM file. This works well for a single schema, except that every autogenerated revision is huge even when I only add or delete a column: it always drops all the constraints and immediately re-creates them. Is there some way to deactivate this? It is not a big problem in itself, but it makes the revisions practically unreadable. The real issue: if I then change the schema in the ORM file and run alembic upgrade head again, the tables do get created correctly in the second schema, but no alembic_version table is created there. So every change I make in the first schema and then try to apply to the second one fails with errors such as "Database not up to date".
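The drop-and-recreate constraint churn in autogenerate typically comes from constraints whose names in the database don't match what Alembic computes from the model. Whether that is the cause here is an assumption, but giving MetaData an explicit naming_convention (and migrating existing constraints to those names once) usually stops it. A minimal sketch with hypothetical name patterns:

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.schema import CreateTable

# Hypothetical convention; adapt the patterns to your project.
convention = {
    "ix": "ix_%(column_0_label)s",
    "uq": "uq_%(table_name)s_%(column_0_name)s",
    "fk": "fk_%(table_name)s_%(column_0_name)s",
    "pk": "pk_%(table_name)s",
}
metadata = MetaData(naming_convention=convention)

# Constraints now get deterministic names instead of database-generated ones,
# so autogenerate can match them and stops emitting drop/create pairs.
parent = Table("parent", metadata, Column("id", Integer, primary_key=True))
ddl = str(CreateTable(parent))
print(ddl)  # DDL contains "CONSTRAINT pk_parent PRIMARY KEY (id)"
```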
I also tried the approach from the docs: https://alembic.sqlalchemy.org/en/latest/cookbook.html#rudimental-schema-level-multi-tenancy-for-postgresql-databases I removed the SCHEMA from my table definitions as described there, but then no tables were created at all; I got an error like "can't find Table None.MyTable" (apparently because no schema is defined at that point).
Trying the changes from "Alembic support for multiple Postgres schemas" again led to Alembic touching schemas it wasn't supposed to. I would appreciate any help.
My env.py file:
from logging.config import fileConfig

from sqlalchemy import engine_from_config
from sqlalchemy import pool

from alembic import context

import sys
sys.path.append('..\\model')
from model.model import metadata, SCHEMA

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
if config.config_file_name is not None:
    fileConfig(config.config_file_name)

# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = metadata

# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.


def run_migrations_offline() -> None:
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.
    """
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
    )

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online():
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
        future=True,
    )

    current_tenant = context.get_x_argument(as_dictionary=True).get("tenant")
    with connectable.connect() as connection:
        connection.dialect.default_schema_name = current_tenant

        context.configure(
            connection=connection,
            target_metadata=target_metadata,
        )

        with context.begin_transaction():
            context.run_migrations()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
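One way to give each schema its own alembic_version table is to drive the target schema from the command line (alembic -x tenant=dev upgrade head) and combine schema_translate_map with version_table_schema. This is a sketch, not a confirmed fix: it replaces the default_schema_name trick above with the translate-map approach, assumes schema-less table definitions, and adds an include_name filter so autogenerate stays out of other schemas. Autogenerate with translate maps has its own caveats, so test against dev first:

```python
def run_migrations_online() -> None:
    # Choose the target schema on the command line:
    #   alembic -x tenant=dev upgrade head
    current_tenant = context.get_x_argument(as_dictionary=True).get("tenant")
    if current_tenant is None:
        raise ValueError("please pass a schema with -x tenant=<schema>")

    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
        future=True,
    )

    with connectable.connect() as connection:
        # Map schema-less tables in the model onto the chosen schema.
        connection = connection.execution_options(
            schema_translate_map={None: current_tenant}
        )

        def include_name(name, type_, parent_names):
            # Keep autogenerate from scanning or altering other schemas.
            if type_ == "schema":
                return name == current_tenant
            return True

        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            include_schemas=True,
            include_name=include_name,
            # Put alembic_version inside the tenant schema, so dev and
            # prod each track their own migration state.
            version_table_schema=current_tenant,
        )

        with context.begin_transaction():
            context.run_migrations()
```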
A class definition from the ORM:
from sqlalchemy import BigInteger, Column, MetaData, String, Text
from sqlalchemy.orm import declarative_base

SCHEMA = 'my_scheme'

metadata = MetaData(schema=SCHEMA)
Base = declarative_base(metadata=metadata)


class CertScheme(Base):
    __table_args__ = {'schema': SCHEMA}
    __tablename__ = 'CertScheme'

    id = Column(BigInteger, primary_key=True)
    name = Column(String(20), nullable=False)
    company = Column(String(70))
    company_url = Column(String(100))
    description = Column(Text)
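For the cookbook's multi-tenancy recipe to work, the model must not hard-code a schema anywhere: neither on MetaData nor in __table_args__. A sketch of the same class with the schema removed, so that schema_translate_map={None: tenant} in env.py can decide the target schema at connect time:

```python
from sqlalchemy import BigInteger, Column, MetaData, String, Text
from sqlalchemy.orm import declarative_base

# No schema here: the tenant schema is injected at connect time via
# schema_translate_map={None: "<tenant>"} in env.py.
metadata = MetaData()
Base = declarative_base(metadata=metadata)


class CertScheme(Base):
    __tablename__ = 'CertScheme'

    id = Column(BigInteger, primary_key=True)
    name = Column(String(20), nullable=False)
    company = Column(String(70))
    company_url = Column(String(100))
    description = Column(Text)


# The table carries no schema of its own, which is what makes the
# translate map (and the "None.MyTable" error, if it is missing) apply.
print(CertScheme.__table__.schema)  # None
```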