I don't generally post here, so forgive me if anything is not up to code. I have built a micro-service to run database migrations using Flask-Migrate/Alembic, and it has been a very good option for the group I am working with. Until very recently the micro-service could be deployed easily by pointing it at different databases and running upgrades, but the flask db upgrade command has now stopped working inside the Docker container. As shown below, I am also using alembic-utils to handle aspects of the migrations that flask-migrate does not cover on its own, such as views and materialized views.
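For context, the views are declared with alembic-utils roughly as in the sketch below. This is only an illustration: the view name, its SQL definition, and the assumption that entities are registered in migrations/env.py are placeholders, not my actual code.

# migrations/env.py (illustrative sketch only)
from alembic_utils.pg_view import PGView
from alembic_utils.replaceable_entity import register_entities

# hypothetical view definition; the real ones live in the service's models
example_view = PGView(
    schema="public",
    signature="example_view",
    definition="SELECT id, name FROM example_table",
)

# make the view visible to Alembic's autogenerate support
register_entities([example_view])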
Dockerfile:
FROM continuumio/miniconda3
COPY ./ ./
WORKDIR /dbapp
RUN conda update -n base -c defaults conda -y
RUN conda env create -f environment_py38db.yml
RUN chmod +x run.sh
ENV PATH /opt/conda/envs/py38db/bin:$PATH
RUN echo "source activate py38db" > ~/.bashrc
RUN /bin/bash -c "source activate py38db"
ENTRYPOINT [ "./run.sh" ]
run.sh:
#!/bin/bash
python check_create_db.py
flask db upgrade
environment_py38db.yml:
name: py38db
channels:
  - defaults
  - conda-forge
dependencies:
  - Flask==2.2.0
  - Flask-Migrate==3.1.0
  - Flask-SQLAlchemy==3.0.2
  - GeoAlchemy2==0.12.5
  - psycopg2
  - boto3==1.24.96
  - botocore==1.27.96
  - pip
  - pip:
      - retrie==0.1.2
      - alembic-utils==0.7.8
EDITED TO INCLUDE OUTPUT:
from inside the container:
(base) david@<ip>:~/workspace/dbmigrations$ docker run --rm -it --entrypoint bash -e PGUSER="user" -e PGDATABASE="trial_db" -e PGHOST="localhost" -e PGPORT="5432" -e PGPASSWORD="pw" --net=host migrations:latest
(py38db) root@<ip>:/dbapp# python check_create_db.py
successfully created database : trial_db
(py38db) root@<ip>:/dbapp# flask db upgrade
(hangs here with no further output)
from the local environment:
(py38db) david@<ip>:~/workspace/dbmigrations/dbapp$ python check_create_db.py
database: trial_db already exists: skipping...
(py38db) david@<ip>:~/workspace/dbmigrations/dbapp$ flask db upgrade
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> 41f5be29ae44, initital migration to generate tables
INFO [alembic.runtime.migration] Running upgrade 41f5be29ae44 -> 34c067400f6b, add materialized views <. . .>
INFO [alembic.runtime.migration] Running upgrade 34c067400f6b -> 34c067400f6b_views, add <. . .>
INFO [alembic.runtime.migration] Running upgrade 34c067400f6b_views -> b51d57354e6c, add <. . .>
INFO [alembic.runtime.migration] Running upgrade b51d57354e6c -> 97d41cc70cb2, add-functions
(py38db) david@<ip>:~/workspace/dbmigrations/dbapp$
As the output shows, flask db upgrade hangs inside the Docker container but completes normally in my local environment. Both environments read the database parameters from the same environment variables, and those are clearly being picked up correctly, since check_create_db.py connects and runs in both cases. I can share more of the code if it helps to figure this out.
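For reference, the app's SQLAlchemy URI is assembled from those same PG* variables, roughly along the lines of the sketch below. The function name and the config wiring here are illustrative assumptions, not the actual code from the service.

# illustrative sketch of how the connection URI could be built from the PG* variables
import os

def database_uri():
    # assemble a SQLAlchemy URI from the same environment variables
    # that check_create_db.py reads
    return (
        f"postgresql://{os.environ['PGUSER']}:{os.environ['PGPASSWORD']}"
        f"@{os.environ['PGHOST']}:{os.environ['PGPORT']}/{os.environ['PGDATABASE']}"
    )

# e.g. app.config["SQLALCHEMY_DATABASE_URI"] = database_uri()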
For good measure, here is the Python script:
check_create_db.py
import psycopg2
import os


def recreate_db():
    """Checks whether the database named by the environment variables already
    exists and creates it if it does not.
    """
    connection = None  # ensure the name is defined even if connect() fails
    try:
        # print statements would be replaced by the python logging module
        connection = psycopg2.connect(
            user=os.environ["PGUSER"],
            password=os.environ["PGPASSWORD"],
            host=os.environ["PGHOST"],
            port=os.environ["PGPORT"],
            dbname='postgres'
        )
        connection.set_session(autocommit=True)
        with connection.cursor() as cursor:
            cursor.execute(f"SELECT 1 FROM pg_catalog.pg_database WHERE datname = '{os.environ['PGDATABASE']}'")
            exists = cursor.fetchone()
            if not exists:
                cursor.execute(f"CREATE DATABASE {os.environ['PGDATABASE']}")
                print(f"successfully created database : {os.environ['PGDATABASE']}")
            else:
                print(f"database: {os.environ['PGDATABASE']} already exists: skipping...")
    except Exception as e:
        print(e)
    finally:
        if connection:
            connection.close()


if __name__ == "__main__":
    recreate_db()