I find the workflow for working with database migrations in a containerized environment confusing. I have a web API with an attached database; the API runs in one container and the database in another. The project's file structure is as follows:
.
├── docker-compose.yml
├── Dockerfile
├── app
│   ├── __init__.py
│   ├── database
│   │   ├── alembic/
│   │   ├── __init__.py
│   │   ├── db.py
│   │   └── models.py
│   ├── other
│   ├── source
│   └── files
├── other
└── files
In order for the API container to be able to access the database, the sqlalchemy.url in the alembic.ini file is set to:
postgresql://{username}:{password}@db:5432/{database}
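For completeness, db.py essentially just builds the SQLAlchemy engine and session against that same URL. A simplified sketch (the real file has a bit more in it):

# app/database/db.py (simplified sketch)
from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

# "db" is the Compose service name, so it only resolves inside the Compose network
DATABASE_URL = "postgresql://username:password@db:5432/database"

engine = create_engine(DATABASE_URL)
SessionLocal = sessionmaker(bind=engine)
Base = declarative_base()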
However, when I want to do a migration, for example to add a table column, I will change the model in app/database/models.py, change directory to app/database, and run alembic revision --autogenerate -m "Description". This is where the problem occurs; I get the error:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "db" to address: Name or service not known
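For reference, the kind of change in models.py is nothing unusual, just adding a column to an existing model. The names below are only illustrative, not my actual schema:

# app/database/models.py (illustrative example, not the real models)
from sqlalchemy import Column, Integer, String

from .db import Base  # Base = declarative_base() lives in db.py


class User(Base):
    __tablename__ = "users"

    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    # Newly added column that the autogenerated revision should pick up
    email = Column(String, nullable=True)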
If I change the hostname to localhost, the command works from my machine (since the Postgres port is published to the host), but then docker-compose breaks, because inside the Compose network the app container has to reference the database by its service name, db.
This workflow does not seem right. How do people work with databases in projects that use containers?
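One thing I have considered is letting an environment variable override the URL in alembic/env.py, so the containers can keep using db while I use localhost locally. A rough sketch (DATABASE_URL is just an example variable name), though I don't know whether this is the accepted approach:

# app/database/alembic/env.py (excerpt)
import os

from alembic import context

config = context.config

# If DATABASE_URL is set, it overrides sqlalchemy.url from alembic.ini,
# e.g. ...@db:5432/... inside the containers and ...@localhost:5432/... on my machine
database_url = os.environ.get("DATABASE_URL")
if database_url:
    config.set_main_option("sqlalchemy.url", database_url)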
The docker-compose.yml file looks like this:
version: "3"
services:
  db:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=username
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=database
  app:
    build: .
    command: bash -c "cd app/database && alembic upgrade head && cd ../.. && python app/main.py"
    volumes:
      - .:/code
    ports:
      - "5000:5000"
    depends_on:
      - db