I've read the documentation on database routers in Django 2.2. I more or less understand the concept - including keeping certain tables together - except that it seems somewhat tricky to do in practice.
My instinct when things are complicated and interdependent is to use unit tests to gradually get my code to return the expected results. Except that I don't know how to write tests in this case.
Databases in settings:
DATABASES = {
    "default": {
        "ENGINE": constants.POSTGRES_ENGINE,
        "NAME": constants.SYSDBNAME,
        ...
    },
    "userdb": {
        "ENGINE": constants.POSTGRES_ENGINE,
        "NAME": constants.USERDBNAME,
    },
}
Routers in settings:
DATABASE_ROUTERS = [
    # the user database - userdb - which gets the auths and user-related stuff
    "bme.websec.database_routers.UserdbRouter",
    # the default database - sysdb - gets most of the other models
    "bme.websec.database_routers.SysdbMigrateRouter",
]
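For reference, the routers look roughly like this. This is a simplified sketch, not my exact code: the USERDB_APPS set and the app labels in it are illustrative placeholders.

# database_routers.py - simplified, illustrative sketch
USERDB_APPS = {"auth", "contenttypes", "sessions"}  # placeholder app labels

class UserdbRouter:
    """Route user-related apps to userdb; pass on everything else."""

    def db_for_read(self, model, **hints):
        if model._meta.app_label in USERDB_APPS:
            return "userdb"
        return None  # None means "no opinion, ask the next router"

    def db_for_write(self, model, **hints):
        if model._meta.app_label in USERDB_APPS:
            return "userdb"
        return None

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        if app_label in USERDB_APPS:
            return db == "userdb"
        return None

class SysdbMigrateRouter:
    """Everything not claimed above migrates to default (sysdb)."""

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        return db == "default"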
Ideally I would use unit tests to submit all my models one by one to the routers' allow_migrate, db_for_read, and db_for_write methods, and for each call verify that I got the expected result.
But is there a way to do that? I suppose I can use

models = django.apps.apps.get_models(
    include_auto_created=True, include_swapped=True
)

and then drive those method calls from unit tests.
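Something like this sketch is what I have in mind; the EXPECTED_DB mapping and the websec app label are hypothetical stand-ins for my real expectations:

import django.apps
from django.test import SimpleTestCase

from bme.websec.database_routers import UserdbRouter

# Hypothetical expectations: which database alias each app's models belong to.
EXPECTED_DB = {"auth": "userdb", "websec": "default"}

class UserdbRouterTests(SimpleTestCase):
    """Call one router class directly, model by model."""

    def test_db_for_read(self):
        router = UserdbRouter()
        models = django.apps.apps.get_models(
            include_auto_created=True, include_swapped=True
        )
        for model in models:
            expected = EXPECTED_DB.get(model._meta.app_label)
            # UserdbRouter should claim userdb models and return None
            # (no opinion) for everything else.
            self.assertEqual(
                router.db_for_read(model),
                "userdb" if expected == "userdb" else None,
            )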
But to take just

def allow_migrate(self, db, app_label, model_name=None, **hints):

how do I know when to have **hints and whether model_name is always provided or not? And most importantly, how do I simulate what the master router ultimately decides, as stated in the doc (my emphasis)? Among other things, it doesn't rely on just one router: it calls both in succession and then "does things" if my custom routers return None, so unit testing my routers by calling them individually doesn't really replicate the master router's behavior.
The master router is used by Django’s database operations to allocate database usage. Whenever a query needs to know which database to use, it calls the master router, providing a model and a hint (if available). Django then tries each router in turn until a database suggestion can be found. If no suggestion can be found, it tries the current _state.db of the hint instance. If a hint instance wasn’t provided, or the instance doesn’t currently have database state, the master router will allocate the default database.
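From poking around Django's source, it looks like the master router described above is exposed as django.db.router, a ConnectionRouter built from DATABASE_ROUTERS. If that's right, then testing against it, rather than against my router classes individually, would exercise the full chain, including the fallback when both routers return None. For example, assuming (per the comments in my settings) that auth belongs in userdb:

from django.db import router  # the master ConnectionRouter
from django.test import SimpleTestCase

class MasterRouterTests(SimpleTestCase):
    def test_allow_migrate_auth(self):
        # Calls every router in DATABASE_ROUTERS in turn, then applies
        # the documented fallbacks if they all return None.
        self.assertTrue(router.allow_migrate("userdb", "auth", model_name="user"))
        self.assertFalse(router.allow_migrate("default", "auth", model_name="user"))

There also seems to be router.allow_migrate_model(db, model), which fills in app_label and model_name from the model itself.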
I've gotten db_for_read and db_for_write mostly to behave, but I am struggling to get migrations working correctly: most models end up in userdb rather than default.
So far, what I am doing is running migrations against two empty databases and using PostgreSQL to check where the tables were created. Drop the databases, adjust the routers, re-run. Is there a better way to unit test the actual decision-making of which database gets which model tables during migrations (write and read routing are nice-to-haves, not the primary reason for this question)?
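The closest I've come to automating that loop is the sketch below. It relies on the databases attribute that Django 2.2 added to test cases: the test runner then creates a test database per listed alias by running migrations, so where the tables land can be asserted with introspection instead of checked by hand in psql. The assumption that auth_user belongs in userdb is, again, just illustrative.

from django.db import connections
from django.test import TransactionTestCase

class MigrationPlacementTests(TransactionTestCase):
    # Django 2.2+: have the test runner create and migrate both test databases.
    databases = {"default", "userdb"}

    def test_auth_user_table_location(self):
        # Test databases are built by running the migrations, so table
        # placement here reflects what allow_migrate actually decided.
        self.assertIn("auth_user", connections["userdb"].introspection.table_names())
        self.assertNotIn("auth_user", connections["default"].introspection.table_names())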