I finally arrived at the following solution.

First, I accepted the idea of identifying the default object by an `isDefault` attribute, and wrote an abstract model to deal with it, keeping data integrity as much as possible (the code is at the bottom of this post).

What I don't like about the accepted solution is that data migrations get mixed with schema migrations. They are easy to lose, e.g. during squashing. Occasionally I also delete migrations altogether: when I'm sure all my production and backup databases are consistent with the code, I can generate a single initial migration and fake it. Keeping data migrations together with schema migrations breaks this workflow.
So I decided to keep all data migrations in a single file outside of the `migrations` package. I created `data.py` in my app package and put all the data migrations into a single function, `migratedata`, keeping in mind that this function can be called at early stages, when some models may not exist yet, so access to the apps registry has to catch the `LookupError` exception. I then use this function for every `RunPython` operation in the data migrations.
So the workflow looks like this (we assume `Model` and `ModelX` are already in place):

1) Create `ModelY`:

```python
class ModelY(Defaultable):
    y_name = models.CharField(max_length=255, default='ModelY')
```
2) Generate the migration: `manage.py makemigrations`
3) Add the data migration in `data.py` (in my case, add the model's name to the `defaultables` list):

```python
# data.py in myapp
def migratedata(apps, schema_editor):
    defaultables = ['ModelX', 'ModelY']
    for m in defaultables:
        try:
            M = apps.get_model('myapp', m)
            if not M.objects.filter(isDefault=True).exists():
                M.objects.create(isDefault=True)
        except LookupError as e:
            print('[{}: ignoring]'.format(e))

    # Owner model; must come after the defaults to support the
    # "squashed migrations over an empty database" scenario.
    Model = apps.get_model('myapp', 'Model')
    if not Model.objects.exists():
        Model.objects.create()
```
4) Edit the generated migration, adding a `RunPython` operation:

```python
from myapp.data import migratedata

class Migration(migrations.Migration):
    ...
    operations = [
        migrations.CreateModel(name='ModelY', ...),
        migrations.RunPython(migratedata, reverse_code=migratedata),
    ]
```
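For reference, here is a sketch of what the edited migration file might look like in full. The file name, the `dependencies` entry, and the field list are hypothetical placeholders I filled in for illustration; your `makemigrations` output will differ:

```python
# myapp/migrations/0002_modely.py  (file name hypothetical)
from django.db import migrations, models

from myapp.data import migratedata


class Migration(migrations.Migration):
    dependencies = [
        ('myapp', '0001_initial'),  # hypothetical previous migration
    ]

    operations = [
        migrations.CreateModel(
            name='ModelY',
            fields=[
                ('id', models.AutoField(primary_key=True)),
                ('isDefault', models.BooleanField(default=False)),
                ('y_name', models.CharField(max_length=255, default='ModelY')),
            ],
        ),
        # Run the shared data migration so the default ModelY row exists
        # before any ForeignKey with default=ModelY.default is added.
        migrations.RunPython(migratedata, reverse_code=migratedata),
    ]
```

The important part is that `RunPython` comes after `CreateModel`, so the historical `apps` registry passed to `migratedata` already knows about `ModelY`.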
5) Add a `ForeignKey(ModelY)` to `Model`:

```python
class Model(models.Model):
    # SET_DEFAULT ensures there will be no integrity issues,
    # but make sure the default object exists.
    y = models.ForeignKey(ModelY, default=ModelY.default, on_delete=models.SET_DEFAULT)
```
6) Generate a migration again: `manage.py makemigrations`

7) Migrate: `manage.py migrate`
8) Done!
The whole chain can be applied to an empty database: it will create the final schema and fill it with the initial data.
When we're sure that our DB is in sync with the code, we can easily remove the long chain of migrations, generate a single initial one, add `RunPython(migratedata, ...)` to it, and then migrate with `--fake-initial` (after deleting the `django_migrations` table).

Huh, such a tricky solution for such a simple task!
Finally, here is the `Defaultable` model source code:

```python
from django.db import models, IntegrityError
from django.db.models import Q
from django.db.models.signals import pre_delete


class Defaultable(models.Model):
    class Meta:
        abstract = True

    isDefault = models.BooleanField(default=False)

    @classmethod
    def default(cls):
        # type: (Type[Defaultable]) -> Optional[int]
        """
        Search for the default object of the given model.

        Returning None is useful when applying squashed migrations on an empty
        database; the ForeignKey with this default can still be non-nullable,
        as the return value is not used during migration if there is no model
        instance (Django does not push the returned default down to the SQL
        level).

        Note the only() call: this is a bit of a dirty hack to avoid problems
        during model evolution, since default() can be called in migrations
        within some historical project state. Ideally we should use the model
        from that historical apps registry, but we have no access to it
        globally.

        :return: Default object id, or None if there is none.
        """
        try:
            return cls.objects.only('id', 'isDefault').get(isDefault=True).id
        except cls.DoesNotExist:
            return None

    # Take care of data integrity.
    def save(self, *args, **kwargs):
        super(Defaultable, self).save(*args, **kwargs)
        if self.isDefault:
            # Ensure only one default: make all others non-default.
            self.__class__.objects.filter(~Q(id=self.id), isDefault=True).update(isDefault=False)
        elif not self.__class__.objects.filter(isDefault=True).exists():
            # Ensure at least one default exists.
            self.__class__.objects.filter(id=self.id).update(isDefault=True)

    def __init__(self, *args, **kwargs):
        super(Defaultable, self).__init__(*args, **kwargs)

        # noinspection PyShadowingNames,PyUnusedLocal
        def pre_delete_defaultable(instance, **kwargs):
            if instance.isDefault:
                raise IntegrityError("Cannot delete default object {}".format(instance.__class__.__name__))

        pre_delete.connect(pre_delete_defaultable, self.__class__, weak=False, dispatch_uid=self._meta.db_table)
```
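The two integrity rules enforced in `save()` can be illustrated without the ORM. Here is a minimal, self-contained sketch (the `Row` class and `save` function are mine, not part of the model above) that mimics the two update paths, "demote all other defaults" and "promote when no default exists":

```python
class Row:
    """Hypothetical ORM-free stand-in for a Defaultable instance."""
    def __init__(self, id, is_default=False):
        self.id = id
        self.is_default = is_default

def save(table, row):
    """Mimic Defaultable.save() after `row` has been written into `table`."""
    if row.is_default:
        # Ensure only one default: demote all others.
        for other in table:
            if other.id != row.id:
                other.is_default = False
    elif not any(r.is_default for r in table):
        # Ensure at least one default exists.
        row.is_default = True

table = []
a = Row(1)
table.append(a)
save(table, a)   # first object becomes the default automatically
b = Row(2, is_default=True)
table.append(b)
save(table, b)   # b takes over, a is demoted
print([r.is_default for r in table])  # → [False, True]
```

This is why, whatever order objects are created in, exactly one row per model ends up with `isDefault=True`.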