4

Some time ago, at work, we had to change our main system to be "cross-RDBMS". I'm not sure if that's the correct term, but basically the system worked only with MS SQL Server, and in order to accommodate a new client we had to make it work with both MS SQL Server and Oracle.

We don't use an ORM, for various reasons. Instead, we use a custom ADO-based data access layer.

Before this change, we relied heavily on stored procedures, database functions, triggers, etc. A substantial amount of business logic lived in the database itself.

We decided to get rid of all stored procedures, triggers and the like, and basically reduce the database to a mere storage layer.

To handle migrations, we created a .json file which contains a representation of our database schema: tables, columns, indexes, constraints, etc. A simple application was created to edit this file. By using it, we're able to edit existing tables and columns and add new ones.
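To give an idea, a trimmed-down, made-up example of what the file looks like (the table and column names are just for illustration, not our actual schema):

```json
{
  "tables": [
    {
      "name": "Customer",
      "columns": [
        { "name": "Id",   "type": "int",           "nullable": false, "identity": true },
        { "name": "Name", "type": "nvarchar(200)", "nullable": false }
      ],
      "primaryKey": [ "Id" ],
      "indexes": [
        { "name": "IX_Customer_Name", "columns": [ "Name" ], "unique": false }
      ]
    }
  ]
}
```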

This JSON file is versioned in our repository. When the application is executed, a routine reads the file, constructing a representation of the database in memory. It then reads the metadata from the actual database, compares it to the in-memory representation and generates scripts based on the differences found.

Finally, the scripts are executed, updating the database schema.
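A heavily simplified sketch of that compare-and-generate step, just to illustrate the idea (the `SchemaTable`/`SchemaColumn` types and the `ScriptGenerator` class are made up; the real routine also has to cover type changes, dropped columns, indexes, constraints and MSSQL/Oracle syntax differences):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical, simplified types mirroring the JSON schema file.
public record SchemaColumn(string Name, string SqlType, bool IsNullable);
public record SchemaTable(string Name, List<SchemaColumn> Columns);

public static class ScriptGenerator
{
    // Compares the desired schema (read from the JSON file) with the column
    // names read from the database metadata and emits an ALTER TABLE
    // statement for each column that doesn't exist yet.
    public static IEnumerable<string> GenerateAddColumnScripts(
        SchemaTable desired, IReadOnlyCollection<string> existingColumnNames)
    {
        foreach (var column in desired.Columns)
        {
            if (!existingColumnNames.Contains(column.Name, StringComparer.OrdinalIgnoreCase))
            {
                yield return $"ALTER TABLE {desired.Name} ADD {column.Name} {column.SqlType}"
                             + (column.IsNullable ? " NULL" : " NOT NULL");
            }
        }
    }
}
```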

So, now comes my real problem. When a new column is added, the developer needs to:

- add a new property to the POCO class that represents a row in that table;
- edit the method that maps the table columns to the class properties, adding the new column/property mapping;
- edit the class that handles database commands, adding a new parameter for the new column.
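To make it concrete, these are the kinds of edits I mean (the `Customer` table and the `Email` column are made-up examples, not our actual code):

```csharp
using System.Data;
using System.Data.Common;

// 1. The POCO gets a new property for the new column.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }   // newly added
}

public static class CustomerMapper
{
    // 2. The row-to-object mapping has to be edited as well.
    public static Customer Map(IDataRecord record) => new Customer
    {
        Id    = record.GetInt32(record.GetOrdinal("Id")),
        Name  = record.GetString(record.GetOrdinal("Name")),
        Email = record.GetString(record.GetOrdinal("Email"))   // newly added
    };

    // 3. And the command that writes the row needs a new parameter.
    // (Parameter prefix shown for SQL Server; Oracle uses ':' instead.)
    public static void AddInsertParameters(DbCommand command, Customer customer)
    {
        AddParameter(command, "@Id", customer.Id);
        AddParameter(command, "@Name", customer.Name);
        AddParameter(command, "@Email", customer.Email);        // newly added
    }

    private static void AddParameter(DbCommand command, string name, object value)
    {
        var parameter = command.CreateParameter();
        parameter.ParameterName = name;
        parameter.Value = value;
        command.Parameters.Add(parameter);
    }
}
```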

When this approach was initially implemented, I thought about auto-generating and updating the POCO classes based on changes in the JSON file. That would keep the classes in sync with the database schema, and we wouldn't have to worry about developers forgetting to update them after adding new columns.
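Roughly what I had in mind was a naive generator along these lines, reusing the made-up `SchemaTable` type from the sketch above (the real thing would need naming conventions, nullable types, partial classes for hand-written members, and so on):

```csharp
using System.Text;

public static class PocoGenerator
{
    // Naive sketch: emit one C# class per table described in the schema file.
    public static string Generate(SchemaTable table)
    {
        var builder = new StringBuilder();
        builder.AppendLine($"public class {table.Name}");
        builder.AppendLine("{");
        foreach (var column in table.Columns)
        {
            builder.AppendLine(
                $"    public {ToClrType(column.SqlType)} {column.Name} {{ get; set; }}");
        }
        builder.AppendLine("}");
        return builder.ToString();
    }

    // Very rough SQL-type-to-CLR-type mapping, covering both MSSQL and Oracle names.
    private static string ToClrType(string sqlType) =>
        sqlType.ToLowerInvariant() switch
        {
            "int" => "int",
            "bit" => "bool",
            "datetime" => "DateTime",
            var t when t.StartsWith("nvarchar") || t.StartsWith("varchar2") => "string",
            _ => "object"
        };
}
```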

This feature wasn't implemented, though, and now I'm having serious doubts about it, mostly because I've been studying Clean Architecture/Onion Architecture and Domain-Driven Design.

From a DDD perspective, everything should revolve around the domain, which in turn should be totally ignorant of its persistence.

So, my question is basically: how can I keep my domain model and my database schema in sync, without violating DRY and without using a "database-centric" approach?

EduWayne
  • 41
  • 2
  • 2
    Is your application CRUD? If it is, I wouldn't even bother to pack data into objects. Just use a resultset as an abstraction for all data and be done with the unnecessary complexity and the OO-RDBMS impedance mismatch. You can still organize your application into logical layers and implement the business logic as services. However, if you are designing a behavior-rich domain model, then it is the model that should drive the DB, not the opposite. Actually, it's better if both are decoupled by using an ORM that supports powerful mapping techniques. – plalx Sep 02 '16 at 13:30

1 Answer

2

DDD puts the focus on the domain language and its representation in domain classes. DB issues are not the primary concern of DDD.

Therefore, generating domain classes from the database schema is the wrong direction if the intention is to apply DDD.

This question is more about finding a decent way to manage DB upgrades, which has little to do with DDD. Unit/integration tests for basic read/write DB operations can go a long way in helping developers remember to edit the required files when DB columns are altered.
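For instance, a simple round-trip test along these lines (the `Customer` POCO, the mapper and the connection string are made up for illustration) starts failing as soon as, say, a new mandatory column exists in the table but not in the insert parameters, which prompts the developer to update the mapping code:

```csharp
using System.Data.SqlClient;   // assumption: a SQL Server test database
using NUnit.Framework;

[TestFixture]
public class CustomerPersistenceTests
{
    // Assumption: a dedicated, throwaway test database reachable under this name.
    private const string ConnectionString =
        "Server=.;Database=AppTests;Integrated Security=true";

    [Test]
    public void Customer_can_be_written_and_read_back()
    {
        var original = new Customer { Id = 1, Name = "Test", Email = "test@example.com" };

        using (var connection = new SqlConnection(ConnectionString))
        {
            connection.Open();

            // A real test would run inside a transaction or clean up after itself.
            using (var insert = connection.CreateCommand())
            {
                insert.CommandText =
                    "INSERT INTO Customer (Id, Name, Email) VALUES (@Id, @Name, @Email)";
                CustomerMapper.AddInsertParameters(insert, original);
                insert.ExecuteNonQuery();
            }

            using (var select = connection.CreateCommand())
            {
                select.CommandText = "SELECT Id, Name, Email FROM Customer WHERE Id = @Id";
                var idParameter = select.CreateParameter();
                idParameter.ParameterName = "@Id";
                idParameter.Value = original.Id;
                select.Parameters.Add(idParameter);

                using (var reader = select.ExecuteReader())
                {
                    Assert.That(reader.Read(), Is.True);
                    var loaded = CustomerMapper.Map(reader);
                    Assert.That(loaded.Email, Is.EqualTo(original.Email));
                }
            }
        }
    }
}
```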

Markus Pscheidt
  • 6,853
  • 5
  • 55
  • 76