First and foremost, I'd try to get a handle on the migration process if things are truly moving too fast. Schemas do change, but if you're spending a significant amount of time just keeping up with them, it's worth taking a step back and trying to stabilize things.
The next thing to do is to minimize the impact of schema changes: if you have a series of monolithic fixtures which set up entire data-states in one go, break them apart and use a compositional approach. For example, if you have a `users` table, there should be only one fixture that inserts new users. Other fixtures which depend on user data should rely on the user fixture to take care of that before inserting what they need. Keep the single-responsibility principle in mind and compose the data-state for a given test by invoking a chain of discrete, well-defined fixtures. This way, if a column is added to or removed from `users`, you only have to change a single fixture.
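To make the composition concrete, here's a minimal sketch in Python with the standard-library `sqlite3` module. The table names, columns, and helper names (`make_db`, `insert_user`, `insert_order`) are invented for illustration; in a real suite these would be fixtures in whatever test framework you use:

```python
import sqlite3


def make_db():
    """Create the schema once; stands in for your migration/setup step."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute(
        "CREATE TABLE orders (id INTEGER PRIMARY KEY,"
        " user_id INTEGER REFERENCES users (id), total REAL)"
    )
    return conn


def insert_user(conn, name="alice"):
    # The ONLY fixture that inserts users: if a column is added to or
    # removed from users, this is the single place to update.
    cur = conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    return cur.lastrowid


def insert_order(conn, total=9.99):
    # Composes on top of insert_user instead of inserting a user itself,
    # so it never needs to know the users schema.
    user_id = insert_user(conn)
    cur = conn.execute(
        "INSERT INTO orders (user_id, total) VALUES (?, ?)", (user_id, total)
    )
    return cur.lastrowid
```

A test that needs an order just calls `insert_order` and gets a well-formed user for free; neither the test nor the order fixture breaks when `users` changes.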
Finally, if you really want to automate things, you're getting into generating code. There's one major caveat here: it's very difficult to generate test data that humans can easily work with while debugging. If you're fine with meaningless gibberish in all your text fields, then it's not a dealbreaker, but it is something to be aware of.
Relational databases expose their structure and metadata in the information schema, so you can query it exactly like you do your own tables. You'll need to consult the documentation for your specific database vendor since there are differences among them, but you can, for example, list the tables in your database with the query `SELECT * FROM information_schema.tables`. Likewise for `information_schema.columns`; other tables describe primary keys, foreign keys, constraints, and so forth. Use that information to build queries which insert well-formed data, and you're off to the races. This is almost certainly more trouble than it's worth.
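For a rough sense of the approach, here's a sketch using SQLite's `PRAGMA table_info` as a stand-in for `information_schema.columns` (SQLite has no information schema, but the idea is the same: read column metadata, emit a matching statement). The helper name `generate_insert` and the skip-the-autoincrement-key heuristic are my own assumptions, not anything standard:

```python
import sqlite3


def generate_insert(conn, table):
    # Read column metadata from SQLite's pragma, the rough equivalent
    # of information_schema.columns in Postgres or MySQL. Each row is
    # (cid, name, type, notnull, dflt_value, pk).
    cols = [
        row[1]
        for row in conn.execute(f"PRAGMA table_info({table})")
        # Heuristic: skip an INTEGER primary key, which SQLite
        # auto-populates; real generators need smarter rules.
        if not (row[5] and row[2].upper() == "INTEGER")
    ]
    placeholders = ", ".join("?" for _ in cols)
    return f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({placeholders})"
```

Against a `users(id INTEGER PRIMARY KEY, name TEXT, email TEXT)` table this yields `INSERT INTO users (name, email) VALUES (?, ?)`; you'd still have to invent plausible values for each column type, which is exactly where the "meaningless gibberish" caveat bites.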