
I would like to implement continuous delivery that deploys every commit to the master branch in Git. However, most of the time a release is not code alone: along with the code there are usually DB changes and config changes (or new configs added).

So my question is: how is this handled in continuous delivery when a release involves code + DB + config changes together?

Let us take an example where I have 3 repos:

Repo A - source code - for checking in new features or bug fixes. An auto-deploy Git hook is set on this repo.

Repo B - DB changes

Repo C - config changes

Now suppose a developer has a code change that also involves config changes and DB changes. If the developer checks in the source code first (which triggers a build and deploy) but takes some time to check in the DB and config changes, the upstream environment will be running the latest code against the old DB or old config. This is inconsistent and can produce unwanted results.

I can think of 2 solutions to avoid the issue:

1) Train developers to check in the DB / config changes first and only then check in the source code.

OR

2) Have one more repo, called app-releases, containing a YAML file that records the app version, DB change metadata (such as script file names), and the config change label or tag version. Set the auto-deploy hook on this repo's branch instead. Developers can then check in code, DB, and config changes in any order, and finally check in the app-release file, which triggers the build.
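The release manifest in option 2 could look something like this. This is only a sketch: the file name, keys, and version/tag values are illustrative assumptions, not an established format.

```yaml
# app-releases/release.yaml -- hypothetical release manifest (all names illustrative)
app:
  version: 1.4.2                       # build/tag of the code in Repo A
db:
  scripts:
    - V1.4.2__add_orders_table.sql     # migration script checked into Repo B
config:
  tag: config-1.4.2                    # label/tag of the matching config in Repo C
```

The deploy hook on app-releases reads this file and deploys all three pieces together, so a half-finished check-in in any single repo never triggers a deployment on its own.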

If you have any other suggestions, please let me know.

  • What do you mean, "how is this handled"? If things are changed, they're deployed. – Daniel Mann Jun 01 '18 at 15:31
  • @DanielMann - but source code alone will be deployed right, even before the DB changes. That could cause an issue. – Prabhu shanmughapriyan Jun 01 '18 at 15:36
  • It will do that if you design your deployment pipeline to do it that way. You're not really asking a question that can be answered in a Q&A format. – Daniel Mann Jun 01 '18 at 15:38
  • @DanielMann - let me rephrase my question. Thanks – Prabhu shanmughapriyan Jun 01 '18 at 15:42
  • @DanielMann - how about now? – Prabhu shanmughapriyan Jun 01 '18 at 16:14
  • It's still way too broad. The answer is "have a build process that creates a deployable package of everything that needs to be deployed, then have a release process that deploys the build artifact through a pipeline of environments." However, that answer is more or less universal, and is not a solution to a specific problem. Stack Overflow is intended for specific questions on implementation, not broad questions on general practices. This may be a better fit for the DevOps Stack Exchange site. – Daniel Mann Jun 01 '18 at 16:17

1 Answer


I am answering from the point of view of MS SQL Server. If you use the database project that comes with the Visual Studio SQL Server Data Tools (SSDT) add-in, your schema changes are taken care of automatically during deployment via a DACPAC deployment. You can run the DACPAC deployment using PowerShell or SqlPackage.exe. These tools compare the current schema against the target database, generate the synchronizing SQL scripts, and execute them.
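For example, a DACPAC publish can be scripted roughly like this (the DACPAC and profile file names and the target are placeholders; `/Action`, `/SourceFile`, and `/Profile` are standard SqlPackage parameters):

```
REM Publish the compiled DACPAC to the DEV environment using its publish profile.
REM SqlPackage compares the DACPAC schema with the target database, generates
REM the sync script, and executes it.
SqlPackage.exe /Action:Publish ^
    /SourceFile:MyAppDb.dacpac ^
    /Profile:MyAppDb.DEV.publish.xml
```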

The configuration information for the DACPAC deployment is stored in publish profile XMLs, which are consumed during the deployment. There is a separate profile XML for each environment: DEV, INT, UAT, PROD. The schema changes are then deployed to the corresponding environment based on its profile XML.
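A publish profile is a small MSBuild-style XML file. A minimal DEV profile might look like the sketch below (the database name and connection string are placeholders):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- MyAppDb.DEV.publish.xml: connection and deployment options for DEV -->
<Project ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <TargetDatabaseName>MyAppDb</TargetDatabaseName>
    <TargetConnectionString>Data Source=dev-sql01;Integrated Security=True</TargetConnectionString>
    <!-- Fail the deployment rather than silently losing data -->
    <BlockOnPossibleDataLoss>True</BlockOnPossibleDataLoss>
  </PropertyGroup>
</Project>
```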

Venkataraman R