
I am working on continuous deployment for my service, which generates XML files as output. To achieve this, we are planning to add regression tests to our deployment flow that compare the XML file generated with a code change against the one generated without it.

However, some code changes will legitimately change the output, which would cause these tests to fail.

One approach could be to allow the tests to fail and generate a diff report that is then manually approved, roughly along the lines of the sketch below.
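
For illustration only, a rough sketch of what I mean (the file paths, the plain-text diff via Python's difflib, and the report location are placeholders, not our actual setup):

```python
import difflib
import pathlib

def compare_outputs(baseline_path: str, candidate_path: str, report_path: str) -> bool:
    """Compare the baseline XML with the newly generated XML.

    Returns True if they match; otherwise writes a unified diff report
    for manual approval and returns False.
    """
    baseline = pathlib.Path(baseline_path).read_text().splitlines()
    candidate = pathlib.Path(candidate_path).read_text().splitlines()
    diff = list(difflib.unified_diff(
        baseline, candidate,
        fromfile="without change", tofile="with change", lineterm=""))
    if not diff:
        return True
    # Publish the report as a build artifact for a human to approve or reject.
    pathlib.Path(report_path).write_text("\n".join(diff))
    return False
```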

How are such cases generally handled in continuous deployment?

1 Answer


You could use something like the xmldiff tool, which creates human-readable diffs between XML files. If a code change causes a test failure, the diff report is already generated for you.
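
For example, assuming this refers to the Python xmldiff package on PyPI (other tools share the name), a minimal sketch might look like this; the file names are placeholders:

```python
from xmldiff import main, formatting

# DiffFormatter renders the edit actions as plain, human-readable text.
formatter = formatting.DiffFormatter()
report = main.diff_files("baseline.xml", "candidate.xml", formatter=formatter)

if report:
    print(report)   # one edit action per line, e.g. updated text or attributes
else:
    print("No differences found")
```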

I've used similar utilities for screenshot comparison; they still require manual review when there are unexpected changes, but they speed up the process quite a bit.

Lorn