I'm writing an API that consists of several microservices. I have the code in a private Gitlab repo. I have a custom CI/CD pipeline configured to run a couple of different steps automatically on every commit to master (e.g. build, test, deploy to a dev environment). Deploying to prod is manual.
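For reference, here's a simplified sketch of my `.gitlab-ci.yml` (stage names, job names, and the helper scripts are illustrative, not my actual setup):

```yaml
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - ./scripts/build.sh          # placeholder for my actual build step

unit-tests:
  stage: test
  script:
    - ./scripts/run-unit-tests.sh # placeholder; runs on every commit

deploy-dev:
  stage: deploy
  script:
    - ./scripts/deploy.sh dev     # automatic deploy to dev on master
  only:
    - master

deploy-prod:
  stage: deploy
  script:
    - ./scripts/deploy.sh prod
  when: manual                    # prod deploy is a manual action
  only:
    - master
```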
I have written some unit tests around this code, which naturally exercise only small units of it. These run on every commit, of course, because a failure means something in the code has broken.
I also have regression tests which we run after deploying. One of these is actually a bash script that uses curl to hit my production endpoint with certain parameters and checks that I'm getting 200 responses. I have parameterized this script so I can easily point it at my dev environment instead of prod; a simplified version is below.
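For concreteness, this is roughly what the script looks like (the base URL and endpoint paths here are made up; the real script checks more endpoints and parameters):

```bash
#!/usr/bin/env bash
set -euo pipefail

# First argument selects the environment, e.g.:
#   ./smoke-test.sh https://dev.api.example.com
BASE_URL="${1:-https://api.example.com}"

# Illustrative endpoints; the real list is longer.
endpoints=(
  "/health"
  "/v1/widgets?limit=1"
)

for path in "${endpoints[@]}"; do
  # -s silences progress output; -o /dev/null discards the body;
  # -w '%{http_code}' prints only the HTTP status code.
  status=$(curl -s -o /dev/null -w '%{http_code}' "${BASE_URL}${path}")
  if [[ "$status" -ne 200 ]]; then
    echo "FAIL: ${BASE_URL}${path} returned ${status}" >&2
    exit 1
  fi
  echo "OK: ${path} -> ${status}"
done
```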
I use this regression test (and others like it) to check that my already-deployed service is functioning properly, and I run it right after deploying as a final double-check that everything is working. But I want to automate that step.
My question is: where does this fit in a CI/CD workflow? It wouldn't make sense to run this kind of regression test on every commit, because a commit is not necessarily coupled with a deploy, and because there are any number of reasons why the service might be down that are unrelated to whatever code changes went into the most recent commit. In other words, the pipeline should not fail because of external circumstances.
Are there any best practices for running and automating regression tests?