
I’ve been tasked with investigating whether something can be added to our MR approval process to highlight whether proposed Merge Requests (MRs) in GitLab, which contain C++ code, include any new unit tests or modifications to existing tests. The overall aim is to remind developers and approvers that they need to consider unit testing.

Ideally, a small script would run to detect the presence of additional tests or test changes (this I can easily write, and I accept that there’s a limit to how much can be done here) and display a warning on the MR if none were detected.

An additional step, if at all possible, would be to block the MR until either further commits are pushed that meet the criteria, or an (extra/custom) GitLab MR field is completed explaining why unit testing is not appropriate for this change. This field would be held with the MR for audit purposes. I accept that this is not foolproof, but am hoping to pilot this as part of a bigger push for more unit test coverage.

As mentioned, I can easily write a script in, say, Python to check for unit tests in the commit(s), but what I don’t know is whether/how I can hook this into the GitLab MR process (I looked at webhooks, but they seem to focus on notifying other systems rather than being transactional) and whether GitLab is extensible enough for us to achieve the additional step above. Any thoughts? Can this be done, and if so, how would I go about it?

Component 10

2 Answers


Measuring the lack of unit tests

"detect the presence of additional tests or changes"

I think you are looking for the wrong thing here. The fact that tests have changed, or that there are additional tests, does not mean that the MR contains any unit tests for the submitted code.

The underlying problem is, of course, a hard one.

A good approximation of what you want is typically to check how many lines of code are covered by the test suite.

If the test suite covers more lines of code after the MR than before, then the developer has done their homework and the test suite has improved. If the coverage has shrunk, then there is a problem.

of course it's still possible for a user to submit unit tests that are totally unrelated to their code changes, but at least the overall coverage has improved (or: if you already have a 100% coverage before the MR, then any MR that keeps the coverage at 100% and adds new code has obviously added unit tests for the new code).

Finally, to come to your question:

yes, it's possible to configure a gitlab-project to report the test-coverage change introduced by an MR.

https://docs.gitlab.com/ee/ci/pipelines/settings.html#test-coverage-parsing

You obviously need to create a coverage report from your unit-test run. How you do this depends on the unit-testing framework you are using, but the GitLab documentation gives some hints.
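
For a C++ project, a minimal sketch of such a job might look like the following. The job name, the test-runner command, and the choice of gcovr are illustrative assumptions, not requirements; the coverage regex is the gcovr pattern suggested in the GitLab documentation, and the same regex can alternatively be set in the project's CI/CD settings via the link above:

    unit-tests:
      stage: test
      script:
        # assumes the code was compiled with coverage instrumentation (e.g. g++ --coverage)
        - ./run_unit_tests           # placeholder for your actual test runner
        # gcovr prints a summary line such as "lines: 83.4% (167 out of 200)"
        - gcovr --print-summary
      # regex GitLab applies to the job log to extract the coverage figure
      coverage: '/^\s*lines:\s*\d+\.\d+\%/'

Once pipelines on both the target branch and the MR report a figure this way, the MR widget shows the coverage change introduced by the MR.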

umläute
    Thanks, but that's not really what I'm after. Gating the MR using a coverage report doesn't work well here mainly because of the time it takes in the pipeline to report it; it has been tried. To be clear we are using coverage reports as well, but not for gating MRs. I'm aware of the shortcomings of what I'm suggesting. It's not intended to be watertight but mainly to focus developers/approvers minds on the responsibility to cover their code with unit tests. – Component 10 Jun 25 '21 at 09:18

You don't need a web hook or anything like that. This should be something you can more or less trivially solve with just an extra job in your .gitlab-ci.yml. Run your python script and have it exit nonzero if there are no new tests, ideally with an error message indicating that new tests are required. Now when MRs are posted your job will run and if there are no new tests the pipeline will fail.

If you want the pipeline to fail very fast, you can put this new job at the head of the pipeline so that nothing else runs if this one fails.

You will probably want to make it conditional so that it only runs as part of an MR; otherwise you might get false failures (e.g. if just running the pipeline against some arbitrary commit on a branch).
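
Putting those pieces together, a minimal sketch of such a job might look like this. The job name and the grep pattern for what counts as a test file are illustrative assumptions, and the inline shell check stands in for the Python script; the rules condition and the predefined variables are standard merge request pipeline features:

    check-for-tests:
      stage: .pre  # built-in stage that runs before all others, so the pipeline fails fast
      rules:
        # only run in merge request pipelines, never for plain branch pipelines
        - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      script:
        # fetch the target branch so we can diff against it
        # (on shallow clones you may need a larger GIT_DEPTH for the merge base to exist)
        - git fetch origin "$CI_MERGE_REQUEST_TARGET_BRANCH_NAME"
        - |
          if git diff --name-only "origin/${CI_MERGE_REQUEST_TARGET_BRANCH_NAME}...HEAD" \
              | grep -qE '(^|/)tests?/|_test\.(cpp|cc)$'; then
            echo "Unit test changes detected."
          else
            echo "ERROR: no new or modified unit tests found in this MR." >&2
            echo "Push test changes, or record why tests are not applicable." >&2
            exit 1
          fi

Combined with the project setting that requires pipelines to succeed before merging, a failing job like this effectively blocks the MR until further commits make it pass.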

bstpierre