
We have automated integration tests, and we use VSTest to execute them in our Azure DevOps pipeline. We also have features/scenarios stored in Azure DevOps. Is there any way to create a link between an automated test and a feature/scenario in order to track test coverage?

Olin

1 Answer


There are fields on the Test Case work item for Associated Automation; however, as you've probably discovered, these fields are read-only in the work-item form.

[Screenshot: Associated Automation fields on the Test Case work item]

The primary mechanism for populating these fields is Visual Studio Professional or higher, and it is limited to certain test frameworks (NUnit, xUnit, MSTest, Coded UI). Visual Studio provides a context menu on the test that creates this mapping:

[Screenshot: "Associate to Test Case" context menu on a test in Visual Studio]

The goal of these fields is to use the built-in capability of the Azure DevOps VSTest@2 task: instead of running the tests against the compiled DLLs, you configure the task to execute a Test Plan and Configuration. The built-in test runner pulls the list of test cases, identifies the ones that have associated automation, locates the associated DLLs in the build output, executes those specific automated tests, and then publishes the results against the Test Plan/Configuration by mapping each automated outcome to the Test Plan's test-run execution. It's a fairly complex setup (described here) that is geared towards writing manual test cases first and automating them later. It's also geared towards self-serve execution of the automation by testers who don't need to know the underlying automation implementation. It can even scale the test execution across multiple build agents. When configured correctly, it is pure awesome.
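To make that concrete, here's a minimal sketch of the VSTest@2 task running in "test plan" mode rather than "test assemblies" mode. The `testSelector`, `testPlan`, `testSuite`, and `testConfiguration` inputs are real task inputs; the numeric IDs and search folder are placeholders you'd replace with your own:

```yaml
# Sketch: run associated automation from a Test Plan instead of raw DLLs.
# IDs below are placeholders - look them up in Test Plans in Azure DevOps.
- task: VSTest@2
  inputs:
    testSelector: 'testPlan'          # run by plan, not by assembly pattern
    testPlan: '123'                   # Test Plan id
    testSuite: '456'                  # Test Suite id(s), comma-separated
    testConfiguration: '789'          # Test Configuration id
    searchFolder: '$(System.DefaultWorkingDirectory)'  # where the built DLLs live
```

The runner then resolves each test case's Associated Automation fields against the assemblies it finds under `searchFolder`.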

The biggest challenge here is that this is not how most teams write their automation -- most of my teams are using Java or JavaScript tests. This feature has been around for a very long time, dating back to a very different era at Microsoft where the focus was "Microsoft tooling first", e.g. you needed Visual Studio to edit Azure DevOps work items. There's been very little development from Microsoft to make this more technology-agnostic; mostly just updates to the tasks to work with the latest version of Visual Studio.

This doesn't mean Microsoft has been slacking off. The problem for Java- and JavaScript-based tests is that the test name that appears in the JUnit test results is often up to the end-user's implementation, so it's difficult for Microsoft to reliably predict it and populate these Associated Automation fields. It's obviously far easier for them to support what they can control.

That being said, there are REST APIs for updating work items, test cases, and test-run executions. You could, in theory, define a standard naming convention for your tests, parse the results, and push the outcomes to the test-run execution.
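As a rough sketch of that approach, the snippet below builds a JSON Patch document that fills the Associated Automation fields on a Test Case and sends it via the Work Items REST API. The `Microsoft.VSTS.TCM.*` field reference names and the `application/json-patch+json` content type are the real ones; the organization, project, test-case ID, and PAT are placeholders, and the error handling is deliberately minimal:

```python
import base64
import json
import urllib.request
import uuid


def build_association_patch(test_name, assembly, automation_type="Unit Test"):
    """Build a JSON Patch document that populates the Associated Automation
    fields on a Test Case work item."""
    return [
        {"op": "add",
         "path": "/fields/Microsoft.VSTS.TCM.AutomatedTestName",
         "value": test_name},
        {"op": "add",
         "path": "/fields/Microsoft.VSTS.TCM.AutomatedTestStorage",
         "value": assembly},
        {"op": "add",
         "path": "/fields/Microsoft.VSTS.TCM.AutomatedTestId",
         "value": str(uuid.uuid4())},  # any stable GUID works here
        {"op": "add",
         "path": "/fields/Microsoft.VSTS.TCM.AutomatedTestType",
         "value": automation_type},
    ]


def associate_test(org, project, test_case_id, pat, test_name, assembly):
    """PATCH the Test Case work item via the Azure DevOps REST API.

    org/project/test_case_id/pat are placeholders for your own values.
    """
    url = (f"https://dev.azure.com/{org}/{project}/_apis/wit/"
           f"workitems/{test_case_id}?api-version=7.0")
    body = json.dumps(build_association_patch(test_name, assembly)).encode()
    token = base64.b64encode(f":{pat}".encode()).decode()  # PAT as basic auth
    req = urllib.request.Request(url, data=body, method="PATCH", headers={
        "Content-Type": "application/json-patch+json",
        "Authorization": f"Basic {token}",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

With a naming convention in place, a small script like this could walk your JUnit results and call `associate_test` (or push outcomes to a test run) for each matching test case.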

bryanbcook