For a Gradle-based side project I moved from JUnit 4 to JUnit 5 (Jupiter), which allows much more flexibility when writing parameterized tests. That's great.
There is, however, one annoying detail that makes debugging tests cumbersome: unlike IntelliJ IDEA's test runner, the Gradle test runner visualizes all test case results as a list, but the list entries (the test cases) are labeled only with the numeric index of the corresponding entry in the data source of the parameterized test, not with the actual test data as IDEA's test runner does. That does not really help in understanding which of many test cases failed.
I understand that I face this issue because I delegate the test runs to Gradle; things are fine when using IDEA's own test runner. I hesitate to switch to that one, however: the reason I use Gradle is that I use an external build pipeline, and using two different test runners smells like having to deal with different test outcomes...
So my question is: how can one get the Gradle test runner to reference the test cases by the actual test data, similar to what IDEA's test runner does? I tried using a map, but Jupiter complains that it fails to stream it... The only workaround I found is to print some data set identifier to StdOut, but that gets buried in the rest of the output. Can anyone tell me how to achieve this in a more elegant way? Ideally as suggested in the example below.
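For reference, the StdOut workaround mentioned above looks roughly like this. It is a minimal, self-contained sketch: the `Factored` interface, the implementing classes, and the `DataSetLogger` class are stand-ins for the real production types, and the loop is reduced to a plain `main` method so it runs without JUnit or the Reflections library.

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class DataSetLogger {
    // Stand-ins for the real production types:
    interface Factored {}
    static class Writer implements Factored {}
    static class Reader implements Factored {}

    // Simplified stand-in for the Reflections-based method source
    static Set<Class<? extends Factored>> allFactoredClasses() {
        Set<Class<? extends Factored>> classes = new LinkedHashSet<>();
        classes.add(Writer.class);
        classes.add(Reader.class);
        return classes;
    }

    public static void main(String[] args) {
        int index = 1;
        for (Class<? extends Factored> factoredClass : allFactoredClasses()) {
            // In the real test this println runs inside the test method, so the
            // mapping from [n] to the data set ends up buried in Gradle's output:
            System.out.println("[" + index++ + "] data set: "
                    + factoredClass.getSimpleName());
        }
    }
}
```

This at least lets me correlate the numeric `[n]` labels with the data sets, but only by scrolling through the captured output of the whole test run.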
An example:
import java.util.Set;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.MethodSource;
import org.reflections.Reflections;

@ParameterizedTest
@MethodSource("allFactoredClasses")
public void clearInstances_shouldClearInstances(Class<? extends Factored> factoredClass) {
    // ...
}

private static Set<Class<? extends Factored>> allFactoredClasses() {
    // Collect every subtype of Factored on the classpath as test data
    Reflections reflections = new Reflections("com.example.project");
    return reflections.getSubTypesOf(Factored.class);
}
This is the actual visualization (typed out so that I do not have to post an image):
TestResults
com.example.project.factory.FactoredTest
v clearInstances_shouldClearInstances(Class)[1]
v clearInstances_shouldClearInstances(Class)[2]
v clearInstances_shouldClearInstances(Class)[3]
x clearInstances_shouldClearInstances(Class)[4]
v clearInstances_shouldClearInstances(Class)[5]
v clearInstances_shouldClearInstances(Class)[6]
v clearInstances_shouldClearInstances(Class)[7]
This is the desired visualization:
TestResults
com.example.project.factory.FactoredTest
v clearInstances_shouldClearInstances(Class)[Controller]
v clearInstances_shouldClearInstances(Class)[Reader]
v clearInstances_shouldClearInstances(Class)[Parser]
x clearInstances_shouldClearInstances(Class)[Writer]
v clearInstances_shouldClearInstances(Class)[Logger]
v clearInstances_shouldClearInstances(Class)[Filter]
v clearInstances_shouldClearInstances(Class)[Command]
This would make it much easier to immediately see that the test case for the "Writer" data set (Writer.class) has failed...