
For a Gradle-based side project I moved from JUnit 4 to JUnit 5 (Jupiter), which allows much more flexibility when writing parameterized tests. That's great.

There is however one annoying detail which makes debugging tests cumbersome: like IntelliJ IDEA's test runner, the Gradle test runner visualizes all test case results as a list, but the list's entries (the test cases) are referenced by the numeric index of the data set entries in the data source of the parameterized test, not by the actual test data as IDEA's test runner does. That does not really help to understand which of many test cases fails.

I understand that I face this issue because of delegating the test runs to Gradle. Things are fine when using IDEA's own test runner. I hesitate to use that one, however: the reason I use Gradle in the first place is that I use an external build pipeline, and using two different test runners smells like having to deal with different test outcomes...

So my question is: how can one get the Gradle test runner to use the actual test data as a reference for the test cases, similar to what IDEA's test runner does? I tried using a map, but Jupiter complains that it fails to stream it. The only workaround I found is to output some data set identifier to stdout, but that is buried in the rest of the output. Can anyone tell me how to achieve this in a more elegant way? Ideally as suggested in the example below?
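To illustrate the stdout workaround mentioned above: each data set entry prints its own identifier before the actual test logic runs, so a failing index can be matched to a class manually. Sketched here as plain Java (the class names are stand-ins, not my real data source):

```java
import java.util.List;

// Sketch of the stdout workaround: print each data set's identifier so a
// failing numeric index can be matched to a class by hand.
public class StdoutWorkaroundSketch {

    // stands in for the parameterized test body
    static void clearInstances_shouldClearInstances(Class<?> factoredClass) {
        System.out.println("[data set] " + factoredClass.getSimpleName());
        // ... actual assertions would go here ...
    }

    public static void main(String[] args) {
        // stand-in data sets; the real source is allFactoredClasses()
        for (Class<?> c : List.of(Integer.class, String.class)) {
            clearInstances_shouldClearInstances(c);
        }
    }
}
```

This works, but the identifiers end up buried in the overall test output rather than in the result list itself.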

An example:

import java.util.Set;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.MethodSource;
import org.reflections.Reflections;

@ParameterizedTest
@MethodSource("allFactoredClasses")
public void clearInstances_shouldClearInstances(Class<? extends Factored> factoredClass) {
    // ...
}

private static Set<Class<? extends Factored>> allFactoredClasses() {
    Reflections reflections = new Reflections("com.example.project");
    return reflections.getSubTypesOf(Factored.class);
}

This is the actual visualization (typed out so that I do not have to post an image):

TestResults
  com.example.project.factory.FactoredTest
    v clearInstances_shouldClearInstances(Class)[1]
    v clearInstances_shouldClearInstances(Class)[2]
    v clearInstances_shouldClearInstances(Class)[3]
    x clearInstances_shouldClearInstances(Class)[4]
    v clearInstances_shouldClearInstances(Class)[5]
    v clearInstances_shouldClearInstances(Class)[6]
    v clearInstances_shouldClearInstances(Class)[7]

This is the desired visualization:

TestResults
  com.example.project.factory.FactoredTest
    v clearInstances_shouldClearInstances(Class)[Controller]
    v clearInstances_shouldClearInstances(Class)[Reader]
    v clearInstances_shouldClearInstances(Class)[Parser]
    x clearInstances_shouldClearInstances(Class)[Writer]
    v clearInstances_shouldClearInstances(Class)[Logger]
    v clearInstances_shouldClearInstances(Class)[Filter]
    v clearInstances_shouldClearInstances(Class)[Command]

This would make it much easier to immediately see that the test case for the "Writer" data set (Writer.class) has failed...

arkascha
  • The IDE's test runner displays the arguments. I think what you're seeing is a limitation of Gradle, when letting IntelliJ delegate the tests to Gradle. If possible, try running the tests with the IntelliJ runner. – JB Nizet Dec 18 '19 at 22:56
  • @JBNizet Indeed, this is a Gradle-based project. I started without it, but since I have to use external pipelines I was forced to hand over control to Gradle. Using the IDE's test runner means that tests are executed differently than in the pipeline; not sure if that is a good idea... Anyway, I tried that, since I did remember having seen something like this, but nothing changed. I changed the settings for "Build and run using" and "Run tests using" from "Gradle" to "IntelliJ IDEA". Anything else I have to consider here? – arkascha Dec 18 '19 at 23:06
  • Make sure to delete the previous run configuration before re-running the test, otherwise IntelliJ just relaunches it. – JB Nizet Dec 18 '19 at 23:08
  • @JBNizet Ah, that's it, thanks! I only invalidated the caches and restarted the IDE before, but forgot to delete the run configurations. Thanks for the hint! Though I am still not sure if it is a good idea to use two different test runners... – arkascha Dec 18 '19 at 23:13
  • @JBNizet I revised the question pointing out the difference between the two test runners. Thanks again. – arkascha Dec 18 '19 at 23:35

0 Answers