
I am writing tests for an interpreter of a programming language in Java using the JUnit framework. To this end I've created a large number of test cases, most of them containing code snippets in the language under test. Since these snippets are normally small, it is convenient to embed them in the Java code. However, Java doesn't support multiline string literals, which makes the code snippets somewhat obscure due to escape sequences and the need to split longer string literals, for example:

String output = run("let a := 21;\n" +
                    "let b := 21;\n" +
                    "print a + b;");
assertEquals("42", output);

Ideally I would like something like:

String output = run("""
    let a := 21;
    let b := 21;
    print a + b;
""");
assertEquals("42", output);

One possible solution is to move the code snippets to external files and refer to each file from the corresponding test case. However, this adds a significant maintenance burden.
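
For illustration, a minimal sketch of this approach; the readSnippet helper, the snippets directory, and the file name are hypothetical, and run is the method from the example above:

import static org.junit.Assert.assertEquals;

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.junit.Test;

public class SnippetFileTest {
    // Hypothetical helper: loads a code snippet from an external file.
    private static String readSnippet(String name) throws IOException {
        return new String(Files.readAllBytes(Paths.get("snippets", name)),
                          StandardCharsets.UTF_8);
    }

    @Test
    public void addsTwoNumbers() throws IOException {
        // run() is the interpreter entry point from the example above.
        String output = run(readSnippet("addition.snippet"));
        assertEquals("42", output);
    }
}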

Another solution is to write the tests in a different JVM language, such as Scala or Jython, that supports multiline string literals. However, this adds a new dependency to the project and requires porting the existing tests.

Is there any other way to keep the clarity of the test code snippets while not adding too much maintenance?

vitaut
  • Please edit this question, as it’s not about interpreters but about multiline strings. –  Jun 22 '14 at 09:46

3 Answers


Moving the test cases to a file worked for me in the past; it was an interpreter as well:

  1. We created an XML file containing the snippets to be interpreted as well as the expected results. It was a fairly simple XML definition: a list of test elements, mainly containing a testID, value, expected result, type, and a description.
  2. We implemented exactly one JUnit test that read the file and looped through its contents; in case of failure we used the testID and description to log the failing tests.

It mainly worked because we had one generic, well-defined interface to the interpreter, like your run method, so refactoring was still possible. In our case this did not increase the maintenance effort; in fact, we could easily create new tests by just adding more elements to the XML file.
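
A rough sketch of what that setup can look like, assuming a hypothetical tests.xml layout and the run method from the question (the element and attribute names are invented for illustration):

import static org.junit.Assert.assertEquals;

import java.io.File;

import javax.xml.parsers.DocumentBuilderFactory;

import org.junit.Test;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class XmlDrivenInterpreterTest {
    // Assumed tests.xml layout (invented for illustration):
    //
    // <tests>
    //   <test id="add" description="integer addition">
    //     <code>let a := 21; let b := 21; print a + b;</code>
    //     <expected>42</expected>
    //   </test>
    // </tests>

    @Test
    public void runAllSnippets() throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File("tests.xml"));
        NodeList tests = doc.getElementsByTagName("test");
        for (int i = 0; i < tests.getLength(); i++) {
            Element test = (Element) tests.item(i);
            String id = test.getAttribute("id");
            String code = test.getElementsByTagName("code")
                    .item(0).getTextContent();
            String expected = test.getElementsByTagName("expected")
                    .item(0).getTextContent();
            // run() is the interpreter entry point from the question.
            // Passing the id as the assertion message identifies the
            // failing case in the output.
            assertEquals(id, expected, run(code));
        }
    }
}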

Maybe this is not the optimal way to use unit tests, but it worked well for us.

home
  • Thanks a lot for sharing your experience. I thought about a similar approach, but it seems that having just one JUnit test that loads and executes everything will make debugging more difficult. When you have many JUnit test cases, you can put a breakpoint in the failing one and easily re-execute it using an IDE. But how do you do that with one test case? – vitaut Oct 09 '11 at 07:20
  • @vitaut: You're correct, that was a bit clumsy. We solved that programmatically by defining 1 or 2 internal methods that got called by the looping method. If I remember correctly, we also had more than one XML file and a switch for each test case inside the XML to enable/disable certain tests, so you could manually reduce test execution to exactly one case, or you could just copy the required test values into another test class and manually execute it from your IDE if you needed the debugger. It was 3 or 4 years ago, so using another language was not yet an option. – home Oct 09 '11 at 07:42
  • @vitaut, a little trick you could use is to add a marker character (like an exclamation mark) as the first character of the name/title of the tests you want to debug, and to add a few lines in your loop like `if (name.startsWith("!")) { log.debug("Debugging " + name); }`, placing the breakpoint on the log line. That way you can choose which test to debug by adding/removing one `!`. – rsp Oct 09 '11 at 08:45

Since you are talking about other JVM languages, have you considered Groovy? You would have to add an external dependency, but only at compile/test time (you don't have to ship it in your production package), and it provides multiline strings. One major advantage in your case: its syntax is backwards compatible with Java, meaning you won't have to rewrite your existing tests!
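
For example, the test from the question could be written almost verbatim in a Groovy test class, with the snippet in a triple-quoted string (a sketch, assuming the same run method):

import org.junit.Test

import static org.junit.Assert.assertEquals

class InterpreterTest {
    @Test
    void addsTwoNumbers() {
        // Groovy triple-quoted strings may span multiple lines.
        // run() is the interpreter entry point from the question;
        // depending on how it tokenizes input, you may need to trim
        // the leading newline and indentation (e.g. with stripIndent()).
        String output = run("""
            let a := 21;
            let b := 21;
            print a + b;
        """)
        assertEquals("42", output)
    }
}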

Vivien Barousse

I have done this in the past, with something similar to what home suggested: I used external file(s) containing the tests and their expected results, but with the @Parameterized test runner.

import java.io.File;
import java.util.LinkedList;
import java.util.List;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class ParameterTest {
    @Parameters
    public static List<Object[]> data() {
        // One Object[] per test file: { filename, contents }.
        List<Object[]> list = new LinkedList<Object[]>();
        for (File file : new File("/temp").listFiles()) {
            list.add(new Object[]{file.getAbsolutePath(), readFile(file)});
        }

        return list;
    }

    private static String readFile(File file) {
        // read file
        return "file contents";
    }

    private final String filename;
    private final String contents;

    // Called once per entry returned by data().
    public ParameterTest(String filename, String contents) {
        this.filename = filename;
        this.contents = contents;
    }

    @Test
    public void test1() {
        // here we test something
    }

    @Test
    public void test2() {
        // here we test something
    }
}

Here we are running test1() and test2() once for each file in /temp, passing the filename and the contents of the file as parameters. The test class is instantiated and its tests run once for each item added to the list in the method annotated with @Parameters.

Using this test runner, you can rerun a particular file if it fails; most IDEs support rerunning a single failed test. The disadvantage of @Parameterized is that there is no way to sensibly name the tests, so the names that appear in the Eclipse JUnit plugin are just 0, 1, 2, etc. But at least you can rerun the failed tests.

As home says, good logging is important for identifying the failing tests correctly and for debugging, especially when running outside the IDE.

Matthew Farwell