
I need to test numerical software that runs on both Linux and Windows. The tests involve comparing outputs to known-good outputs, etc., similar to what is described at Numerical regression testing. To clarify, the outputs are not necessarily numeric - they could be categorical predictions of a classifier, or text. On Linux that's a diff; on Windows it's something else, but my goal is to write each test only once. I know that CMake/CTest can be used to generate cross-platform tests, but they seem to be limited to checking for non-zero exit status. Is there software that can choose the right "diff" automatically on each platform? Maybe a CTest module/package that I'm not aware of?

theTrickster
    If you're writing a numerical package, you probably already have a language and library that allows you to do more complicated things than comparing numbers. I suspect it would be the easiest thing to code your regression tests directly in C (or whatever your language) and make the test exit with a status indicating success if and only if the test passed. – 5gon12eder Nov 21 '15 at 02:06
  • Don't use diff. You will never get the same result on different computers. You have to define sane absolute and relative errors and compare your results accordingly. – usr1234567 Nov 21 '15 at 08:22
  • @usr1234567, I added clarification that the outputs are not necessarily numeric. So for many of the tests, diff is appropriate. – theTrickster Nov 21 '15 at 20:37

1 Answer


CMake and CTest do not provide sophisticated test evaluation. They mainly distinguish passing tests, skipped tests, and failing tests (non-zero return value, exception, or a signal such as SIGTERM).
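
As a baseline, the default behaviour is purely exit-status based; a minimal sketch, where my_solver_test is a placeholder executable name:

    # The test passes if and only if the program exits with status 0.
    add_test(NAME smoke_test COMMAND my_solver_test)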

  1. Use the FAIL_REGULAR_EXPRESSION / PASS_REGULAR_EXPRESSION test properties to evaluate the output; both accept regular expressions. Regular expressions are quite limited, though; in particular, they are useless for estimating numerical errors. One could write a more elaborate CMake function to extend CTest's capabilities here, but I am not aware of any project doing this. (A small sketch follows below the list.)

  2. Use an external tool specialized in unit testing. CTest then evaluates the tool's return value. Depending on the language, possible external tools are googletest, JUnit, Python's unittest, or even diff. (A sketch using a portable, diff-like comparison follows below the list.)

  3. Write your own output-evaluation script. Perl or Python, for example, are easy to use and good at parsing text. You register the script via add_test, passing the actual test as a parameter. The script runs the test, compares the output, and signals the result to CTest through its exit status. (A sketch follows below the list.)
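
A minimal sketch of option 1, assuming a hypothetical test program my_solver_test that prints "converged" on success; PASS_REGULAR_EXPRESSION and FAIL_REGULAR_EXPRESSION are the actual CTest property names, everything else is illustrative:

    add_test(NAME solver_converges COMMAND my_solver_test)
    set_tests_properties(solver_converges PROPERTIES
      # Pass if this expression matches anywhere in the test's output.
      PASS_REGULAR_EXPRESSION "converged"
      # Fail if any of these expressions match the output.
      FAIL_REGULAR_EXPRESSION "NaN;ERROR"
    )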
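
For option 2, one portable stand-in for diff is CMake's own command-line tool mode, cmake -E compare_files, which exits with a non-zero status when the two files differ; the file paths below are placeholders:

    # CTest turns the non-zero exit status of compare_files into a test failure.
    add_test(NAME predictions_match_reference
      COMMAND ${CMAKE_COMMAND} -E compare_files
              ${CMAKE_CURRENT_BINARY_DIR}/predictions.txt
              ${CMAKE_CURRENT_SOURCE_DIR}/expected/predictions.txt)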
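
And a sketch of option 3, assuming a hypothetical Python driver run_and_compare.py that runs the program, compares its output against a reference file, and exits non-zero on any mismatch (FindPython3 needs a reasonably recent CMake):

    find_package(Python3 COMPONENTS Interpreter REQUIRED)
    add_test(NAME regression_case_01
      COMMAND ${Python3_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/run_and_compare.py
              $<TARGET_FILE:my_solver>                           # program under test
              ${CMAKE_CURRENT_SOURCE_DIR}/expected/case_01.txt)  # known-good output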

So you either have to install an additional language interpreter such as Perl or Python, or a third-party helper tool, or you stick with writing your evaluation separately for each platform.

usr1234567