
We have a C++ project with a relatively large number of test suites implemented with Boost.Test. All tests are kept outside the main project's tree, and every test suite is located in a separate .cpp file. So, our current CMakeLists.txt for the tests looks like this:

cmake_minimum_required(VERSION 2.6)

project(TEST_PROJECT)
find_package(Boost COMPONENTS unit_test_framework REQUIRED)

set(SPEC_SOURCES
    main.cpp
    spec_foo.cpp
    spec_bar.cpp
    ...
)

set(MAIN_PATH some/path/to/our/main/tree)    
set(MAIN_SOURCES
    ${MAIN_PATH}/foo.cpp
    ${MAIN_PATH}/bar.cpp
    ...
)

add_executable (test_project
    ${SPEC_SOURCES}
    ${MAIN_SOURCES}
)

target_link_libraries(test_project
    ${Boost_UNIT_TEST_FRAMEWORK_LIBRARY}
)

add_test(test_project test_project)

enable_testing()

It works OK, but the problem is that SPEC_SOURCES and MAIN_SOURCES are fairly long lists, and someone occasionally breaks something in one of the files in the main tree or in the spec sources. This, in turn, makes it impossible to build the test executable and run the rest of the tests. One has to manually figure out what broke, go into CMakeLists.txt, and comment out the parts that fail to compile.

So, the question: is there a way in CMake to automatically ignore tests that fail to build, and compile, link and run the rest (ideally marking the ones that failed as "failed to build")?

A remotely related question, Best practice using boost test and tests that should not compile, suggests the try_compile command in CMake. However, in its bare form it just executes a new, ad hoc generated CMakeLists (which will fail just as the original one does) and doesn't provide any hooks to remove uncompilable units.
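For reference, here is a rough sketch of that bare, whole-project form of try_compile; the "specs" subdirectory and the result variable name are hypothetical. It configures and builds a separate project in a scratch directory and reports only a single success/failure result:

# Hypothetical illustration of the bare whole-project form of try_compile.
# It configures and builds the project found in the given source directory
# (here an assumed "specs" subdirectory with its own CMakeLists.txt) inside a
# scratch build directory, and reports success or failure as a whole; there is
# no per-source hook for filtering out individual uncompilable .cpp files.
try_compile(SPECS_DID_COMPILE                   # result variable (TRUE/FALSE)
    ${CMAKE_BINARY_DIR}/try_compile_scratch     # scratch build directory
    ${CMAKE_CURRENT_SOURCE_DIR}/specs           # directory with a CMakeLists.txt
    TEST_PROJECT                                # project name to build
)
message(STATUS "spec sub-project compiles: ${SPECS_DID_COMPILE}")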


2 Answers


I think you have some issues in your testing approach.

One has to manually figure out what was broken, go into CMakeLists.txt and comment out parts that fail to compile.

If you have good unit-test coverage, you should be able to identify and locate problems really quickly. Continuous integration servers (e.g. Jenkins, Buildbot, Travis for GitHub) can be very helpful here. They will run your tests even if some developers have not done so before committing.

Also, you assume that a non-compiling class (and its test) would just have to be removed from the build. But what about transitive dependencies, where a non-compiling class breaks the compilation of other classes or leads to linker errors? What about tests that break the build? All these things happen during development.

I suggest you separate your build into many libraries, each having its own test runner. Put together what belongs together (cohesion). Also try to minimize dependencies in your compilation (dependency injection, interfaces, ...). This will allow you to keep development going by having compiling libraries and test runners even if some libs do not compile for some time.
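A minimal sketch of that layout, assuming hypothetical foo and bar libraries carved out of MAIN_SOURCES (all names are only for illustration):

enable_testing()

# Each library gets its own test runner; if one library stops compiling,
# the other runners still build and run.
add_library(foo ${MAIN_PATH}/foo.cpp)
add_executable(spec_foo main.cpp spec_foo.cpp)
target_link_libraries(spec_foo foo ${Boost_UNIT_TEST_FRAMEWORK_LIBRARY})
add_test(spec_foo spec_foo)

add_library(bar ${MAIN_PATH}/bar.cpp)
add_executable(spec_bar main.cpp spec_bar.cpp)
target_link_libraries(spec_bar bar ${Boost_UNIT_TEST_FRAMEWORK_LIBRARY})
add_test(spec_bar spec_bar)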

ToniBig

I guess you could create one test executable per spec source (using a foreach() loop) and then do something like:

make spec_foo && ./spec_foo

This will only try to build the binary matching the test you want to run.
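A minimal sketch of that loop, assuming the SPEC_SOURCES and MAIN_SOURCES variables from the question and that main.cpp holds the shared Boost.Test entry point:

# One executable and one CTest entry per spec_*.cpp; "make spec_foo" then
# builds only that target.
foreach(spec ${SPEC_SOURCES})
    if(NOT spec STREQUAL "main.cpp")
        get_filename_component(spec_name ${spec} NAME_WE)   # e.g. spec_foo
        add_executable(${spec_name} main.cpp ${spec} ${MAIN_SOURCES})
        target_link_libraries(${spec_name} ${Boost_UNIT_TEST_FRAMEWORK_LIBRARY})
        add_test(${spec_name} ${spec_name})
    endif()
endforeach()

Note that each runner still compiles all of MAIN_SOURCES, so this isolates breakage in individual spec files but not in the main tree.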

But if your build often fails, it may be a sign of some bad design in your production code...

Dimitri Merejkowsky