The absence of a way to skip a test in CATCH, Google Test, and other frameworks (at least in the traditional sense, where you specify a reason for skipping and see it in the output) made me wonder whether I need such a feature at all (I've been using UnitTest++ in my past projects).
Normally, yeah, there shouldn't be any reason to skip anything in a desktop app: you either test something or you don't. But when it comes to hardware, some things can't be guaranteed.
For example, I have two devices: one comes with an embedded beeper, and the other doesn't. In UnitTest++ I would query the system, find out that the beeper is not available, and just skip the tests that depend on it. In CATCH, of course, I can do something similar: query the system during initialization, and then exclude all tests tagged "beeper" (tags are a built-in CATCH feature).
However, there's a significant difference: a tester (someone other than me) would read the output and not find those optional tests mentioned at all (whereas in UnitTest++ they'd be marked as skipped, with the reason given as part of the output). Their first thoughts:
- This must be some old version of the testing app.
- Maybe I forgot to enable suite X.
- Something is probably broken, I should ask the developer.
- Wait, maybe they were just skipped. But why? I'll ask the developer anyway.
Moreover, they could simply NOT notice that those tests were skipped, when they actually shouldn't have been (e.g. the OS reports "no beeper" regardless of whether the beeper is present, which is a major bug). One option would be to mark "skipped" tests as passed, but that feels like an unnecessary workaround.
Is there some clever technique I'm not aware of (e.g. separating the optional tests into a standalone program altogether)? If not, should I just stick with UnitTest++? It does the job, but I really like CATCH's SECTIONs and tags; they help avoid code repetition.