I have a question about Acceptance Test Driven Development (ATDD). Following the process, I start every feature with an acceptance (end-to-end) test. I commit these tests, and they fail as expected. The problem is that I need some way to distinguish the acceptance tests that fail because the feature is not yet complete from the ones that fail because of a regression. What is the best practice for organizing a CI process with ATDD?
- You can tag Gherkin fixtures with attributes: for example, an attribute that marks incomplete tests (if you're using a Gherkin-based ATDD framework such as SpecFlow or Cucumber); see the sketch after these comments. – levelnis Feb 22 '13 at 10:58
- I don't have personal experience with it, but I visited a software company that showed me how they developed software. They wrote their acceptance tests in Selenium and used sprint IDs in the test names as a convention. That way they could easily see whether the failing tests were new or old. Is that what you mean? (P.S.: good question; I'm surprised there is so little response.) – bas Feb 23 '13 at 20:37
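As levelnis suggests, tagging unfinished scenarios lets CI filter them out. A minimal sketch with Cucumber-JVM (5+) and JUnit 4; the class name, feature path, and the `@wip` tag itself are illustrative, and SpecFlow offers equivalent tag filtering:

```java
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;
import org.junit.runner.RunWith;

// CI regression runner: executes every scenario NOT tagged @wip, so
// acceptance tests for unfinished features never turn the build red.
// In the feature files, mark in-progress scenarios like this:
//
//   @wip
//   Scenario: Customer completes checkout
//     ...
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",
        tags = "not @wip" // Cucumber tag expression: skip work in progress
)
public class RegressionAcceptanceTests {
}
```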
1 Answer
Tests for features that are not implemented yet should not run in CI. The point of CI tests is to catch regressions. Catching "not done yet" problems creates a situation where red builds are "normal" and get ignored, which is the worst possible outcome.
There are a lot of ways to do this, and the best will depend on your context. The simplest is to write the acceptance test first but not check it in until it passes (i.e., until you have implemented the feature).

– tallseth
- Good answer. As for not checking in: another option is to check the tests in but disable them somehow. Most xUnit test frameworks have a way of telling the framework that a test should be ignored (a sketch follows after these comments). – Petrik Feb 24 '13 at 22:24
- I don't quite agree with this. Not all acceptance tests can be executed locally to check whether they pass; they need a production-like environment to run in, and that is not always available. Also, several developers might work on the same story, and they all want to see their progress on it, the progress that the acceptance tests can show. – Markus Feb 25 '13 at 06:08
- If you can't run acceptance tests locally, that is a ***serious problem***! I would work to fix that. – tallseth Feb 25 '13 at 15:40
- In any case, the general idea still applies: separate "in progress" tests from "regression" tests. This can also be achieved by having a separate environment or build that is CI-like but designed to report progress on in-progress acceptance tests. – tallseth Feb 25 '13 at 15:42
- I know this is an old(ish) question, but I thought I'd throw something in. At my previous workplace we had two builds running: one executed everything except tests marked 'wip' (work in progress), and the other executed only the tests marked 'wip'. Once the sprint/iteration was complete, the wip tags were removed and those tests became part of the standard build, ready to catch any regression bugs. As tallseth says, all your tests should be executable locally, but we wanted to run them on a production-like environment as well. Plus it can be a good measure of progress (see the sketches below). – thecodefish Nov 12 '13 at 04:11
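Petrik's disable-and-check-in option might look like this with JUnit 4's @Ignore (the test class, method, and reason string are invented for illustration; NUnit's [Ignore] and similar attributes in other xUnit frameworks behave the same way):

```java
import org.junit.Ignore;
import org.junit.Test;

public class CheckoutAcceptanceTest {

    // Checked in alongside the story, but excluded from CI until the feature
    // is done: JUnit reports @Ignore'd tests as skipped rather than failed.
    @Ignore("Feature in progress - enable once the checkout flow is implemented")
    @Test
    public void customerCanCompleteCheckout() {
        // end-to-end assertions for the finished feature go here
    }
}
```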
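And thecodefish's two-build setup falls out naturally from the same tags: the regression build uses the `not @wip` runner sketched earlier, while a second, companion build runs only the work in progress. Again a sketch with illustrative names:

```java
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;
import org.junit.runner.RunWith;

// "Progress" build: executes ONLY @wip scenarios. A red result here means
// "the feature isn't finished yet" and never pollutes the regression build.
// When the sprint ends, deleting the @wip tag moves a scenario into the
// regression build automatically.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",
        tags = "@wip"
)
public class WipProgressTests {
}
```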