
As a follow-on to the discussion in the comments of this answer: should a TDD test always be made to fail first?

Consider the following example. If I am writing an implementation of LinkedHashSet and one test tests that after inserting a duplicate, the original is in the same iteration order as before the insert, I might want to add a separate test that the duplicate is not in the set at all.

The first test will be observed to fail first, and then implemented.

The problem is that it is quite likely that the implementation to make the first test pass used a different set implementation to store the data, so just as a side effect the second test already passes.
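The scenario above can be sketched as two small tests. This is a minimal sketch run against `java.util.LinkedHashSet` as a stand-in for the implementation under test; the class and builder method names here are illustrative, not from the original question:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class DuplicateInsertExample {
    // Insert a, b, c, then re-insert the duplicate "b".
    static Set<String> buildSet() {
        Set<String> set = new LinkedHashSet<>(Arrays.asList("a", "b", "c"));
        set.add("b"); // duplicate insert: should be a no-op
        return set;
    }

    public static void main(String[] args) {
        Set<String> set = buildSet();

        // Test 1: iteration order is unchanged by the duplicate insert.
        List<String> order = new ArrayList<>(set);
        if (!order.equals(Arrays.asList("a", "b", "c")))
            throw new AssertionError("order changed: " + order);

        // Test 2: the duplicate was not stored a second time.
        if (set.size() != 3)
            throw new AssertionError("duplicate stored: size=" + set.size());

        System.out.println("both tests pass");
    }
}
```

The point of the question is that an implementation written only to satisfy Test 1 (say, one backed by any set structure that rejects duplicates) will usually make Test 2 pass as a side effect, so Test 2 may never be observed to fail.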

I would think that the main purpose of seeing the test fail is to ensure that the test is a good test (many times I've written a test I thought would fail but didn't because the test was written wrong). But if you are confident that the test you write does indeed test something, isn't it valuable to have to ensure that you don't break that behavior later?

Yishai
  • I'm admittedly, at best, a dynamic language rookie (hence the comment instead of answer), but my gut feeling is -- yes for dynamic languages, and it depends for static languages. Code that would fail compilation in a static language is going to run in a dynamic language, but cause the test to fail. In order for a static language to run, you have to add some sort of baseline behavior to the code under test (even if just empty methods), which COULD cause a correctly written test to pass first. – Jeremy Frey Jul 03 '09 at 15:41

6 Answers


Of course it's valuable, because then it is a useful regression test. In my opinion, regression tests are more important than testing newly developed code.

To say that they must always fail first is taking a rule beyond practicality.

FogleBird

Yes, TDD tests must fail before they turn green (work). Otherwise you do not know if you have a valid test.

Chris Ballance
  • I disagree -- take a static language where you have a test like some_value_should_always_be_false(). Writing just enough code to make the test compile is probably going to make the test pass, as well, unless you deliberately sabotage the test & set the value under test to true. – Jeremy Frey Jul 03 '09 at 15:51
  • So how do you address scenarios like the one presented in the OP? – Yishai Jul 03 '09 at 15:55
  • @Jeremy Frey - You are correct not to sabotage your tests to make the initial test fail. The goal is not to test the language itself, but to test your logic. In the example you mentioned, the test seems too trivial to add value if you can assume the value to always be false. I hope this helps. – Chris Ballance Jul 03 '09 at 16:03

For me, TDD is primarily a design tool, not an afterthought. So there is no other way: the test fails at first simply because there is no code to make it pass yet; only after I write that code can the test ever pass.

Otávio Décio

I think the point of "failing first" is to avoid kidding yourself that a test worked. If you have a set of tests checking the same method with different parameters, one (or more) of them is likely to pass from the start. Consider this example:

public String doFoo(int param) {
    //TODO implement me
    return null;
}

The tests would be something like:

public void testDoFoo_matches() {
    assertEquals("Geoff Hurst", createBar().doFoo(1966));
}

public void testDoFoo_validNoMatch() {
    assertEquals("no match", createBar().doFoo(1));
}

public void testDoFoo_outOfRange() {
    assertEquals(null, createBar().doFoo(-1));
}

public void testDoFoo_tryAgain() {
    assertEquals("try again", createBar().doFoo(0));
}

One of those tests will pass, but clearly the others won't, so you have to implement the code properly for the set of tests to pass. I think that is the true requirement. The spirit of the rule is to ensure you have thought about the expected outcome before you start hacking.

Rich Seller

What you're actually asking is how you can test the test to verify that it is a valid one and it tests what you intend.

Making it fail at first is an ok option, but note that even if it fails when you plan it to fail and succeeds after you refactor the code to make it succeed, that still doesn't mean that your test actually tested what you wanted... Of course you can write some other classes which behave differently to test your test... But that's actually a test which tests your original test - How do you know that the new test is valid? :-)

So making a test fail first is a good idea but it still isn't foolproof.

Danra

IMHO, the importance of failing first is to make sure the test you created doesn't have a flaw. You could, for instance, forget the assert in your test, and you might never know it.
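The forgotten-assert case looks like this in practice. A hedged illustration with hypothetical names, not code from the answer:

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class MissingAssertExample {
    // A "test" that exercises the code but forgot its assertion,
    // so it can never fail -- it passes whether the set is correct or not.
    static boolean testDuplicateNotStored() {
        Set<String> set = new LinkedHashSet<>();
        set.add("a");
        set.add("a");          // duplicate insert
        int size = set.size(); // computed, but never compared to anything
        return true;           // always "passes"
    }

    public static void main(String[] args) {
        System.out.println("passes: " + testDuplicateNotStored());
    }
}
```

Watching the test fail before the feature exists is what catches this kind of mistake: a test that cannot fail never goes red.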

A similar case occurs with boundary tests: the code you've already written may happen to cover them, but it is still recommended to test them explicitly.

I don't think it's a big problem if your test doesn't fail first, but you do have to make sure it is indeed testing what it should (by stepping through it in a debugger, perhaps).

Samuel Carrijo
  • Another technique to make sure it is testing what it should is to temporarily introduce an error into the code under test and see if the tests catch it. This is known as the saboteur method. – Pixelstix Jul 07 '23 at 17:53
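The saboteur technique from the comment above can be sketched as follows. This is a minimal illustration with hypothetical names: temporarily break the code under test and confirm the test actually goes red before restoring it:

```java
public class SaboteurExample {
    // Code under test, with a switch that lets us sabotage it on purpose.
    static boolean sabotage = false;

    static int add(int a, int b) {
        return sabotage ? a + b + 1 : a + b; // sabotaged version is off by one
    }

    // The test whose validity we want to check.
    static boolean testAdd() {
        return add(2, 2) == 4;
    }

    public static void main(String[] args) {
        boolean normal = testAdd();    // should pass
        sabotage = true;
        boolean sabotaged = testAdd(); // should FAIL, proving the test has teeth
        sabotage = false;              // restore the code under test

        if (!normal || sabotaged)
            throw new AssertionError("test does not detect the sabotage");
        System.out.println("normal=" + normal + " sabotaged=" + sabotaged);
    }
}
```

If the test still passes while the code is sabotaged, the test is not checking what you think it is, which is exactly the flaw that fail-first is meant to expose.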