
I used TDD to develop a class that in one part looks like this

#include <vector>

class A
{
public:
    // methods, members, stuff

private:
    std::vector<int> vectorA;
    std::vector<int> vectorB;

    bool isAbove(double lowerBound)
    {
        double x = (double)vectorA.size() / vectorB.size();

        return x > lowerBound;
    }

    // more methods, members, stuff
};

All tests pass even if vectorB.size() is actually 0 when isAbove() is called; no exception is thrown. It doesn't matter whether I build with debug info or not. However, debugging showed that x is -nan(ind) when compared to lowerBound (as you may have expected).

I use VS2015 with the VS2013 (v120) toolset. ideone.com produces the same result.

Should I check for vectorB.size() == 0 before calculating x, even though (by the process of TDD) this check isn't needed?

TobiMcNamobi
  • I don't see what this has to do with TDD. – Lightness Races in Orbit May 10 '16 at 11:37
  • @LightnessRacesinOrbit In my pre-TDD times I got used to "Whenever you do `a / b`, check that `b != 0` before". Is such a rule still valid if all tests pass? Bonus question: How about de-referencing a pointer that possibly could be `nullptr`? It's nearly the same question I think. – TobiMcNamobi May 10 '16 at 11:42
  • Floating point division by zero gives undefined behaviour in C++, so any result you get (in your case, based on using IEEE representation from your description) is correct. You really need to test that `vectorB.size()` is non-zero before dividing by it. Alternatively, return `vectorA.size() > lowerBound*vectorB.size()` which gives a (mathematically) equivalent result without any division - the only potential issue with that is overflow in computing `lowerBound*vectorB.size()`. – Peter May 10 '16 at 11:44
  • @TobiMcNamobi: I don't think this has anything to do with TDD. You're basically asking "can I get away without arithmetic sanity checks if they happen not to appear to be necessary on my system today?" – Lightness Races in Orbit May 10 '16 at 11:55
  • @Peter Please write your answers in the _answer section_ so that they can be subject to all the usual peer review mechanisms. Comments are for requesting clarification and for arguing with people. – Lightness Races in Orbit May 10 '16 at 11:56
  • @LightnessRacesinOrbit: TDD assumes that the test system coincides with the specification. This is of course very untrue for C++, where Undefined Behavior even invalidates the laws of logic. – MSalters May 10 '16 at 12:01
  • @MSalters: Making division-by-zero undefined hardly "invalidates the laws of logic". You'll find this principle across almost all branches of mathematics. – Lightness Races in Orbit May 10 '16 at 12:02
  • @Lightness - I did not consider that my comment represents a complete answer. However, I'll post as an answer shortly and see where it goes. – Peter May 10 '16 at 12:02
  • @LightnessRacesinOrbit: My point is that after 1/0, you cannot rely on `true != false` or similar basic logic principles. This is rather annoying for TDD, where it's assumed that the code is correct if the test results match the program's requirements. – MSalters May 10 '16 at 12:10
  • @LightnessRacesinOrbit Rather "[C]an I get away without arithmetic sanity checks if they happen not to appear to be necessary using the process of TDD?" – TobiMcNamobi May 10 '16 at 12:14
  • If all your tests pass when you do a division by zero, it means that your tests are inadequate, not that the sanity check isn't necessary. – molbdnilo May 10 '16 at 12:41
  • @molbdnilo While I agree that sanity checks are valuable, I don't think that unit tests can enforce them. I know that unit tests may enforce that an exception is thrown. But that's not the desired behaviour in this case. – TobiMcNamobi May 10 '16 at 13:00
  • Again, this has nothing to do with TDD. It's a basic facet of any tests, no matter what your development methodology is: writing tests with false positives is always possible. It is up to the test author to write meaningful tests in the given context. Here, the given context is the definition of arithmetic in C++. – Lightness Races in Orbit May 10 '16 at 13:00
  • @LightnessRacesinOrbit With other tests (regression, acceptance, end user) one could argue that the code in question may never have been reached. But it can quite easily be covered with unit tests. Additionally, TDD tells you to write code only if a test fails. So, yes, there are other contexts with false positives, but the question has been asked in the context of TDD. – TobiMcNamobi May 10 '16 at 13:26
  • It has nothing to do with the code potentially never being reached, either. – Lightness Races in Orbit May 10 '16 at 13:33

2 Answers


Floating-point division by zero is well-defined by IEEE 754 (which almost certainly governs floating-point operations on your system): dividing a positive value by zero yields positive infinity, which will always be "above" lowerBound, while 0.0 / 0.0 yields a NaN (your observed -nan(ind)), for which every ordered comparison, including > lowerBound, is false.

However, strictly speaking, in terms of C++ itself, division by zero has undefined behaviour. If you want to be absolutely, portably safe, and you don't mind the performance penalty, you can avoid it via an if statement. But that's entirely up to you.

[C++14: 5/4]: If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined. [ Note: most existing implementations of C++ ignore integer overflows. Treatment of division by zero, forming a remainder using a zero divisor, and all floating point exceptions vary among machines, and is usually adjustable by a library function. —end note ]

[C++14: 5.6/4]: [..] If the second operand of / or % is zero the behavior is undefined. [..]

Lightness Races in Orbit

Floating point division by zero gives undefined behaviour in C++, so any result you get is correct (at least, within standard C++). There is no requirement that a particular value be produced, or that any signal be generated, on division by zero.

The behaviour you describe is consistent with your implementation using IEEE floating point: division of a positive value by zero gives positive infinity, division of a negative value by zero gives negative infinity, and division of zero by zero gives a NaN (not-a-number). This is a slight over-simplification, since IEEE representation also allows positive and negative zeros, but you will not get a negative zero simply by converting a std::vector's size to double.

You really need to test that vectorB.size() is non-zero before dividing by it.

Alternatively, return vectorA.size() > lowerBound*vectorB.size() which gives a (mathematically) equivalent result without any division - the only potential issue with that is overflow in computing lowerBound*vectorB.size() (which can be addressed by doing range checking on lowerBound and/or vectorB.size()).

Peter