26

Consider the following:

struct A {
    A(float) { }
    A(int) { }
};

int main() {
    A{1.1}; // error: ambiguous
}

This fails to compile with an error about an ambiguous overload of A::A. Both candidates are considered viable, because the requirement is simply:

Second, for F to be a viable function, there shall exist for each argument an implicit conversion sequence (13.3.3.1) that converts that argument to the corresponding parameter of F.

While there is an implicit conversion sequence from double to int, the A(int) overload isn't actually viable (in the canonical, non-C++-standard sense) - that would involve a narrowing conversion and thus be ill-formed.

Why are narrowing conversions not considered in the process of determining viable candidates? Are there any other situations where an overload is considered ambiguous despite only one candidate being viable?

Barry
  • But isn't the conversion from double to float also a narrowing conversion? Can we always define a "least narrowing conversion"? – Caninonos Jul 30 '15 at 17:31
  • @Caninonos Not in this case, no. – Barry Jul 30 '15 at 17:32
  • *Why are narrowing conversions not considered in the process of determining viable candidates?* I think you mean to ask *Why are narrowing conversions not **ignored** in the process of determining viable candidates?* – R Sahu Jul 30 '15 at 17:33
  • Not in this case, yes, but the compiler doesn't care about the values involved to decide whether this is legal or not. It only uses the values' types. (otherwise, let's say that instead, you asked the user to input a double via cin, should the compiler use a float or an unsigned long long version?) – Caninonos Jul 30 '15 at 17:33
  • Just so I/others understand the premise here: `double` to `float` is not a narrowing conversion but `double` to `int` is? What kind of conversion is `double` to `float`? – David Jul 30 '15 at 17:47
  • @Dave `double` to `int` is a floating-integral conversion, which is always narrowing. `double` to `float` is a floating-point conversion, which can be, but isn't necessarily, narrowing. – Barry Jul 30 '15 at 17:51
  • @jaggedSpire While we're at it, if I just did `typedef A double`, I wouldn't even have to do overload resolution! – Barry Jul 30 '15 at 17:51
  • @Barry What if you provide to `A`'s constructor the result of `constexpr double square(42.0)`? And, then another more complicated constexpr function? Of course, the compiler might choose the "good" overload, but won't that be difficult to predict for the programmer? Can you be sure there won't be any inconsistencies between compilers? What if because of that, the compiler silently choose the wrong overload? – Caninonos Jul 30 '15 at 18:09
  • @Caninonos It doesn't matter how complicated the constexpr function is. What matters is that it's a constant expression, within the range of `float`. All the other questions are moot - of course compilers can, and do, have bugs. – Barry Jul 30 '15 at 18:21
  • @Barry My point is, that is certainly possible, but should you wish for it? That would mean that constructor calls with the same arguments' types could possibly use different overloads, that is to say that the programmer would have to pay attention not only to its argument types but also to whether they can be infered at compile-time or not (and what their value will be, which defeats the purpose of generating them by the compiler in the first place). As for compilers bugs, the standard can't suddenly change how overloads are handled, that may break too much code (or worse). – Caninonos Jul 30 '15 at 18:30
  • @Barry 123456789.0 is a double that can be represented in an int but not a float (it round-trips to 123456792). They're both narrowing, and both "not necessarily", at that. double can represent every float _and_ every int, on a typical system; neither float nor int can represent every possible value of the other type. – Random832 Jul 31 '15 at 04:00
  • And, for the record, 1.1 [double] is almost certainly _not_ representable as a float. On my system `1.1f` converts to a double as `1.100000023841858`. – Random832 Jul 31 '15 at 04:04
  • @Random832 No. According to the standard, `double` --> `int` is *always* narrowing, and `double` --> `float` is narrowing except where the source is a constant expression and the value is "within the range of values that can be represented (even if it cannot be represented exactly)". It's irrelevant whether it's representable in either case. – Barry Jul 31 '15 at 10:36

2 Answers

15

A problem lies with the fact that narrowing conversions cannot be detected based on types alone: whether a conversion narrows can depend on the actual value being converted.

There are very complex ways to generate values at compile time in C++.

Blocking narrowing conversions is a good thing. Making C++'s overload resolution even more complex than it already is would be a bad thing.

Ignoring narrowing conversion rules when determining overload resolution (which makes overload resolution purely about types), and then erroring out when the selected overload results in a narrowing conversion, keeps overload resolution from being even more complex, and adds in a way to detect and prevent narrowing conversions.

Two examples where only one candidate is viable would be template functions that fail "late", during instantiation, and copy-list-initialization (where explicit constructors are considered, but if one is chosen, you get an error). In both cases, having that failure feed back into overload resolution would make overload resolution even more complex than it already is.

Now, one might ask, why not fold narrowing conversion purely into the type system?

Making narrowing conversions purely type-based would be non-viable. Such a change could break huge amounts of "legacy" code that the compiler could prove was valid. The effort required to sweep a code base is far more worthwhile when most of the errors are actual errors, and not the new compiler version being a jerk.

unsigned char buff[]={0xff, 0x00, 0x1f};

This would fail under a type-based narrowing rule, as 0xff has type int, and such code is very common.

Had such code required pointless modification of the int literals to unsigned char literals, odds are the sweep would have ended with us setting a flag to tell the compiler to shut up about the stupid error.

Yakk - Adam Nevraumont
  • Maybe a better solution then would be to make narrowing conversion type-based? Just have `1.1` --> `float` be a narrowing conversion. – Barry Jul 30 '15 at 18:37
  • @Barry Then `unsigned char buff[]={0xff, 0x00, 0x1f};` breaks for no good reason (`int` to `unsigned char` would be narrowing, even when the `int` is in range for an `unsigned char`). Breaking existing code that provably had no errors should only be done with great reason. Would narrowing conversion have made it in without that feature? We just did a huge sweep to add the narrowing conversion (and other C++11/14 features) support to a code base: the fact that many of the breaks were actual "oops" made it a lot less tempting to just tell the compiler to shut up. – Yakk - Adam Nevraumont Jul 30 '15 at 18:39
  • You should just throw that comment into the answer. And lol at compiler being a jerk. – Barry Jul 30 '15 at 18:47
  • *copy-list-initialization* has a similar rule - it considers all constructors in overload resolution and then renders the code ill-formed if the selected constructor is explicit. That's IMO better than copy-initialization's only including non-explicit constructors in the overload set. – T.C. Jul 30 '15 at 19:09
  • @dyp [The narrowing conversion rules don't make a difference there](http://coliru.stacked-crooked.com/a/3fe25131fe65607b), so I'm not sure what you mean? In what situation are the types of the various components of the expression not sufficient to work out which of the overloads are chosen (in a given context, naturally)? – Yakk - Adam Nevraumont Jul 30 '15 at 20:33
  • Oh, silly me. You're right of course. Even if we drop the ranking difference: http://coliru.stacked-crooked.com/a/036acdb702fd79f7 – dyp Jul 30 '15 at 20:35
8
  • Narrowing is something the compiler only knows about for built-in types. A user defined implicit conversion can't be marked as narrowing or not.

  • Narrowing conversions shouldn't be permitted to be implicit in the first place. (Unfortunately it was required for C compatibility. This has been somewhat corrected with {} initialization prohibiting narrowing for built-in types.)

Given these, it makes sense that the overload rules don't bother to mention this special case. It might be an occasional convenience, but it's not all that valuable. IMO it's better in general to have fewer factors involved in overload resolution and to reject more things as ambiguous, forcing the programmer to resolve such things explicitly.


Also, double to float is a narrowing conversion when the double isn't a constant expression, or when its value is outside float's range.

#include <iostream>
#include <iomanip>

int main() {
    double d{1.1};
    float f{d};
    std::cout << std::setprecision(100) << d << " " << f << '\n';
}

This will normally produce an error:

main.cpp:7:13: error: non-constant-expression cannot be narrowed from type 'double' to 'float' in initializer list [-Wc++11-narrowing]
    float f{d};
            ^
bames53