In case Barry's excellent answer still isn't clear, here's my version, hope it helps.
The biggest question is why the user-defined conversion to `optional<int>` isn't preferred in direct initialization:

    std::optional<int> my_opt(my_foo);

After all, there is a constructor `optional<int>(optional<int>&&)` and a user-defined conversion of `my_foo` to `optional<int>`.

The reason is the `template<typename U> optional(U&&)` constructor template, which is supposed to activate when `T` (`int`) is constructible from `U` and `U` is neither `std::in_place_t` nor `optional<T>`, and direct-initialize `T` from it. And so it does, stamping out `optional(foo&)`.
The final generated `optional<int>` looks something like:

    class optional<int> {
        . . .
        int value_;
        . . .
        optional(optional&& rhs);
        optional(foo& rhs) : value_(rhs) {}
        . . .
    };

`optional(optional&&)` requires a user-defined conversion, whereas `optional(foo&)` is an exact match for `my_foo`. So it wins, and direct-initializes `int` from `my_foo`. Only at this point is `operator int()` selected as a better match to initialize an `int`. The result thus becomes `2`.
2) In the case of `my_opt = static_cast<std::optional<int>>(my_foo)`, although it sounds like "initialize `my_opt` as if it were a `std::optional<int>`", it actually means "create a temporary `std::optional<int>` from `my_foo` and move-assign from that", as described in [expr.static.cast]/4:

> If `T` is a reference type, the effect is the same as performing the declaration and initialization
>
>     T t(e);
>
> for some invented temporary variable `t` ([dcl.init]) and then using the temporary variable as the result of the conversion. Otherwise, the result object is direct-initialized from `e`.
So it becomes:

    my_opt = std::optional<int>(my_foo);

And we're back to the previous situation; `my_opt` is subsequently initialized from a temporary `optional`, already holding a `2`.
The issue of overloading on forwarding references is well-known. Scott Meyers, in *Effective Modern C++*, Item 26, talks extensively about why it is a bad idea to overload on "universal references". Such templates will tirelessly stamp out whatever type you throw at them, overshadowing everything and anything that is not an exact match. So I'm surprised the committee chose this route.
As to the reason why it is like this: in the proposal N3793, and in the standard until Nov 15, 2016, it was indeed

    optional(const T& v);
    optional(T&& v);

But then, as part of LWG defect 2451, it got changed to

    template <class U = T> optional(U&& v);

with the following rationale:
> Code such as the following is currently ill-formed (thanks to STL for the compelling example):
>
>     optional<string> opt_str = "meow";
>
> This is because it would require two user-defined conversions (from `const char*` to `string`, and from `string` to `optional<string>`) where the language permits only one. This is likely to be a surprise and an inconvenience for users.
>
> `optional<T>` should be implicitly convertible from any `U` that is implicitly convertible to `T`. This can be implemented as a non-explicit constructor template `optional(U&&)`, which is enabled via SFINAE only if `is_convertible_v<U, T>` and `is_constructible_v<T, U>`, plus any additional conditions needed to avoid ambiguity with other constructors...
In the end I think it's OK that `T` is ranked higher than `optional<T>`; after all, it's a rather unusual choice between something that may have a value and the value itself.

Performance-wise it is also beneficial to initialize from `T` rather than from another `optional<T>`. An `optional` is typically implemented as:
    template<typename T>
    struct optional {
        union {
            char dummy;
            T value;
        };
        bool has_value;
    };
So initializing it from an `optional<T>&` would look something like:

    optional<T>::optional(const optional<T>& rhs) {
        has_value = rhs.has_value;
        if (has_value) {
            value = rhs.value;
        }
    }

Whereas initializing from a `T&` requires fewer steps:

    optional<T>::optional(const T& t) {
        value = t;
        has_value = true;
    }