If I compile the following code:
#include <boost/range/irange.hpp>

template<class Integer>
auto iota(Integer last)
{
    return boost::irange(0, last);
}

template<class Integer, class StepSize>
auto iota(Integer last, StepSize step_size)
{
    return boost::irange(0, last, step_size);
}

int main()
{
    int y = 0;
    for (auto x : iota(5))
        y += x;
    return y;
}
With both clang 3.9.0 and GCC 6.3.0 at -O1, GCC optimizes the program completely (main() simply returns the final result), while clang emits lots and lots of output code. See this experiment on GodBolt. If I switch clang to -O2, however, it also compiles everything away.

What optimization differences between the two compilers' -O1 modes cause this to happen?