
If I compile the following code:

#include <boost/range/irange.hpp>

template<class Integer>
auto iota(Integer last)
{
    return boost::irange(0, last);
}

template<class Integer, class StepSize>
auto iota(Integer last, StepSize step_size)
{
    return boost::irange(0, last, step_size);
}

int main()
{
    int y = 0;
    for (auto x : iota(5))  // x takes the values 0, 1, 2, 3, 4
        y += x;
    return y;               // 0+1+2+3+4 == 10
}

With both Clang 3.9.0 and GCC 6.3.0 at -O1, I get complete optimization from GCC (main() simply returns the final result) but a large amount of output code from Clang, which keeps the whole range machinery. See this experiment on GodBolt. If I switch Clang to -O2, however, it also compiles everything away.
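For reference, this is the hand-written loop that the fully-optimized builds effectively reduce the program to (the constant is just 0+1+2+3+4):

// Hand-written equivalent of main() above, with the irange
// abstraction stripped away; a fully-optimizing build should
// reduce this, like the original, to "return 10".
int main()
{
    int y = 0;
    for (int x = 0; x < 5; ++x)  // same trip count as iota(5)
        y += x;
    return y;                    // 10
}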

What optimization differences between the two compilers' -O1 modes cause this to happen?

einpoklum
  • `-O1` is basically just the "make my debug code fast" optimization level. You use it when you want semi-optimized code to be produced nearly as fast as unoptimized code could be generated. You don't use it when you actually want optimized code. So I don't really know why this would matter. And as far as I can tell, the Clang front-end doesn't allow the same fine-grained control over optimization passes that GCC does (you can only control this with `opt`; see the sketch after these comments), so even once you figure out this information, I don't see how it would be of much practical use. – Cody Gray - on strike Jan 27 '17 at 19:09
  • @CodyGray: Umm, `-Og` is "make my debug code fast" IIANM. Also, I just want to know. – einpoklum Jan 27 '17 at 19:58
  • `-Og` is the same as `-O1` in Clang, and support for it was only added recently. It's in the trunk version, but I don't think it's supposed to land officially until v4.0. It's not there in v3.9.1, which is the latest I have available. Even in GCC, `-Og` is just `-O1` with a few options that are designed to make debugging easier, like not stripping symbols. – Cody Gray - on strike Jan 27 '17 at 20:02
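A sketch of how one might poke at the pass pipelines along the lines of the comments above. These are standard clang/opt invocations, but note that opt's -O1/-O2 pipelines only approximate the ones the clang driver configures, and the file names here are made up:

clang++ -std=c++14 -O1 -emit-llvm -S test.cpp -o test-o1.ll   # IR after Clang's -O1 pipeline
clang++ -std=c++14 -O2 -emit-llvm -S test.cpp -o test-o2.ll   # IR after Clang's -O2 pipeline
diff test-o1.ll test-o2.ll                                    # where the two levels diverge
opt -O1 -debug-pass=Structure test-o1.ll -o /dev/null         # list the passes opt runs at -O1
opt -O2 -debug-pass=Structure test-o1.ll -o /dev/null         # list the passes opt runs at -O2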

0 Answers