
I've read about those new execution policies in C++17 on cppreference.com: https://en.cppreference.com/w/cpp/algorithm/execution_policy_tag_t

I was wondering: should we now prefer these over range-based loops when we want to let the compiler optimize as much as possible? I'm using GCC, which doesn't implement them yet, so I can't test it myself, but for the future, should I prefer this:

#include <algorithm>
#include <execution>
#include <iterator>

int a[] = {1, 2};
std::for_each(std::execution::par_unseq, std::begin(a), std::end(a), [](int& p) {
  ++p;
});

Or this:

for(auto& p : a) {
  ++p;
}
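
For reference, here are both variants combined into one self-contained program, assuming a standard library that implements the C++17 parallel algorithms (with libstdc++ that typically means linking against TBB, e.g. g++ -std=c++17 main.cpp -ltbb); the vector contents are just illustration:

#include <algorithm>
#include <execution>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3, 4};

    // Variant 1: C++17 algorithm with a parallel/vectorized execution policy.
    std::for_each(std::execution::par_unseq, v.begin(), v.end(),
                  [](int& p) { ++p; });

    // Variant 2: plain range-based for loop; the compiler may still vectorize it.
    for (auto& p : v) {
        ++p;
    }

    // After both passes every element has been incremented twice: 3 4 5 6
    for (int p : v) {
        std::cout << p << ' ';
    }
    std::cout << '\n';
}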
asked by Michael Mahn, edited by Peter Cordes
  • The only way to know for sure is to use them and compare the generated code. Since you can't do that yet, IMO simplicity always comes first which means the range-based for loop. – Some programmer dude Oct 28 '18 at 18:36
  • For an array of two objects there is no benefit from running those increments in parallel. But there is also a third option: use the pre-C++17 version of `std::for_each` (a sketch of that variant follows after these comments). – Pete Becker Oct 28 '18 at 21:11
  • @Someprogrammerdude: "*The only way to know for sure is to use them and compare the generated code.*" And that won't give you a useful answer. It will only be meaningful for that *specific* case (including the type of objects being iterated over), and it can change with compiler versions. Compilers will get better at implementing the parallelized `for`, but they will also get better at detecting when a range-`for` can be parallelized. – Nicol Bolas Oct 28 '18 at 23:19
  • @NicolBolas The [CppCoreGuidelines](http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#per-performance) take the same view as Some programmer dude. – Werner Henze Jan 13 '19 at 15:22
  • You could also ask this question on https://github.com/isocpp/CppCoreGuidelines/issues. But maybe this is already covered by a rule like Per.4 or Per.6. – Werner Henze Jan 13 '19 at 15:25
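
For completeness, a minimal sketch of the third option mentioned in the comments above, plain std::for_each without an execution policy; it only reuses the names from the question's own snippet:

#include <algorithm>
#include <iterator>

int main() {
    int a[] = {1, 2};

    // Serial std::for_each: same observable effect as the range-based loop,
    // just expressed as an algorithm call instead of a loop.
    std::for_each(std::begin(a), std::end(a), [](int& p) { ++p; });
}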

0 Answers