
I need help figuring out automatic parallelization; the compiler reports "Loop not parallelized: may not be beneficial". I want to test this code with parallelization, but I don't know how to make the code amenable for the compiler to parallelize it.

Here is the code:

    for (i = 0; i < piece_length; i++) {
        x = (i / (double)piece_length) + piece / (float)2;
        // if (x <= 1.0) {
            integral = 4 / (1 + x * x);
            sum = sum + integral;
        // }
    }

Loop not parallelized: may not be beneficial

Do you know how to make this loop demanding enough, or otherwise suitable, for the compiler to accept it for automatic parallelization?

Thx

Waypoint

1 Answer


The result that you are accumulating in sum depends on the order of computation (floating-point addition is not associative). I imagine pgcc has a way for you to tell it that you don't care about the implications that reordering might have, but as written it can't know that, so it can't parallelize anything.

In OpenMP you would put something like

#pragma omp parallel for reduction(+: sum)

in front of the loop.
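
For reference, here is a minimal compilable sketch of what that looks like applied to the loop from the question. The values of piece and piece_length are placeholders, since the question doesn't show them; compile with OpenMP enabled (e.g. -mp for pgcc, -fopenmp for gcc).

    #include <stdio.h>

    int main(void) {
        const int piece_length = 100000000;  /* placeholder value, not from the question */
        const double piece = 0.0;            /* placeholder value, not from the question */
        double sum = 0.0;

        /* reduction(+: sum) gives each thread a private copy of sum and
           adds the copies together at the end, which removes the
           loop-carried dependence on sum */
        #pragma omp parallel for reduction(+: sum)
        for (int i = 0; i < piece_length; i++) {
            double x = (i / (double)piece_length) + piece / 2.0;
            double integral = 4.0 / (1.0 + x * x);
            sum = sum + integral;
        }

        printf("sum = %f\n", sum);
        return 0;
    }

Note that the reduction explicitly tells the compiler you accept whatever summation order the threads produce, which is exactly the information the auto-parallelizer cannot infer on its own.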

Jens Gustedt
  • Thanks for the reply. I know about the pragma, but I was wondering about auto-parallelization... in this case it doesn't look promising. Yes, I see that the line sum=sum+integral; makes the result order-dependent, but there must be a way to solve this... – Waypoint Mar 20 '11 at 07:37
  • @Hmyzak, you can't expect wonders; automatic parallelization of the loop as written is impossible. You have to loosen the requirements to make it possible. – Jens Gustedt Mar 20 '11 at 07:40