
I recently started studying parallel programming. I'm still at the beginning, so I wanted to try something very simple. Since I'm ultimately interested in performing parallel numerical integration, I started with a simple summation code in Fortran:

program par_hello_world

  use omp_lib
  implicit none

  integer, parameter :: bign = 1000000000
  integer :: i
  double precision :: start, finish, start1, finish1, a

  a = 0

  ! Timed parallel summation; REDUCTION(+:a) gives each thread a private
  ! copy of a and adds the copies together at the end of the loop.
  call cpu_time(start)
  !$OMP PARALLEL num_threads(8)
  !$OMP DO REDUCTION(+:a)
  do i = 1, bign
    a = a + sqrt(1.0**5)
  end do
  !$OMP END DO
  !$OMP END PARALLEL
  call cpu_time(finish)

  print*, 'parallel result:'
  print*, a
  print*, (finish-start)

  ! The same summation done sequentially, for comparison.
  a = 0
  call cpu_time(start1)
  do i = 1, bign
    a = a + sqrt(1.0**5)
  end do
  call cpu_time(finish1)

  print*, 'sequential result:'
  print*, a
  print*, (finish1-start1)

end program

The code basically simulates a summation. I used the odd expression sqrt(1.0**5) to get a measurable computation time; if I just added 1, the run time was so small that I could not compare the sequential code with the parallel one. I tried to avoid a race condition on a by using the REDUCTION clause.
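As Vladimir F points out in the comments below, sqrt(1.0**5) is a constant the compiler can evaluate at compile time, so the loop may not be doing the work I intended. A minimal variant whose per-iteration work actually depends on the loop index (sqrt(dble(i)) is only an illustrative choice) could be dropped in place of the parallel loop above:

  ! Hypothetical variant: the summand depends on i, so the compiler
  ! cannot fold it into a constant at compile time.
  !$OMP PARALLEL num_threads(8)
  !$OMP DO REDUCTION(+:a)
  do i = 1, bign
    a = a + sqrt(dble(i))
  end do
  !$OMP END DO
  !$OMP END PARALLEL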

However, I'm getting very strange timing results:

  1. If I raise the number of threads from 2 to 16, I don't see a reduction in computation time; if anything, it increases.
  2. Incredibly, even the sequential code seems to be influenced by the choice of the number of threads (I really don't understand why!); in particular, its reported time goes up as I raise the number of threads.
  3. I do get the correct result for the variable a.

I think I'm doing something very wrong somewhere, but I'm clueless about it...

SSC Napoli
  • Hi, thank you. By improving readability do you mean putting the code in "code format"? – SSC Napoli Dec 06 '14 at 14:28
  • Your computation is probably too trivial to be worth parallelizing. You are just adding a constant; the `sqrt(1.0**5)` is computed at compile time. – Vladimir F Героям слава Dec 06 '14 at 14:40
  • @VladimirF actually the do loop's computation time is more or less 2 seconds... Also, I really can't explain why the sequential computing time also changes when I change the number of threads... – SSC Napoli Dec 06 '14 at 15:33
  • @VladimirF thank you. Anyway, in general, when I parallelize a numerical integration, should I expect the computation time to decrease proportionally to 1/Np, where Np is the number of processors? – SSC Napoli Dec 06 '14 at 15:39
  • @VladimirF I'm also starting to think that calling cpu_time while using parallel computing gives rise to strange timing results... – SSC Napoli Dec 06 '14 at 16:38
  • Oh yes! Why didn't I notice? Forget `cpu_time()` for now. Use `system_clock()` or `omp_get_wtime()`. This issue has been raised here MANY times. – Vladimir F Героям слава Dec 06 '14 at 16:41
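Following Vladimir F's suggestion in the last comment, a minimal wall-clock timing sketch might look like the program below. It uses omp_get_wtime() rather than cpu_time(), which on many implementations reports CPU time summed over all threads and can therefore grow with the thread count; the loop body sqrt(dble(i)) is again only an illustrative stand-in for real work.

program timing_sketch

  use omp_lib
  implicit none

  integer, parameter :: bign = 1000000000
  integer :: i
  double precision :: t0, t1, a

  a = 0

  ! omp_get_wtime() measures elapsed wall-clock time, which is the right
  ! quantity when comparing parallel and sequential runs.
  t0 = omp_get_wtime()
  !$OMP PARALLEL DO REDUCTION(+:a)
  do i = 1, bign
    a = a + sqrt(dble(i))
  end do
  !$OMP END PARALLEL DO
  t1 = omp_get_wtime()

  print *, 'parallel result: ', a
  print *, 'wall time (s):   ', t1 - t0

end program timing_sketch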
