
I have been much troubled by these two questions when implementing my OpenMP programs.

**Q1: Where do a parallel region and the different constructs stop?**

OpenMP seems to promote using {} as the separator between constructs or parallel regions. This can get confusing, or work against its original intention, when it conflicts with the {} used by a for loop, or when we purposely choose not to use braces for code simplicity.

example 1:

#include <stdio.h>
#include <omp.h>

#define sizeA 1000   /* assumed array size; not given in the original snippet */

int main() {
  int i, j;
  int t = 0;
  int a[sizeA];
  for (i = 0; i < sizeA; i++)
    a[i] = 1;

  double elapsed = -omp_get_wtime();

  #pragma omp parallel for reduction(+: t)
  for (j = 0; j < sizeA; j++)
    t = t + a[j];
  //--------------------1-----------------------------------------------------
  #pragma omp master
    printf("The sum of the array %d\n", t);

  //---------------------2-------------------------------------------------------
  elapsed += omp_get_wtime();
  printf("The sum of the array In [REDUCTION] is %d: \n", t);
  printf("The time cost is %f: \n", elapsed);
  //-----------------------------3--------------------------------------
}

In the above example, does the parallel region stop at 1, 2, or 3 (as marked in the program)? According to my test result it stops at location 2, because the section between 2 and 3 is executed only once. I find this rather confusing; why is this?

I am also quite against the use of combined directives like #pragma omp parallel for, which mess up the situation even more. Here is the same code, slightly changed, with {} added around the for loop body:

  #pragma omp parallel for reduction(+: t)
  for (j = 0; j < sizeA; j++)
  { //================ difference added here ================
    t = t + a[j];
    printf("hi, everyone\n");
  } //================ difference added here ================

  //--------------------1-----------------------------------------------------
  #pragma omp master
    printf("The sum of the array %d\n", t);

  //---------------------2-------------------------------------------------------
  elapsed += omp_get_wtime();
  printf("The sum of the array In [REDUCTION] is %d: \n", t);
  printf("The time cost is %f: \n", elapsed);
  //-----------------------------3--------------------------------------
}

In the second example, does the parallel region stop at 1, or at 2? If I want the parallel region to include the #pragma omp master construct, do I have to add extra brackets for the parallel region and, consequently, break up the combined directive #pragma omp parallel for, like the following? Or is there a better way (if there is, I would be super happy)?

  #pragma omp parallel
  {
    #pragma omp for reduction(+: t)
    for (j = 0; j < sizeA; j++)
    {
      t = t + a[j];
      printf("hi, everyone\n");
    }

    #pragma omp master
      bla bla
  }

**Q2: Which kinds of constructs can appear inside the same parallel region?**

As in the first example, #pragma omp for and #pragma omp master share the same parallel region by default; however, anything following the #pragma omp master is not part of it, even though there is no {} explicitly saying so. Which kinds of constructs can share the same parallel region, e.g. worksharing constructs vs. synchronization constructs?

Any reference on this?

Many thanks!

sthbuilder

1 Answer


The statement

#pragma omp parallel for reduction(+: t)

uses the next C statement (here, the for loop) as the extent of the parallel region. Multiple threads are spawned at the start of the region, and they rendezvous at the end of the region.
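A minimal sketch of that extent (the loop bound and the variable names below are my own, not taken from the question):

#include <stdio.h>
#include <omp.h>

int main(){
  int j, t = 0;

  /* the combined construct covers ONLY the for loop that follows */
#pragma omp parallel for reduction(+: t)
  for (j = 0; j < 100; j++)
    t = t + 1;
  /* the threads join here; everything below runs on a single thread again */

  printf("back to one thread, t = %d\n", t);
}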

#pragma omp master

and similar constructs apply to the next statement or block inside the enclosing region; that part is only run by the 'master' thread.
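A small sketch of my own to illustrate this: the master construct covers only the statement right after it, and the next statement is again executed by every thread in the team.

#include <stdio.h>
#include <omp.h>

int main(){
#pragma omp parallel
  {
#pragma omp master
    printf("printed once, by the master thread\n");

    /* the master construct ended with the statement above;
       this line runs on every thread of the team */
    printf("printed by thread %d\n", omp_get_thread_num());
  }
}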

A '#pragma omp for' within a parallel region uses the threads defined by that region; it does not define the region itself, but rather defines a number of jobs to share out. Unless the 'nowait' clause is added there is a 'barrier', or synchronisation point, at the end of the loop. With the 'nowait' clause, code after the loop (e.g. an omp master section) will run at the same time as the loop.
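As an illustration (my own sketch; the array setup, the loop bound and the explicit barrier are assumptions, not code from the question), a thread that finishes its share of a 'nowait' loop early can run ahead, and the reduced value is only guaranteed after a barrier:

#include <stdio.h>
#include <omp.h>
#define N 1000

int main(){
  int j, t = 0, a[N];
  for (j = 0; j < N; j++)
    a[j] = 1;

#pragma omp parallel
  {
    /* 'nowait' removes the implied barrier at the end of the loop */
#pragma omp for nowait reduction(+: t)
    for (j = 0; j < N; j++)
      t = t + a[j];

    /* a thread that finished its chunk early reaches this line
       while other threads may still be inside the loop */
    printf("thread %d passed the loop\n", omp_get_thread_num());

    /* with 'nowait', the reduced value of t is only defined after a barrier */
#pragma omp barrier
#pragma omp master
    printf("sum = %d\n", t);
  }
}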

To emphasize, as soon as you add a

#pragma omp parallel 

you get multiple threads. The single statement or block that follows it is executed by every thread. Another "#pragma" is needed to limit which threads run which parts of the code.

EDIT: For example, on my eight-core machine only the "Hello World" line is printed eight times.

#include <stdio.h>
#include <omp.h>
int main(){
  printf("Starting\n");
#pragma omp parallel
  printf("Hello World\n");
  printf("Split\n"); 
#pragma omp parallel
  {
#pragma omp master
    printf("OOps, really\n");
  }
  printf("Done\n");
}
user3710044
  • this does not explain why, in my example, after #master the printf() runs only once, indicating the parallel region stops – sthbuilder Feb 15 '15 at 08:48
  • In your examples there are two reasons: (1) your parallel region does not reach the #master because it only includes a single statement; (2) even if it did, there is only one master thread, so the '#master' section is only run once by that thread. Both of these are mentioned above. – user3710044 Feb 15 '15 at 08:59