
I'm trying to do multi-threaded programming on the CPU using OpenMP. I have lots of for loops which are good candidates to be parallelized. I attached a part of my code here. When I use the first #pragma omp parallel for reduction, my code is faster, but when I try to use the same directive to parallelize other loops it gets slower. Does anyone have any idea why?

.
.
.

        omp_set_dynamic(0);
        omp_set_num_threads(4);

        float *h1=new float[nvi];
        float *h2=new float[npi];

        while(tol>0.001)
        {
            std::fill_n(h2, npi, 0);
            int k,i;
            float h222=0;
            #pragma omp parallel for private(i,k) reduction (+: h222)
            for (i=0; i<npi; ++i)
            {
                int p1 = ppi[i];
                int m = frombus[p1];
                for (k=0; k<N; ++k)
                {
                    h222 += v[m-1]*v[k]*(G[m-1][k]*cos(del[m-1]-del[k])
                                       + B[m-1][k]*sin(del[m-1]-del[k]));
                }
                h2[i] = h222;
            }

            //*********** h3*****************

            std::fill_n(h3, nqi, 0);
            float h333=0;

            #pragma omp parallel for private(i,k) reduction (+: h333)
            for (int i=0; i<nqi; ++i)
            {
                int q1 = qi[i];
                int m = frombus[q1];
                for (int k=0; k<N; ++k)
                {
                    h333 += v[m-1]*v[k]*(G[m-1][k]*sin(del[m-1]-del[k])
                                       - B[m-1][k]*cos(del[m-1]-del[k]));
                }
                h3[i] = h333;
            }
            .
            .
            .
       }
hadis
    It might help a bit if you fixed your formatting first. – Mysticial Sep 09 '13 at 18:23
  • @Mysticial best I could do. Hope it brings even a little clarity =P – WhozCraig Sep 09 '13 at 18:25
  • 1
    If it's C++, tag as such. – P.P Sep 09 '13 at 18:28
  • Yes, this probably isn't C. – Jens Gustedt Sep 09 '13 at 18:54
    Please clean up your code before posting here. Your second `omp parallel` is weird. You have duplicate definitions of `i` and `k` in different scopes. It is probably better to declare them as `for`-local variables, but then you don't need all that `private(i,k)` stuff. – Jens Gustedt Sep 09 '13 at 18:56
  • In addition, you have a dependency on `i` on `h222`. `h222` for `i=1` depends on the value of `h222` at `i=0`. You can't parallelize like that. You need to change your algorithm. You could, for example, write out the values of `h222` in parallel for each `i` and then take care of the dependency in serial afterwards. – Z boson Sep 09 '13 at 20:16

2 Answers


I don't think your OpenMP code gives the same result as the code without OpenMP. Let's concentrate on the h2[i] part of the code (the h3[i] part has the same logic). There is a dependency of h2[i] on the index i (i.e. h2[1] = h2[1] + h2[0]), so the OpenMP reduction you're doing won't give the correct result. If you want to do the reduction with OpenMP, you need to do it on the inner loop like this:

float h222 = 0;
for (int i=0; i<npi; ++i) {
    int p1=ppi[i];
    int m = frombus[p1];        
    #pragma omp parallel for reduction(+:h222)
    for (int k=0;k<N; ++k) {
        h222 +=  v[m-1]*v[k]*(G[m-1][k]*cos(del[m-1]-del[k]) 
                         + B[m-1][k]*sin(del[m-1]-del[k]));
    }
    h2[i] = h222;
}

However, I don't know if that will be very efficient. An alternative method is to fill h2[i] in parallel over the outer loop without a reduction, and then take care of the dependency serially. Even though that serial loop is not parallelized, it should have only a small effect on the computation time, since it does not contain the inner loop over k. This gives the same result with and without OpenMP and should still be fast.

#pragma omp parallel for
for (int i=0; i<npi; ++i) {
    int p1=ppi[i];
    int m = frombus[p1];
    float h222 = 0;
    for (int k=0;k<N; ++k) {
        h222 +=  v[m-1]*v[k]*(G[m-1][k]*cos(del[m-1]-del[k]) 
                         + B[m-1][k]*sin(del[m-1]-del[k]));
    }
    h2[i] = h222;
}
//take care of the dependency serially
for(int i=1; i<npi; i++) {
    h2[i] += h2[i-1];
}    
Z boson

Keep in mind that creating and destroying threads is a time-consuming process; clock the execution time of the process and see for yourself. You only use the parallel reduction twice, which may be faster than a serial reduction, but the initial cost of creating the threads may still be higher. Try parallelizing the outermost loop (if possible) to see whether you obtain a speedup.

namu