
I'm trying to calculate the fraction P of my code which can be parallelized, to apply Amdahl's Law and observe the theoretical maximum speedup.

My code spends most of its time on multiplying matrices (using the library Eigen). Should I consider this part entirely parallelizable?

lodhb

2 Answers


If your matrices are large enough, say larger than about 60×60, then you can compile with OpenMP enabled (e.g., -fopenmp with gcc) and Eigen will parallelize the products for you. However, it is often better to parallelize at the highest level possible, especially if the matrices are not very large. Whether that works depends on whether you can identify independent tasks in your algorithm.

ggael

First, it is worth considering how the Eigen library itself handles the matrix multiplication internally.

For comparison, a matrix (m×n) by vector (n×1) multiplication without Eigen could be written like this:

void mxv(int m, int n, double* a, double* b, double* c)
{ // a = b x c, where b is an m-by-n matrix and c is an n-vector
  for (int i = 0; i < m; i++)
  {
    a[i] = 0.0;
    for (int j = 0; j < n; j++)
      a[i] += b[i*n + j] * c[j];
  }
}

As you can see, no two iterations write to the same element of the result vector a[], and the order in which the elements a[i] for i = 0...m-1 are computed does not affect the correctness of the answer. These computations can therefore be carried out independently over the index i.

A loop like this is therefore entirely parallelizable, and it is relatively straightforward to parallelize such loops with OpenMP.

L30nardo SV.