The following code calculates the inverse of a matrix by the Gauss-Jordan method, unrolled to process two pivot rows per iteration, which halves memory accesses and improves single-thread execution time. The problem I'm having is that the unrolling introduces new data dependencies that prevent me from parallelizing either the k loop or the i loop (the loop guarded by the if (i != k ...) conditions).
for (k = 0; k < size; k += 2)       /* two pivot rows per iteration */
{
    pivot = original[k][k];         /* normalize pivot row k */
    for (j = 0; j < size; j++)
    {
        original[k][j] /= pivot;
        inverse[k][j] /= pivot;
    }
    pivot = original[k + 1][k];     /* clear column k in row k+1 */
    for (i = 0; i < size; i++)
    {
        original[k + 1][i] -= original[k][i] * pivot;
        inverse[k + 1][i] -= inverse[k][i] * pivot;
    }
    pivot = original[k + 1][k + 1]; /* normalize pivot row k+1 */
    for (j = 0; j < size; j++)
    {
        original[k + 1][j] /= pivot;
        inverse[k + 1][j] /= pivot;
    }
    /* eliminate columns k and k+1 from every other row */
    for (i = 0; i < size; i++)
    {
        if (i != k && i != k + 1)
        {
            pivot = original[i][k];
            for (j = 0; j < size; j++)
            {
                original[i][j] -= original[k][j] * pivot;
                inverse[i][j] -= inverse[k][j] * pivot;
            }
        }
        if (i != k + 1)             /* i == k allowed: row k still has an entry in column k+1 */
        {
            pivot = original[i][k + 1];
            for (j = 0; j < size; j++)
            {
                original[i][j] -= original[k + 1][j] * pivot;
                inverse[i][j] -= inverse[k + 1][j] * pivot;
            }
        }
    }
}
I suppose the code will have to be transformed to eliminate the data dependencies, and I am fairly sure it is parallelizable.