The following loop is executed hundreds of times. elma and elmc are both unsigned long (64-bit) arrays, as are res1 and res2.
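For context, here is a minimal sketch of the declarations I am working with. The array lengths and table dimensions below are placeholders, not the real values (the 16 comes from the 4-bit index (x >> l) & 15, the 20 from the inner loop bound):

#include <emmintrin.h>   /* SSE2 intrinsics: _mm_set_epi64x, _mm_xor_si128, ... */

#define POLYLEN 1024     /* placeholder; the real _polylen is set elsewhere */

unsigned long elma[POLYLEN], elmc[POLYLEN];
unsigned long res1[POLYLEN + 20], res2[POLYLEN + 20];   /* room for i + k */
unsigned long _mulpre1[16][20], _mulpre2[16][20];        /* precomputed tables */
unsigned long u1, u2;
int i, k, l, _polylen = POLYLEN;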
unsigned long simdstore[2] __attribute__ ((aligned (16)));  /* _mm_store_si128 requires 16-byte alignment */
__m128i *p, simda, simdb, simdc;
p = (__m128i *) simdstore;

for (i = 0; i < _polylen; i++)
{
    u1 = (elma[i] >> l) & 15;
    u2 = (elmc[i] >> l) & 15;
    for (k = 0; k < 20; k++)
    {
        1.  //res1[i + k] ^= _mulpre1[u1][k];   /* scalar version */
        2.  //res2[i + k] ^= _mulpre2[u2][k];
        3.  _mm_prefetch ((const void *) &_mulpre2[u2][k], _MM_HINT_T0);
        4.  _mm_prefetch ((const void *) &_mulpre1[u1][k], _MM_HINT_T0);
        5.  simda = _mm_set_epi64x (_mulpre2[u2][k], _mulpre1[u1][k]);  /* pack one element from each table */
        6.  _mm_prefetch ((const void *) &res2[i + k], _MM_HINT_T0);
        7.  _mm_prefetch ((const void *) &res1[i + k], _MM_HINT_T0);
        8.  simdb = _mm_set_epi64x (res2[i + k], res1[i + k]);
        9.  simdc = _mm_xor_si128 (simda, simdb);
        10. _mm_store_si128 (p, simdc);
        11. res1[i + k] = simdstore[0];
        12. res2[i + k] = simdstore[1];
    }
}
Within the inner loop, the scalar version of the code (the commented lines 1-2) runs twice as fast as the SIMD code. The cachegrind output (instruction reads) for the numbered lines is shown below.
Line            Ir   I1mr  ILmr
 1     668,460,000      2     2
 2     668,460,000      1     1
 3      89,985,000      1     1
 4      89,985,000      1     1
 5     617,040,000      2     2
 6      44,992,500      0     0
 7      44,992,500      0     0
 8     539,910,000      1     1
 9     128,550,000      0     0
10               .      .     .
11     205,680,000      0     0
12     205,680,000      0     0

(Ir = instruction reads; I1mr and ILmr = L1 and last-level instruction-cache read misses.)
From this output, the commented scalar code requires significantly fewer instructions than the SIMD code: lines 1-2 total about 1.34 billion instruction reads, while lines 3-12 total roughly 1.97 billion (line 10 reported no counts).
How can this code be made faster?
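One direction I am considering, as a sketch: since _mulpre1[u1], _mulpre2[u2], res1, and res2 are each contiguous in memory, the inner loop could be vectorized along k within each array, so every load and store touches two adjacent 64-bit elements. That would remove the per-element _mm_set_epi64x packing and the bounce through simdstore (assuming the iteration count of 20 stays even):

for (k = 0; k < 20; k += 2)   /* process two adjacent k values per iteration */
{
    __m128i m1 = _mm_loadu_si128 ((const __m128i *) &_mulpre1[u1][k]);
    __m128i m2 = _mm_loadu_si128 ((const __m128i *) &_mulpre2[u2][k]);
    __m128i r1 = _mm_loadu_si128 ((const __m128i *) &res1[i + k]);
    __m128i r2 = _mm_loadu_si128 ((const __m128i *) &res2[i + k]);
    _mm_storeu_si128 ((__m128i *) &res1[i + k], _mm_xor_si128 (r1, m1));
    _mm_storeu_si128 ((__m128i *) &res2[i + k], _mm_xor_si128 (r2, m2));
}

Would something along these lines be the right approach, or is there a better way?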