
I am trying to optimize a function using SSE2. I'm wondering whether I can prepare the data for my assembly code better than this. My source data is a bunch of unsigned chars pointed to by pSrcData. I copy it into this array of floats, since my calculation needs to happen in float.


unsigned char *pSrcData = GetSourceDataPointer();

__declspec(align(16)) float vVectX[4];

vVectX[0] = (float)pSrcData[0];
vVectX[1] = (float)pSrcData[2];
vVectX[2] = (float)pSrcData[4];
vVectX[3] = (float)pSrcData[6];

__asm 
{
     movaps xmm0, [vVectX]
     [...]  // do some floating point calculations on float vectors using addps, mulps, etc
}

Is there a quicker way for me to cast every other byte of pSrcData to a float and store it into vVectX?

Thanks!

Warpin

2 Answers


(1) AND with a mask to zero out the odd bytes (PAND)

(2) Unpack from 16 bits to 32 bits (PUNPCKLWD with a zero vector)

(3) Convert 32 bit ints to floats (CVTDQ2PS)

Three instructions.
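
For reference, a minimal sketch of those three steps using SSE2 intrinsics (the helper name is just illustrative, and in a real loop you would hoist the mask out of the loop):

#include <emmintrin.h>  // SSE2 intrinsics

// Hypothetical helper: take bytes 0, 2, 4, 6 of pSrcData and return them as 4 floats.
static __m128 every_other_u8_to_ps(const unsigned char *pSrcData)
{
    __m128i v   = _mm_loadl_epi64((__m128i const *)pSrcData);    // bytes 0..7 in the low 64 bits
    __m128i v16 = _mm_and_si128(v, _mm_set1_epi16(0x00FF));      // (1) PAND: zero the odd bytes
    __m128i v32 = _mm_unpacklo_epi16(v16, _mm_setzero_si128());  // (2) PUNPCKLWD with zero: 16 -> 32 bits
    return _mm_cvtepi32_ps(v32);                                  // (3) CVTDQ2PS: int32 -> float
}

The result ends up directly in an XMM register, so the intermediate vVectX array is no longer needed.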

Paul R

I realise this is a super old thread, but I was searching for code to do this myself. This is my solution, which I think is simpler:

#include <immintrin.h>
#include <stdint.h>

#ifdef __AVX2__
// Modified from http://stackoverflow.com/questions/16031149/speedup-a-short-to-float-cast
// Convert unsigned 8-bit integers to float. length must be a multiple of 8,
// and dest must be 32-byte aligned for _mm256_store_ps.
int avxu8tof32(uint8_t *src, float *dest, int length) {
  int i;

  for (i = 0; i < length; i += 8) {

    // Load 8 8-bit ints into the low half of a 128-bit register
    __m128i v = _mm_loadl_epi64((__m128i const *)(src + i));

    // Zero-extend to 32-bit integers (vpmovzxbd, an AVX2 instruction)
    __m256i v32 = _mm256_cvtepu8_epi32(v);

    // Convert to float
    __m256 vf = _mm256_cvtepi32_ps(v32);

    // Store 8 floats
    _mm256_store_ps(dest + i, vf);
  }
  return 0;
}
#endif

However, benchmarking shows it to be no faster than simply looping over the array in C with compiler optimisation enabled. Maybe the approach will be more useful as the initial stage of a chain of AVX computations.

Chris
  • The OP only wants every *other* `uint8_t` as a `float`. With AVX2, probably the best bet is an `__m128i` `_mm_and_si128` and then `_mm256_cvtepu16_epi32` on that (see the sketch after these comments). Or if you're going to later pack back to `uint8_t`, maybe a 256b `and` and then unpack lo/hi (against zero) in-lane to go from 16b to 32b integer elements before conversion to FP. That avoids any lane-crossing shuffles (like `vpmovzx ymm`), and will avoid needing the inverse shuffle for packing again. – Peter Cordes Sep 21 '17 at 07:39
  • And yes, you'd want to do this on the fly before something you manually vectorized. Compilers can auto-vectorize simple copy+convert loops. – Peter Cordes Sep 21 '17 at 07:42
  • Thanks Peter - I missed that entirely – Chris Sep 22 '17 at 09:58
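
A rough, untested sketch of the AVX2 variant suggested in the first comment (the helper name is illustrative, and it assumes 16 readable source bytes per call):

#include <immintrin.h>
#include <stdint.h>

#ifdef __AVX2__
// Keep every other byte of 16 source bytes and convert the 8 kept values to float.
static __m256 every_other_u8_to_ps_avx2(const uint8_t *src)
{
    __m128i v   = _mm_loadu_si128((__m128i const *)src);      // bytes 0..15
    __m128i v16 = _mm_and_si128(v, _mm_set1_epi16(0x00FF));   // zero the odd bytes -> 8 x uint16
    __m256i v32 = _mm256_cvtepu16_epi32(v16);                  // widen to 8 x int32 (vpmovzxwd ymm, xmm)
    return _mm256_cvtepi32_ps(v32);                            // int32 -> float
}
#endif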