
I am trying to convert an image buffer from 16 bits per pixel:

RGB565: rrrrrggggggbbbbb|rrrrr...

to 24 bits per pixel:

RGB888: rrrrrrrrggggggggbbbbbbbb|rrrrrrrr...

I have a reasonably optimized scalar algorithm, but I am curious how this can be done with SSE; it seems like a good candidate. Let's assume the input is 16bpp, memory aligned, and 64x64 pixels so everything fits evenly: a buffer of 64*64*16 bits to be converted into a buffer of 64*64*24 bits.
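For reference, the plain scalar conversion I am starting from looks roughly like this (a simplified sketch, not my actual optimized routine; I store R first, then G, then B):

#include <stddef.h>
#include <stdint.h>

// Scalar reference: expand each RGB565 pixel to three RGB888 bytes by shifting
// every field up to 8 bits (low bits are simply zero-filled, no rounding).
static void rgb565_to_rgb888_scalar(const uint16_t *src, uint8_t *dst, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        uint16_t p = src[i];
        dst[3*i + 0] = (uint8_t)(((p >> 11) & 0x1F) << 3); // R: 5 -> 8 bits
        dst[3*i + 1] = (uint8_t)(((p >>  5) & 0x3F) << 2); // G: 6 -> 8 bits
        dst[3*i + 2] = (uint8_t)(( p        & 0x1F) << 3); // B: 5 -> 8 bits
    }
}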

If I load the initial 16bpp buffer into an __m128i register (and iterate over the buffer), I can process 8 pixels at a time. Using masks and shifts I can extract each component into a different register (pseudo code):

e.g. for the red component r_c (c565 is the input buffer, c888 the output buffer):

__m128i* ptr = (__m128i*)c565;                      // Original 16bpp buffer (RGB565)
__m128i r_mask_16 = _mm_set1_epi16((short)0xF800);  // Red bits of every 16bpp pixel
__m128i r_c = _mm_and_si128(*ptr, r_mask_16);

result:

__m128i r_c = [r0|0|r1|0|....r7|0]
__m128i g_c = [g0|0|g1|0|....g7|0]
__m128i b_c = [b0|0|b1|0|....b7|0]

But if I extract the components manually like this, all the performance is lost:

c888[0] = r_c[0];
c888[1] = g_c[0];
c888[2] = b_c[0];
c888[3] = r_c[1];
...

I suppose the correct way would be to join them in another register and store that to c888 directly, without handling each component separately. But I am not sure how to do that efficiently. Any thoughts?

Note: This question is not a duplicate of Optimizing RGB565 to RGB888 conversions with SSE2. Converting from RGB565 to ARGB8888 is not the same as converting RGB565 to RGB888. That question uses the instructions

punpcklbw
punpckhbw

and these instructions work well when the data comes in pairs (xmm(rb) + xmm(ga) → 2× xmm(rgba)): they take the R/B bytes from one XMM register and the G/A bytes from another and interleave them into two XMM registers of 4-byte RGBA pixels. But in the case I am describing there is no alpha component (see the sketch below for why that pairing forces 4-byte pixels).
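
For illustration, this is roughly how that pairing works when an alpha byte is available (a sketch with my own naming, not code from the linked answer; it assumes the channels have already been split into interleaved B/R and G/A byte vectors):

#include <x86intrin.h>

// br = [B0,R0, B1,R1, ..., B7,R7], ga = [G0,A0, G1,A1, ..., G7,A7]
// The unpack instructions interleave the two inputs byte by byte, so every
// output pixel is necessarily 4 bytes (BGRA) wide.
static inline void interleave_bgra(__m128i br, __m128i ga,
                                   __m128i *bgra_lo, __m128i *bgra_hi)
{
    *bgra_lo = _mm_unpacklo_epi8(br, ga); // [B0,G0,R0,A0, ..., B3,G3,R3,A3]
    *bgra_hi = _mm_unpackhi_epi8(br, ga); // [B4,G4,R4,A4, ..., B7,G7,R7,A7]
}

There is no equivalent pairing that produces 3-byte pixels, which is why I am asking.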

  • Does it really have to be limited to SSE2? – harold Jul 02 '17 at 10:47
  • I think you can convert 2x16 bytes of input data into 3x16 bytes of output data in 20 instructions using SSSE3. You are seriously limiting yourself by SSE2... – stgatilov Jul 21 '17 at 15:42

1 Answer


Unfortunately, SSE doesn't have a good way to write out packed 24-bit integers, so we need to pack the pixel data ourselves.

24bpp pixels occupy 3 bytes each, but an XMM register holds 16 bytes, so we need to process 16 pixels = 48 bytes of output at a time (lcm(3, 16) = 48, i.e. three full XMM stores) to avoid ever storing only part of an XMM register.

First we need to load a vector of 16bpp data and convert it into a pair of vectors of 32bpp data. I have done this by unpacking the data into vectors of uint32, then shifting and masking those vectors to extract the red, green and blue channels. OR'ing the channels together is the final step of the translation to 32bpp. This could be replaced with code from the linked question if that is faster; I haven't measured the performance of my solution.

Once we have converted 16 pixels into vectors of 32bpp pixels, these vectors need to be packed together and written to the result array. I chose to mask out each pixel individually and use _mm_bsrli_si128 and _mm_bslli_si128 to move it to the final position in each of the three result vectors. OR'ing each of these pixels together again gives the packed data, which is written to the result array.

I have tested that this code works, but I haven't done any performance measurements and I would not be surprised if there are faster ways to do this, especially if you allow yourself to use something beyond SSE2.

This writes the 24bpp data with the red channel as MSB.

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <x86intrin.h>

#define SSE_ALIGN 16

int main(int argc, char *argv[]) {
    // Create a small test buffer
    // We process 16 pixels at a time, so size must be a multiple of 16
    size_t buf_size = 64;
    uint16_t *rgb565buf = aligned_alloc(SSE_ALIGN, buf_size * sizeof(uint16_t));

    // Fill it with recognizable data
    for (size_t i = 0; i < buf_size; i++) {
        uint8_t r = 0x1F & (i + 10);
        uint8_t g = 0x3F & i;
        uint8_t b = 0x1F & (i + 20);
        rgb565buf[i] = (r << 11) | (g << 5) | b;
    }

    // Create a buffer to hold the data after translation to 24bpp
    uint8_t *rgb888buf = aligned_alloc(SSE_ALIGN, buf_size * 3*sizeof(uint8_t));

    // Masks for extracting RGB channels
    const __m128i mask_r = _mm_set1_epi32(0x00F80000);
    const __m128i mask_g = _mm_set1_epi32(0x0000FC00);
    const __m128i mask_b = _mm_set1_epi32(0x000000F8);

    // Masks for extracting 24bpp pixels for the first 128b write
    const __m128i mask_0_1st  = _mm_set_epi32(0,          0,          0,          0x00FFFFFF);
    const __m128i mask_0_2nd  = _mm_set_epi32(0,          0,          0x0000FFFF, 0xFF000000);
    const __m128i mask_0_3rd  = _mm_set_epi32(0,          0x000000FF, 0xFFFF0000, 0         );
    const __m128i mask_0_4th  = _mm_set_epi32(0,          0xFFFFFF00, 0,          0         );
    const __m128i mask_0_5th  = _mm_set_epi32(0x00FFFFFF, 0,          0,          0         );
    const __m128i mask_0_6th  = _mm_set_epi32(0xFF000000, 0,          0,          0         ); 
    // Masks for the second write
    const __m128i mask_1_6th  = _mm_set_epi32(0,          0,          0,          0x0000FFFF);
    const __m128i mask_1_7th  = _mm_set_epi32(0,          0,          0x000000FF, 0xFFFF0000);
    const __m128i mask_1_8th  = _mm_set_epi32(0,          0,          0xFFFFFF00, 0         );
    const __m128i mask_1_9th  = _mm_set_epi32(0,          0x00FFFFFF, 0,          0         );
    const __m128i mask_1_10th = _mm_set_epi32(0x0000FFFF, 0xFF000000, 0,          0         );
    const __m128i mask_1_11th = _mm_set_epi32(0xFFFF0000, 0,          0,          0         );
    // Masks for the third write
    const __m128i mask_2_11th = _mm_set_epi32(0,          0,          0,          0x000000FF);
    const __m128i mask_2_12th = _mm_set_epi32(0,          0,          0,          0xFFFFFF00);
    const __m128i mask_2_13th = _mm_set_epi32(0,          0,          0x00FFFFFF, 0         );
    const __m128i mask_2_14th = _mm_set_epi32(0,          0x0000FFFF, 0xFF000000, 0         );
    const __m128i mask_2_15th = _mm_set_epi32(0x000000FF, 0xFFFF0000, 0,          0         );
    const __m128i mask_2_16th = _mm_set_epi32(0xFFFFFF00, 0,          0,          0         );

    // Convert the RGB565 data into RGB888 data
    __m128i *packed_rgb888_buf = (__m128i*)rgb888buf;
    for (size_t i = 0; i < buf_size; i += 16) {
        // Need to do 16 pixels at a time -> least number of 24bpp pixels that fit evenly in XMM register
        __m128i rgb565pix0_raw = _mm_load_si128((__m128i *)(&rgb565buf[i]));
        __m128i rgb565pix1_raw = _mm_load_si128((__m128i *)(&rgb565buf[i+8]));

        // Extend the 16b ints to 32b ints
        __m128i rgb565pix0lo_32b = _mm_unpacklo_epi16(rgb565pix0_raw, _mm_setzero_si128());
        __m128i rgb565pix0hi_32b = _mm_unpackhi_epi16(rgb565pix0_raw, _mm_setzero_si128());
        // Shift each color channel into the correct position and mask off the other bits
        __m128i rgb888pix0lo_r = _mm_and_si128(mask_r, _mm_slli_epi32(rgb565pix0lo_32b, 8)); // Block 0 low pixels
        __m128i rgb888pix0lo_g = _mm_and_si128(mask_g, _mm_slli_epi32(rgb565pix0lo_32b, 5));
        __m128i rgb888pix0lo_b = _mm_and_si128(mask_b, _mm_slli_epi32(rgb565pix0lo_32b, 3));
        __m128i rgb888pix0hi_r = _mm_and_si128(mask_r, _mm_slli_epi32(rgb565pix0hi_32b, 8)); // Block 0 high pixels
        __m128i rgb888pix0hi_g = _mm_and_si128(mask_g, _mm_slli_epi32(rgb565pix0hi_32b, 5));
        __m128i rgb888pix0hi_b = _mm_and_si128(mask_b, _mm_slli_epi32(rgb565pix0hi_32b, 3));
        // Combine each color channel into a single vector of four 32bpp pixels
        __m128i rgb888pix0lo_32b = _mm_or_si128(rgb888pix0lo_r, _mm_or_si128(rgb888pix0lo_g, rgb888pix0lo_b));
        __m128i rgb888pix0hi_32b = _mm_or_si128(rgb888pix0hi_r, _mm_or_si128(rgb888pix0hi_g, rgb888pix0hi_b));

        // Same thing as above for the next block of pixels
        __m128i rgb565pix1lo_32b = _mm_unpacklo_epi16(rgb565pix1_raw, _mm_setzero_si128());
        __m128i rgb565pix1hi_32b = _mm_unpackhi_epi16(rgb565pix1_raw, _mm_setzero_si128());
        __m128i rgb888pix1lo_r = _mm_and_si128(mask_r, _mm_slli_epi32(rgb565pix1lo_32b, 8)); // Block 1 low pixels
        __m128i rgb888pix1lo_g = _mm_and_si128(mask_g, _mm_slli_epi32(rgb565pix1lo_32b, 5));
        __m128i rgb888pix1lo_b = _mm_and_si128(mask_b, _mm_slli_epi32(rgb565pix1lo_32b, 3));
        __m128i rgb888pix1hi_r = _mm_and_si128(mask_r, _mm_slli_epi32(rgb565pix1hi_32b, 8)); // Block 1 high pixels
        __m128i rgb888pix1hi_g = _mm_and_si128(mask_g, _mm_slli_epi32(rgb565pix1hi_32b, 5));
        __m128i rgb888pix1hi_b = _mm_and_si128(mask_b, _mm_slli_epi32(rgb565pix1hi_32b, 3));
        __m128i rgb888pix1lo_32b = _mm_or_si128(rgb888pix1lo_r, _mm_or_si128(rgb888pix1lo_g, rgb888pix1lo_b));
        __m128i rgb888pix1hi_32b = _mm_or_si128(rgb888pix1hi_r, _mm_or_si128(rgb888pix1hi_g, rgb888pix1hi_b));

        // At this point, rgb888pix_32b contains the pixel data in 32bpp format, need to compress it to 24bpp
        // Use the _mm_bs*li_si128(__m128i, int) intrinsic to shift each 24bpp pixel into its final position
        // ...then mask off the other pixels and combine the result together with or
        __m128i pix_0_1st = _mm_and_si128(mask_0_1st,                 rgb888pix0lo_32b     ); // First 4 pixels
        __m128i pix_0_2nd = _mm_and_si128(mask_0_2nd, _mm_bsrli_si128(rgb888pix0lo_32b, 1 ));
        __m128i pix_0_3rd = _mm_and_si128(mask_0_3rd, _mm_bsrli_si128(rgb888pix0lo_32b, 2 ));
        __m128i pix_0_4th = _mm_and_si128(mask_0_4th, _mm_bsrli_si128(rgb888pix0lo_32b, 3 ));
        __m128i pix_0_5th = _mm_and_si128(mask_0_5th, _mm_bslli_si128(rgb888pix0hi_32b, 12)); // Second 4 pixels
        __m128i pix_0_6th = _mm_and_si128(mask_0_6th, _mm_bslli_si128(rgb888pix0hi_32b, 11));
        // Combine each piece of 24bpp pixel data into a single 128b variable
        __m128i pix128_0 = _mm_or_si128(_mm_or_si128(_mm_or_si128(pix_0_1st, pix_0_2nd), pix_0_3rd), 
                                        _mm_or_si128(_mm_or_si128(pix_0_4th, pix_0_5th), pix_0_6th));
        _mm_store_si128(packed_rgb888_buf, pix128_0);

        // Repeat the same for the second 128b write
        __m128i pix_1_6th  = _mm_and_si128(mask_1_6th,  _mm_bsrli_si128(rgb888pix0hi_32b, 5 ));
        __m128i pix_1_7th  = _mm_and_si128(mask_1_7th,  _mm_bsrli_si128(rgb888pix0hi_32b, 6 ));
        __m128i pix_1_8th  = _mm_and_si128(mask_1_8th,  _mm_bsrli_si128(rgb888pix0hi_32b, 7 ));
        __m128i pix_1_9th  = _mm_and_si128(mask_1_9th,  _mm_bslli_si128(rgb888pix1lo_32b, 8 )); // Third 4 pixels
        __m128i pix_1_10th = _mm_and_si128(mask_1_10th, _mm_bslli_si128(rgb888pix1lo_32b, 7 ));
        __m128i pix_1_11th = _mm_and_si128(mask_1_11th, _mm_bslli_si128(rgb888pix1lo_32b, 6 ));
        __m128i pix128_1 = _mm_or_si128(_mm_or_si128(_mm_or_si128(pix_1_6th, pix_1_7th),  pix_1_8th ), 
                                        _mm_or_si128(_mm_or_si128(pix_1_9th, pix_1_10th), pix_1_11th));
        _mm_store_si128(packed_rgb888_buf+1, pix128_1);

        // And again for the third 128b write
        __m128i pix_2_11th = _mm_and_si128(mask_2_11th, _mm_bsrli_si128(rgb888pix1lo_32b, 10));
        __m128i pix_2_12th = _mm_and_si128(mask_2_12th, _mm_bsrli_si128(rgb888pix1lo_32b, 11));
        __m128i pix_2_13th = _mm_and_si128(mask_2_13th, _mm_bslli_si128(rgb888pix1hi_32b,  4)); // Fourth 4 pixels
        __m128i pix_2_14th = _mm_and_si128(mask_2_14th, _mm_bslli_si128(rgb888pix1hi_32b,  3));
        __m128i pix_2_15th = _mm_and_si128(mask_2_15th, _mm_bslli_si128(rgb888pix1hi_32b,  2));
        __m128i pix_2_16th = _mm_and_si128(mask_2_16th, _mm_bslli_si128(rgb888pix1hi_32b,  1));
        __m128i pix128_2 = _mm_or_si128(_mm_or_si128(_mm_or_si128(pix_2_11th, pix_2_12th), pix_2_13th), 
                                        _mm_or_si128(_mm_or_si128(pix_2_14th, pix_2_15th), pix_2_16th));
        _mm_store_si128(packed_rgb888_buf+2, pix128_2);

        // Update pointer for next iteration
        packed_rgb888_buf += 3;
    }

    for (size_t i = 0; i < buf_size; i++) {
        uint8_t r565 = (i + 10) & 0x1F;
        uint8_t g565 = i & 0x3F;
        uint8_t b565 = (i + 20) & 0x1F;
        printf("%2zu] RGB = (%02x,%02x,%02x), should be (%02x,%02x,%02x)\n", i, rgb888buf[3*i+2], 
                rgb888buf[3*i+1], rgb888buf[3*i], r565 << 3, g565 << 2, b565 << 3);
    }
    }

    return EXIT_SUCCESS;
}

EDIT: Here is a second way to compress the 32bpp pixel data into 24bpp. I haven't tested whether it is faster, although I would assume so because it executes fewer instructions and doesn't need a tree of ORs at the end. It is, however, less clear at a glance how it works.

In this version, a combination of shifts and shuffles is used to move each block of pixels together, rather than masking out and shifting each individually. The method used to convert 16bpp into 32bpp is unchanged.

First, I define a helper function that shifts the low uint32 of each 64-bit half of an __m128i left by 8 bits.

__m128i bslli_low_dword_once(__m128i x) {
    // Multiply the low dword of each 64-bit half by 256 to shift it left 8 bits
    const __m128i shift_multiplier = _mm_set1_epi32(1<<8);
    // Mask that keeps the original high dword of each 64-bit half
    const __m128i mask = _mm_set_epi32(0xFFFFFFFF, 0, 0xFFFFFFFF, 0);

    return _mm_or_si128(_mm_and_si128(x, mask), _mm_mul_epu32(x, shift_multiplier));
}
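
As a quick sanity check of what the helper does (a standalone sketch with arbitrary test values, assuming bslli_low_dword_once from above is in scope):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

int main(void) {
    // Lane layout (low to high): 0x00112233, 0x00445566, 0x00AABBCC, 0x00DDEEFF
    __m128i x = _mm_set_epi32(0x00DDEEFF, 0x00AABBCC, 0x00445566, 0x00112233);
    __m128i y = bslli_low_dword_once(x);

    uint32_t out[4];
    _mm_storeu_si128((__m128i *)out, y);
    // The low dword of each 64-bit half (lanes 0 and 2) is shifted left by 8 bits;
    // since the top byte of each test value is zero, nothing spills into lanes 1 and 3.
    // Expected output: 11223300 00445566 AABBCC00 00DDEEFF
    printf("%08" PRIX32 " %08" PRIX32 " %08" PRIX32 " %08" PRIX32 "\n",
           out[0], out[1], out[2], out[3]);
    return 0;
}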

Then the only other changes are to the code to pack the 32bpp data into 24bpp.

// At this point, rgb888pix_32b contains the pixel data in 32bpp format, need to compress it to 24bpp
__m128i pix_0_block0lo = bslli_low_dword_once(rgb888pix0lo_32b);
        pix_0_block0lo = _mm_srli_epi64(pix_0_block0lo, 8);
        pix_0_block0lo = _mm_shufflelo_epi16(pix_0_block0lo, _MM_SHUFFLE(2, 1, 0, 3));
        pix_0_block0lo = _mm_bsrli_si128(pix_0_block0lo, 2);

__m128i pix_0_block0hi = _mm_unpacklo_epi64(_mm_setzero_si128(), rgb888pix0hi_32b);
        pix_0_block0hi = bslli_low_dword_once(pix_0_block0hi);
        pix_0_block0hi = _mm_bslli_si128(pix_0_block0hi, 3);

__m128i pix128_0 = _mm_or_si128(pix_0_block0lo, pix_0_block0hi);
_mm_store_si128(packed_rgb888_buf, pix128_0);

// Do the same basic thing for the next 128b chunk of pixel data
__m128i pix_1_block0hi = bslli_low_dword_once(rgb888pix0hi_32b);
        pix_1_block0hi = _mm_srli_epi64(pix_1_block0hi, 8);
        pix_1_block0hi = _mm_shufflelo_epi16(pix_1_block0hi, _MM_SHUFFLE(2, 1, 0, 3));
        pix_1_block0hi = _mm_bsrli_si128(pix_1_block0hi, 6);

__m128i pix_1_block1lo = bslli_low_dword_once(rgb888pix1lo_32b);
        pix_1_block1lo = _mm_srli_epi64(pix_1_block1lo, 8);
        pix_1_block1lo = _mm_shufflelo_epi16(pix_1_block1lo, _MM_SHUFFLE(2, 1, 0, 3));
        pix_1_block1lo = _mm_bslli_si128(pix_1_block1lo, 6);

__m128i pix128_1 = _mm_or_si128(pix_1_block0hi, pix_1_block1lo);
_mm_store_si128(packed_rgb888_buf+1, pix128_1);

// And again for the final chunk
__m128i pix_2_block1lo = bslli_low_dword_once(rgb888pix1lo_32b);
        pix_2_block1lo = _mm_bsrli_si128(pix_2_block1lo, 11);

__m128i pix_2_block1hi = bslli_low_dword_once(rgb888pix1hi_32b);
        pix_2_block1hi = _mm_srli_epi64(pix_2_block1hi, 8);
        pix_2_block1hi = _mm_shufflelo_epi16(pix_2_block1hi, _MM_SHUFFLE(2, 1, 0, 3));
        pix_2_block1hi = _mm_bslli_si128(pix_2_block1hi, 2);

__m128i pix128_2 = _mm_or_si128(pix_2_block1lo, pix_2_block1hi);
_mm_store_si128(packed_rgb888_buf+2, pix128_2);
  • Yes, the second version looks much faster. `_mm_shufflelo_epi16` / `_mm_shufflehi_epi16` can indeed be useful for messing with pixels when you don't have SSSE3 `_mm_shuffle_epi8`. You might be able to use some AND-masking to avoid having to byte-shift and then bit-shift. (PAND can run on more ports than PSRLQ, but the latency/throughput are similar on most CPUs.) – Peter Cordes Jul 08 '17 at 11:39
  • That's an interesting use of `_mm_mul_epu32` (`pmuludq`). It needs the same port (p0) as a bit-shift on Intel SnB and Haswell, but has much higher latency (5 instead of 1). Still, it can be a throughput win since you don't have to AND to zero the high 64b of each half, and throughput is what matters when doing this independently for many vectors of pixels. – Peter Cordes Jul 08 '17 at 11:42
  • @PeterCordes I didn't notice that the latency of `pmuludq` was so high on Intel CPUs. On AMD's Bulldozer the latency of the sequence `pmuludq + por` is the same as for `pslld + pand + por`, while on K10 the former is 1 cycle better. My dev machine is K10, so I didn't think to check if the latency was significantly different on Intel. – user8243310 Jul 08 '17 at 16:16
  • Oh hmm, that's funky. Slightly lower latency on the multiply, and 2c instead of 1c for other instructions, really makes a difference vs. Intel. AMD Ryzen finally has 1c latency for common/simple vec-int instructions, so it's more like Intel for this, according to Agner Fog's spreadsheet. – Peter Cordes Jul 08 '17 at 16:23
  • Yeah, it looks like K8/K10 is pretty much the only CPU where `pmuludq` could be a latency win. I would imagine OP is targeting SSE2 because it is guaranteed to be present, so it would probably be better to work with the assumption that `pmuludq` has higher latency. If OP had specified why he needs SSE2 then choosing the correct sequence would be easier – user8243310 Jul 08 '17 at 16:30
  • It's still a throughput win on Intel, and they have a big enough out-of-order window to hide the latency and find ILP across loop iterations, so I think `pmuludq` is the correct choice for this even on Intel. – Peter Cordes Jul 08 '17 at 16:34