Given two RGB colors stored as 32-bit ints (the 8 bits of alpha can be ignored or set to 0xff), what's the fastest way to blend them using a third integer in the range 0-255?

Here is a naive implementation which simply interpolates the channels as ints:
uint32_t rgb_blend(uint32_t src, uint32_t dst, uint32_t blend) {
    const uint32_t iblend = 255 - blend;
    union color {
        uint32_t u32;
        struct { uint8_t
#if defined(__LITTLE_ENDIAN__)
            a, b, g, r;
#else
            r, g, b, a;
#endif
        } u8;
    } out;
    const union color *s = (const void *)&src, *d = (const void *)&dst;
    out.u8.r = (uint8_t)((((uint32_t)s->u8.r * iblend) + ((uint32_t)d->u8.r * blend)) / 255);
    out.u8.g = (uint8_t)((((uint32_t)s->u8.g * iblend) + ((uint32_t)d->u8.g * blend)) / 255);
    out.u8.b = (uint8_t)((((uint32_t)s->u8.b * iblend) + ((uint32_t)d->u8.b * blend)) / 255);
    out.u8.a = 0xff;
    return out.u32;
}
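For reference, the same blend can also be written with shifts and masks instead of the union, which sidesteps the endianness check entirely since shifts operate on the value rather than on its byte layout in memory (a sketch, assuming the 0xRRGGBBAA channel layout the struct ordering above implies; `rgb_blend_shift` is a hypothetical name):

```c
#include <stdint.h>

/* Shift-based equivalent of the union version, assuming a
 * 0xRRGGBBAA layout (matching the struct ordering above).
 * No endianness test is needed: shifts work on the integer
 * value, not on its byte representation in memory. */
static uint32_t rgb_blend_shift(uint32_t src, uint32_t dst, uint32_t blend)
{
    const uint32_t iblend = 255 - blend;
    const uint32_t r = (((src >> 24) & 0xff) * iblend + ((dst >> 24) & 0xff) * blend) / 255;
    const uint32_t g = (((src >> 16) & 0xff) * iblend + ((dst >> 16) & 0xff) * blend) / 255;
    const uint32_t b = (((src >>  8) & 0xff) * iblend + ((dst >>  8) & 0xff) * blend) / 255;
    return (r << 24) | (g << 16) | (b << 8) | 0xff; /* force alpha to 0xff */
}
```

Blending pure red (0xff0000ff) toward pure blue (0x0000ffff) at blend = 128 gives 0x7f0080ff, matching the truncating division in the naive version.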
This doesn't have to be totally accurate; some rounding bias in exchange for extra performance is fine. For example, (i / 255) rounds down, but it can be replaced with (((i * 2) + 255) / (2 * 255)), which rounds at 0.5 while only using integer operations.
Notes:
- This question is similar, but I am asking about RGB colors, not RGBA alpha blending.
- While the question isn't architecture specific, I would be interested in common architectures, e.g. AMD64 and ARM64.