I googled a bit to clarify what 10-bit RGB could mean.
On Wikipedia, Color depth – Deep color (30/36/48-bit), I found:
Some earlier systems placed three 10-bit channels in a 32-bit word, with 2 bits unused (or used as a 4-level alpha channel).
which seemed the most reasonable to me.
Going with this, there are 10 bits for Red, 10 bits for Green, and 10 bits for Blue, plus 2 bits unused (or reserved for Alpha).
This leaves two questions open:
Is it stored RGBa, BGRa, or aRGB? (I believe I've seen all of these variations in the past.)
Does the composed value have to be stored little-endian or big-endian?
When this hit me in practical work, I made an implementation based on an assumption, rendered a test pattern, checked whether it looked as expected, and if not, swapped the respective parts in the implementation. Nothing I'm proud of, but, IMHO, it got me the expected results with the least effort.
So, assuming I have a color stored as an RGB triple with component values in range [0, 1], the following function converts it to aRGB (leaving the two alpha bits 0):
uint32_t makeRGB30(float r, float g, float b)
{
    const uint32_t mask = (1u << 10u) - 1u;
    /* convert float -> uint, rounding to nearest instead of truncating */
    const uint32_t rU = (uint32_t)(r * mask + 0.5f),
                   gU = (uint32_t)(g * mask + 0.5f),
                   bU = (uint32_t)(b * mask + 0.5f);
    /* combine and return color components */
    return ((rU & mask) << 20) | ((gU & mask) << 10) | (bU & mask);
}
This results in values with the following bit layout:
aaRRRRRR.RRRRGGGG.GGGGGGBB.BBBBBBBB
A small sample for demo:
#include <stdint.h>
#include <stdio.h>
uint32_t makeRGB30(float r, float g, float b)
{
    const uint32_t mask = (1u << 10u) - 1u;
    /* convert float -> uint, rounding to nearest instead of truncating */
    const uint32_t rU = (uint32_t)(r * mask + 0.5f),
                   gU = (uint32_t)(g * mask + 0.5f),
                   bU = (uint32_t)(b * mask + 0.5f);
    /* combine and return color components */
    return ((rU & mask) << 20) | ((gU & mask) << 10) | (bU & mask);
}
int main(void)
{
    /* samples */
    const float colors[][3] = {
        { 0.0f, 0.0f, 0.0f }, /* black */
        { 1.0f, 0.0f, 0.0f }, /* red */
        { 0.0f, 1.0f, 0.0f }, /* green */
        { 0.0f, 0.0f, 1.0f }, /* blue */
        { 1.0f, 1.0f, 0.0f }, /* yellow */
        { 1.0f, 0.0f, 1.0f }, /* magenta */
        { 0.0f, 1.0f, 1.0f }, /* cyan */
        { 1.0f, 1.0f, 1.0f }  /* white */
    };
    const size_t n = sizeof colors / sizeof *colors;
    for (size_t i = 0; i < n; ++i) {
        const float *color = colors[i];
        const uint32_t rgb = makeRGB30(color[0], color[1], color[2]);
        printf("(%f, %f, %f): %08x\n", color[0], color[1], color[2], rgb);
    }
    /* done */
    return 0;
}
Output:
(0.000000, 0.000000, 0.000000): 00000000
(1.000000, 0.000000, 0.000000): 3ff00000
(0.000000, 1.000000, 0.000000): 000ffc00
(0.000000, 0.000000, 1.000000): 000003ff
(1.000000, 1.000000, 0.000000): 3ffffc00
(1.000000, 0.000000, 1.000000): 3ff003ff
(0.000000, 1.000000, 1.000000): 000fffff
(1.000000, 1.000000, 1.000000): 3fffffff
Live Demo on ideone