
I have a lot of trouble finding any developer documentation on how to implement 10-bit RGB output for displays in C, mainly for Xorg/Wayland on Linux and, if possible, compatible with Windows.

Currently, the application I'm working on (darktable) uses uint8_t to output RGB values. What would be the type for a 10-bit uint? Is there any way to check for 10-bit support of the GPU/codec from the code?

Aurélien Pierre
  • Does your display *literally* use 10 bits per pixel, or -- as in 15-bit RGB -- does it have a number of unused bits? – Jongware Nov 04 '18 at 10:39
  • It has to have unused bits. The question is how the three colors are represented. – Matthieu Brucher Nov 04 '18 at 10:44
  • X11, Wayland and Windows are three very different beasts. It doesn't make much sense to even talk about them in one question. Having said that, what API are you having trouble with? – n. m. could be an AI Nov 04 '18 at 10:54
  • Isn't this obvious? A color is stored in a 32-bit value, 10 bits each for R, G and B. If you need a specific component, mask and shift the desired 10 bits. – Jens Nov 04 '18 at 10:56
  • nvidia-settings says 30 bits, the spec sheet of the screen says 30 bits. I have no information about the data representation. I don't have any problem with any API since I have no API at all. I don't know where to begin. I have Googled every possible combination of 10 bits + Xorg + implementation + C + … – Aurélien Pierre Nov 04 '18 at 11:08
  • "the application I'm working on (darktable) is using uint8_t to output RGB values" How is it doing that? – n. m. could be an AI Nov 04 '18 at 11:10
  • it uses cairo and gtk stack and gdk pixbufs – Aurélien Pierre Nov 04 '18 at 11:19
  • OK so these are your APIs. Cairo surfaces may use CAIRO_FORMAT_RGB30, which stores red, green, and blue components in successive 10-bit quantities inside a 32-bit word, in that order from the upper bits down, with the 2 uppermost bits unused (see the sketch after these comments). See https://cairographics.org/manual/cairo-Image-Surfaces.html#cairo-format-t. AFAIK gdk pixbufs do not expose their internal format; they read and write either xpm (not really suitable for large high-colour images) or "real" image formats such as JPEG, so it isn't quite clear what exactly the problem with them is. – n. m. could be an AI Nov 04 '18 at 12:28
  • See https://stackoverflow.com/questions/7704670/what-image-formats-are-supported-by-gdk-pixbuf-gtk-image-by-default for the list of formats supported by gdk pixbufs. – n. m. could be an AI Nov 04 '18 at 12:31
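
To make that concrete, here is a minimal sketch of CAIRO_FORMAT_RGB30 in use, assuming cairo ≥ 1.12 (which introduced that format) and the layout from the cairo documentation; pack_rgb30 is only an illustrative helper, not a cairo API:

#include <cairo.h>
#include <stdint.h>
#include <string.h>

/* pack three 10-bit components as CAIRO_FORMAT_RGB30:
   2 unused bits, then red, green, blue, 10 bits each */
static uint32_t pack_rgb30(uint32_t r, uint32_t g, uint32_t b)
{
  return ((r & 0x3ffu) << 20) | ((g & 0x3ffu) << 10) | (b & 0x3ffu);
}

int main(void)
{
  cairo_surface_t *s = cairo_image_surface_create(CAIRO_FORMAT_RGB30, 64, 64);
  unsigned char *data = cairo_image_surface_get_data(s);
  int stride = cairo_image_surface_get_stride(s);

  cairo_surface_flush(s);                        /* sync before direct access */
  uint32_t pixel = pack_rgb30(1023u, 512u, 0u);  /* full red, half green */
  memcpy(data + 0 * stride + 0 * 4, &pixel, 4);  /* write pixel (0, 0) */
  cairo_surface_mark_dirty(s);                   /* tell cairo about the change */

  cairo_surface_destroy(s);
  return 0;
}

Something like cc demo.c $(pkg-config --cflags --libs cairo) should build it.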

2 Answers


I googled a bit to clarify what 10-bit RGB could mean.

On Wikipedia, under Color Depth – Deep color (30/36/48-bit), I found:

Some earlier systems placed three 10-bit channels in a 32-bit word, with 2 bits unused (or used as a 4-level alpha channel).

which seemed to me the most reasonable.

Going with this, there are 10 bits for Red, 10 bits for Green, and 10 bits for Blue, + 2 bits unused (or reserved for Alpha).

This leaves two questions open:

  1. Is it stored RGBa or BGRa or aRGB? (I believe that I've seen all these variations in the past.)

  2. Is the composed value stored little-endian or big-endian?

When this hit me in practical work, I made an implementation based on an assumption, rendered a test pattern, checked whether it looked as expected, and if not, swapped the respective parts of the implementation. Nothing I'm proud of, but, IMHO, it got me the expected results with the least effort.

So, assuming I have a color stored as an RGB triple with component values in the range [0, 1], the following function converts it to aRGB:

uint32_t makeRGB30(float r, float g, float b)
{
  const uint32_t mask = (1u << 10u) - 1u;
  /* convert float -> uint, rounding to the nearest 10-bit step */
  uint32_t rU = (uint32_t)(r * mask + 0.5f),
           gU = (uint32_t)(g * mask + 0.5f),
           bU = (uint32_t)(b * mask + 0.5f);
  /* combine and return color components */
  return ((rU & mask) << 20) | ((gU & mask) << 10) | (bU & mask);
}

This results in values with the following bit layout:

aaRRRRRR.RRRRGGGG.GGGGGGBB.BBBBBBBB

A small sample for demo:

#include <stdint.h>
#include <stdio.h>

uint32_t makeRGB30(float r, float g, float b)
{
  const uint32_t mask = (1u << 10u) - 1u;
  /* convert float -> uint, rounding to the nearest 10-bit step */
  uint32_t rU = (uint32_t)(r * mask + 0.5f),
           gU = (uint32_t)(g * mask + 0.5f),
           bU = (uint32_t)(b * mask + 0.5f);
  /* combine and return color components */
  return ((rU & mask) << 20) | ((gU & mask) << 10) | (bU & mask);
}

int main(void)
{
  /* samples */
  const float colors[][3] = {
    { 0.0f, 0.0f, 0.0f }, /* black */
    { 1.0f, 0.0f, 0.0f }, /* red */
    { 0.0f, 1.0f, 0.0f }, /* green */
    { 0.0f, 0.0f, 1.0f }, /* blue */
    { 1.0f, 1.0f, 0.0f }, /* yellow */
    { 1.0f, 0.0f, 1.0f }, /* magenta */
    { 0.0f, 1.0f, 1.0f }, /* cyan */
    { 1.0f, 1.0f, 1.0f } /* white */
  };
  const size_t n = sizeof colors / sizeof *colors;
  for (size_t i = 0; i < n; ++i) {
    const float *color = colors[i]; /* const: colors itself is const */
    uint32_t rgb = makeRGB30(color[0], color[1], color[2]);
    printf("(%f, %f, %f): %08x\n", color[0], color[1], color[2], rgb);
  }
  /* done */
  return 0;
}

Output:

(0.000000, 0.000000, 0.000000): 00000000
(1.000000, 0.000000, 0.000000): 3ff00000
(0.000000, 1.000000, 0.000000): 000ffc00
(0.000000, 0.000000, 1.000000): 000003ff
(1.000000, 1.000000, 0.000000): 3ffffc00
(1.000000, 0.000000, 1.000000): 3ff003ff
(0.000000, 1.000000, 1.000000): 000fffff
(1.000000, 1.000000, 1.000000): 3fffffff

Live Demo on ideone
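
For completeness, a sketch of a possible inverse (splitRGB30 is just an illustrative name; it assumes the same aRGB bit layout as makeRGB30 above):

/* extract the 10-bit components from the layout used by makeRGB30 */
void splitRGB30(uint32_t rgb, float *r, float *g, float *b)
{
  const uint32_t mask = (1u << 10u) - 1u;
  *r = (float)((rgb >> 20) & mask) / mask;
  *g = (float)((rgb >> 10) & mask) / mask;
  *b = (float)(rgb & mask) / mask;
}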

Scheff's Cat
  • You are assuming too much: that the unused bits are the most significant ones, and that the red, green, and blue components come in that order. This is actually all hardware-dependent (and has *nothing to do whatsoever* with endianness). X11 has APIs to determine the masks and the shifts needed to compose a pixel out of separate channels (see the sketch after these comments). – n. m. could be an AI Nov 04 '18 at 11:14
  • @n.m. Isn't that what I mentioned above about endianness? – Scheff's Cat Nov 04 '18 at 11:15
  • Again, this has nothing to do with endianness. Endianness is about the order of *bytes* in a word. You are not dealing with bytes at all, but with 32-bit bitmasks only. You are not dealing with bytes even if your bitmasks happen to be all 8-bit wide and start at multiples of 8. A byte is an *addressable* unit. Unless you have e.g. `char*` and `int*` pointing into the *same* word, endianness is utterly irrelevant. – n. m. could be an AI Nov 04 '18 at 11:19
  • @n.m. I'm not that familiar with H/W issues like that. So, can I be sure that the endianness of the graphics H/W will always be the same as that of the CPU? OP mentioned passing color values as bytes (i.e. `uint8_t`). Hence, I thought endianness could be worth mentioning. – Scheff's Cat Nov 04 '18 at 11:21
  • Classical X11 is a client-server system that employs a wire protocol. The protocol can use either big-endian or little-endian format depending on what the client selects at connection start. If the server has different endianness, it will take care of reversing the byte order. Modern direct rendering APIs may require the client to reverse the bytes (I'm less familiar with this part of X11). Still this has nothing to do with the correct order of some 2- and 10-bit wide masks. – n. m. could be an AI Nov 04 '18 at 11:37
  • @n.m. _Still this has nothing to do with the correct order of some 2- and 10-bit wide masks._ Now I get it. Sorry, where did I say it has something to do with it? I made two points about two "orthogonal" issues (at least that's what I intended to write; either I should improve my wording or you got me wrong). However, I agree with you concerning X11 (though I never went deeper than programming it). – Scheff's Cat Nov 04 '18 at 11:42
  • I think I got you wrong at least partially. Now that I'm near a larger screen I see that you separate the RGBa/BGRa/aRGB issue from the endianness issue, which is absolutely correct. Still, the former is better not determined by experimentation. – n. m. could be an AI Nov 04 '18 at 12:28
  • @n.m. Glad we could bury the hatchet. Experimenting with RGB when potential gamma issues are also in question can become very tedious. In my case, I once tried to fill a `QImage`. Although I'd read the doc forwards and backwards, it finally looked like a color negative. Swapping the RGBs a bit soon brought the expected result. (Though, as already mentioned, "nothing I'm proud of".) – Scheff's Cat Nov 04 '18 at 12:33
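
To make the comment about querying X11 concrete, here is a minimal sketch, assuming Xlib and a server that actually offers a 30-bit TrueColor visual; it asks the server for the channel masks instead of guessing:

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>

int main(void)
{
  Display *dpy = XOpenDisplay(NULL);
  if (!dpy) return 1;

  /* ask for a 30-bit TrueColor visual; this fails if the server has none */
  XVisualInfo vinfo;
  if (XMatchVisualInfo(dpy, DefaultScreen(dpy), 30, TrueColor, &vinfo)) {
    /* the server reports where each channel lives within a pixel word */
    printf("red mask:   %08lx\n", vinfo.red_mask);
    printf("green mask: %08lx\n", vinfo.green_mask);
    printf("blue mask:  %08lx\n", vinfo.blue_mask);
  } else {
    puts("no 30-bit TrueColor visual found");
  }

  XCloseDisplay(dpy);
  return 0;
}

The shift for each channel can then be derived by counting the trailing zero bits of its mask. Build with something like cc demo.c -lX11.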

The exact way the color channels are arranged depends on the API. It may very well be planar (i.e. one mono image per channel), it may be packed (i.e. several channels packed into a single word of data), or it may be interleaved (using a different representation for each channel).

However, one thing is for sure: for any channel format that doesn't exactly fit a "native" type, some bit twiddling will have to happen to access it.

To get an idea of how vast that field is, just look at the image formats specified by the very first version of the Vulkan API: https://vulkan.lunarg.com/doc/view/1.0.30.0/linux/vkspec.chunked/ch31s03.html – that document also describes how exactly the bits are arranged for each format.
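
As a sketch of that bit twiddling, assuming the layouts given in that document (the format names are Vulkan's; the packing helpers are only illustrative), two of those packed 32-bit formats differ only in their shifts:

#include <stdint.h>

/* VK_FORMAT_A2R10G10B10_UNORM_PACK32: alpha in bits 30..31,
   red in 20..29, green in 10..19, blue in 0..9 */
uint32_t pack_a2r10g10b10(uint32_t a, uint32_t r, uint32_t g, uint32_t b)
{
  return ((a & 0x3u) << 30) | ((r & 0x3ffu) << 20)
       | ((g & 0x3ffu) << 10) | (b & 0x3ffu);
}

/* VK_FORMAT_A2B10G10R10_UNORM_PACK32: same widths, but blue and
   red swap places, so only the shifts change */
uint32_t pack_a2b10g10r10(uint32_t a, uint32_t r, uint32_t g, uint32_t b)
{
  return ((a & 0x3u) << 30) | ((b & 0x3ffu) << 20)
       | ((g & 0x3ffu) << 10) | (r & 0x3ffu);
}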

datenwolf