
We are currently using GL_SRGB8_ALPHA8 for FBO color correction, but it causes significant color banding in darker scenes.

Is there a version of GL_SRGB8_ALPHA8 that has 10 bits per color channel (e.g. GL_RGB10_A2)? If not, what workarounds are there for this use case?

The attached image has had its contrast increased to make the banding more visible, but it's still noticeable in the source as well.

[screenshot: dark scene showing color banding, contrast-enhanced]
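
For reference, the setup looks roughly like this (a minimal sketch; `width`, `height`, and error handling are illustrative, not our actual code):

```c
/* Sketch: FBO with an sRGB-encoded 8-bit color attachment.
 * GL_SRGB8_ALPHA8 is 8 bits per channel; core OpenGL defines no
 * sRGB-encoded 10-bit analogue of GL_RGB10_A2. */
GLuint fbo, color;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenTextures(1, &color);
glBindTexture(GL_TEXTURE_2D, color);
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8,
             width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, color, 0);
glEnable(GL_FRAMEBUFFER_SRGB); /* encode linear shader output to sRGB on write */
```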

Anton Duzenko
  • It depends on what you are doing (and you didn't specify in the question). For 10-bit output you need a 10-bit video card. For video rendering, floating-point formats are used, but that rendering is not done in real time, or not on your own computer. – Giacomo Catenazzi Sep 23 '19 at 14:30
  • How are you using it? You say "for FBO colour correction". Does that mean you've already rendered the scene to a normal 8-bit texture and now you're doing the colour correction as a separate step? – user253751 Oct 02 '19 at 16:03
  • @immibis I'm interested in a discussion but not sure if question comments are a good place for this. AFAIU the color banding problem is not directly related to sRGB. It's something we have to suffer on all 8-bit-per-component displays? Even if we have 16 bits of internal precision in the FBO, it will be lost when transferring the picture to the physical monitor. – Anton Duzenko Oct 03 '19 at 07:06
  • @AntonDuzenko I ask because I *also* tried doing color-correction by starting from an 8-bit linear buffer, then converting it to sRGB. An 8-bit linear buffer simply doesn't have enough precision for the darker colours, compared to sRGB, and if you're doing it this way, banding is unavoidable unless you either render directly to sRGB, or increase the bit depth, or use a floating-point linear buffer. – user253751 Oct 04 '19 at 07:33
  • @immibis Correct me if I'm wrong here. Let's agree that an sRGB FBO gives better precision for darker colors. Say a linear color space allows grades of 0, 1, 2, 3...255, while a non-linear color space can do e.g. 0, 0.3, 0.6, 0.9, etc. That's all good and working. But in the very end I need to present this high-precision frame to the DWM/monitor. Am I wrong that this precision is lost when I convert my high-precision internal framebuffer to the color buffer that's actually sent to the video card output? Doesn't e.g. HDMI output deal in linear space? (Not related to the original question) – Anton Duzenko Oct 04 '19 at 10:50
  • The output to your monitor is sRGB. The problem is when you convert from linear to sRGB, you get colour banding because linear 0 is sRGB 0, and linear 1 is sRGB 10 (let's say), so there is nothing in between those. There's no way to get an sRGB 1, 2, 3, ... or 9 (assuming that 1 maps to 10). If this is the problem, then the colour banding isn't because of the sRGB conversion - the colour banding happened when you rendered the picture. – user253751 Oct 04 '19 at 13:59
  • @immibis This discussion is not related to original question, so never mind it. I need to understand the basics first. You say output to my monitor is sRGB. Why do I need to convert to linear at all? Why not render in sRGB FBO and keep it all the way while sending to Monitor in that color space? – Anton Duzenko Oct 05 '19 at 06:42
  • Usually people render in linear space because they get better results when you combine multiple textures together. The alpha compositing equation only works in linear space. [See Wikipedia's image examples](https://en.wikipedia.org/wiki/Alpha_compositing#Composing_alpha_blending_with_gamma_correction). The image "without gamma correction" (i.e. done in sRGB) has darker patches where two colours are blended. The version "with gamma correction" (i.e. in linear space) is a consistent brightness. This is one of the "secret tips" you need to know to make your graphics look good, IMO. (See the worked blending example after these comments.) – user253751 Oct 06 '19 at 17:06
  • I still think that we need more details about what you are doing in order to answer the question accurately. – user253751 Oct 06 '19 at 17:08
  • @immibis I'm not really doing anything yet - just experimenting with how I can reduce color banding. Does it have anything to do with sRGB? If not, what is it used for? You seem to call linear space 'gamma-corrected', huh? – Anton Duzenko Oct 08 '19 at 07:56
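
To make the linear-vs-sRGB blending point from the comments concrete, here is a small self-contained sketch (plain C; the values are illustrative, and the conversion functions follow the standard sRGB transfer curve):

```c
#include <math.h>
#include <stdio.h>

/* Standard sRGB <-> linear transfer functions (IEC 61966-2-1). */
static float srgb_to_linear(float c) {
    return (c <= 0.04045f) ? c / 12.92f
                           : powf((c + 0.055f) / 1.055f, 2.4f);
}

static float linear_to_srgb(float c) {
    return (c <= 0.0031308f) ? c * 12.92f
                             : 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
}

int main(void) {
    /* 50% blend of mid-grey over white, both given as sRGB-encoded values. */
    float src = 0.5f, dst = 1.0f, alpha = 0.5f;

    /* Naive: blend the encoded values directly. */
    float naive = alpha * src + (1.0f - alpha) * dst;

    /* Correct: decode to linear, blend, re-encode. */
    float correct = linear_to_srgb(alpha * srgb_to_linear(src)
                                 + (1.0f - alpha) * srgb_to_linear(dst));

    printf("blended in sRGB space:   %.3f\n", naive);   /* 0.750 (too dark) */
    printf("blended in linear space: %.3f\n", correct); /* ~0.802 */
    return 0;
}
```

This is the darkening visible in the Wikipedia comparison user253751 linked: blending the encoded values directly skews the result toward dark.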

2 Answers


On the surface, Direct3D 9 seems to support this, because it doesn't encode sRGB into texture formats at all. Instead it sets the sampler state to decode to linear space, and I can't see how that could fail to work properly on D3D9, or else textures would be filtered incorrectly; I don't know how it would implement it otherwise. Even with GL_EXT_texture_sRGB_decode it was decided (there's a note in the extension) not to enable this D3D9-like behavior.

As usual, OpenGL seems to be chasing the ghost of this now-old API. It could simply offer something like D3DSAMP_SRGBTEXTURE and it would have parity with D3D9; presumably the hardware already implements it. Any banding may also depend on the monitor, since the image ultimately has to be down-converted to the monitor's color depth, which is probably much lower than 10 bits.
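
For reference, the toggle the extension does provide looks like this (a sketch, assuming GL_EXT_texture_sRGB_decode is supported and `tex` is a hypothetical texture with an sRGB internal format):

```c
glBindTexture(GL_TEXTURE_2D, tex);
/* Default behavior: sRGB texels are decoded to linear when sampled. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SRGB_DECODE_EXT, GL_DECODE_EXT);
/* Opt out: sample the raw, still-encoded values instead. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SRGB_DECODE_EXT, GL_SKIP_DECODE_EXT);
```

Note the direction: the extension can only *skip* decoding on an sRGB-format texture; unlike D3DSAMP_SRGBTEXTURE, it cannot turn decoding on for a linear-format texture.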

Mick P.

In the end we went with a linear 16-bit floating-point format. I suspect drivers use something like it internally for sRGB anyway. At any rate, as @mick-p noted, we're always limited by the display's color depth.
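
For completeness, the attachment we switched to looks roughly like this (a sketch; `width`, `height`, and the `color` texture handle are placeholders):

```c
/* Linear half-float color attachment in place of GL_SRGB8_ALPHA8. */
glBindTexture(GL_TEXTURE_2D, color);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F,
             width, height, 0, GL_RGBA, GL_HALF_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, color, 0);
```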

Anton Duzenko