So I've got some code that's intended to generate a linear gradient between two input colors:
struct color {
    float r, g, b, a;
};

// Straight per-channel lerp: ratio is 0.0 at c1 and 1.0 at c2.
color produce_gradient(const color & c1, const color & c2, float ratio) {
    color output_color;
    output_color.r = c1.r + (c2.r - c1.r) * ratio;
    output_color.g = c1.g + (c2.g - c1.g) * ratio;
    output_color.b = c1.b + (c2.b - c1.b) * ratio;
    output_color.a = c1.a + (c2.a - c1.a) * ratio;
    return output_color;
}
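For context, here's roughly how I drive it on the host side (a minimal sketch; the scanline buffer, its width, and the red/green endpoints are placeholders, and it assumes width >= 2):

void fill_scanline(color scanline[], int width) {
    color c1 = {1.0f, 0.0f, 0.0f, 1.0f};  // opaque red
    color c2 = {0.0f, 1.0f, 0.0f, 1.0f};  // opaque green
    for (int x = 0; x < width; ++x) {
        float ratio = static_cast<float>(x) / (width - 1);  // 0.0 at the left edge, 1.0 at the right
        scanline[x] = produce_gradient(c1, c2, ratio);
    }
}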
I've written semantically identical code in my shaders as well.
The problem is that this kind of code produces "dark bands" in the middle where the colors meet. The channel values are gamma-encoded for display, so interpolating them directly means blending in the encoded space rather than in linear light, and the screen's nonlinear response makes the blended midpoints come out darker than they should.
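To make that concrete: blending black (0.0) and white (1.0) at ratio 0.5 stores 0.5 in the framebuffer, but a display with a gamma of roughly 2.2 shows that value at about 0.5^2.2 ≈ 0.22 of full brightness, so the middle of the gradient looks far darker than the perceptual halfway point.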
So the questions I have are:
- Do I need to correct for gamma in the host function, the device function, both, or neither?
- What's the best way to correct the function to handle gamma properly? Does the code below convert the colors appropriately?
Code:
#include <cmath>

// Decode each channel from gamma 2.2 to linear light, lerp there, then re-encode.
color produce_gradient(const color & c1, const color & c2, float ratio) {
    color output_color;
    output_color.r = std::pow(std::pow(c1.r, 2.2f) + (std::pow(c2.r, 2.2f) - std::pow(c1.r, 2.2f)) * ratio, 1.0f / 2.2f);
    output_color.g = std::pow(std::pow(c1.g, 2.2f) + (std::pow(c2.g, 2.2f) - std::pow(c1.g, 2.2f)) * ratio, 1.0f / 2.2f);
    output_color.b = std::pow(std::pow(c1.b, 2.2f) + (std::pow(c2.b, 2.2f) - std::pow(c1.b, 2.2f)) * ratio, 1.0f / 2.2f);
    output_color.a = std::pow(std::pow(c1.a, 2.2f) + (std::pow(c2.a, 2.2f) - std::pow(c1.a, 2.2f)) * ratio, 1.0f / 2.2f);
    return output_color;
}
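If it helps, here's the same math factored through named conversion helpers; the helper names are my own, and the plain 2.2 power is only an approximation of the real sRGB transfer curve (which has a short linear segment near black):

#include <cmath>

// Approximate sRGB decode/encode using a pure power law.
static float srgb_to_linear(float c) { return std::pow(c, 2.2f); }
static float linear_to_srgb(float c) { return std::pow(c, 1.0f / 2.2f); }

// Decode to linear light, lerp, then re-encode, one channel at a time.
static float lerp_channel(float a, float b, float ratio) {
    return linear_to_srgb(srgb_to_linear(a) + (srgb_to_linear(b) - srgb_to_linear(a)) * ratio);
}

color produce_gradient(const color & c1, const color & c2, float ratio) {
    color output_color;
    output_color.r = lerp_channel(c1.r, c2.r, ratio);
    output_color.g = lerp_channel(c1.g, c2.g, ratio);
    output_color.b = lerp_channel(c1.b, c2.b, ratio);
    output_color.a = lerp_channel(c1.a, c2.a, ratio);
    return output_color;
}

Factoring it this way would also make it easy to swap in exact sRGB encode/decode functions later.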
EDIT: For reference, here's a related post that shows what the "bug" looks like in practice: https://graphicdesign.stackexchange.com/questions/64890/in-gimp-how-do-i-get-the-smudge-blur-tools-to-work-properly