How can I apply Color/Hue/Saturation blend modes the way Photoshop does? I know that GIMP works with the HSV/HSL color models, but Photoshop uses CIE LCH, and the Hue/Color/Saturation blends differ between the HSV/HSL and LCH color spaces. The original question (How does Photoshop blend two images together?) deals with the HSV/HSL color models.

[Figure: the LCH color model]

[Figure: the HSV color model]

I want to use the LCH color model. The RGB-to-LCH algorithm (RGB → XYZ → LAB → LCH) is described here: http://dev.vkdev.pro/2013/01/lch-color-model-photoshop-blend-modes.html

As you can see, the algorithm involves a lot of multiplication, division, sqrt, atan2, sin, cos, pow... and it runs very slowly on Android devices.

How can I do the conversion from RGB to LCH quickly?

monyag

2 Answers


There are a few things you can do to improve the speed of the conversions. The gamma corrections can be very closely approximated by just doing a sqrt() to encode and a square to decode. Because of the linear-segment offset in the exact formula, the approximation is very close, and I happen to know of several professional applications that use this shortcut. On a device which doesn't support color calibration, like a phone, it will be more than close enough.
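A minimal GLSL sketch of that shortcut (the exact sRGB curve uses a 2.4 exponent plus a linear toe; this deliberately ignores both):

```glsl
// Approximate sRGB gamma with a square/sqrt pair.
// Exact decode is pow((c + 0.055) / 1.055, 2.4) above the linear toe;
// c * c is a cheap stand-in that stays close across the whole range.
vec3 srgbToLinearApprox(vec3 c) { return c * c; }   // decode (expand)
vec3 linearToSrgbApprox(vec3 c) { return sqrt(c); } // encode (compress)
```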

In Rgb2Xyz(), the multiplies and adds are just a matrix multiply. If you're doing the conversion on the GPU (which you should be), this would be a single line of glsl code.
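For example (a sketch using the standard sRGB/D65 matrix; the constants in the linked article may differ slightly):

```glsl
// Linear RGB -> XYZ as one mat3 multiply.
// GLSL mat3 constructors are column-major, so each triple below is
// the X, Y, Z contribution of one RGB channel.
const mat3 RGB_TO_XYZ = mat3(
    0.4124, 0.2126, 0.0193,  // R column
    0.3576, 0.7152, 0.1192,  // G column
    0.1805, 0.0722, 0.9505); // B column

vec3 rgbToXyz(vec3 rgb) { return RGB_TO_XYZ * rgb; }
```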

In fact, working on the GPU would make most of these functions into far fewer lines of code because it's a highly parallel vector processor. So the 3 divides in Xyz2Lab() become a single line of glsl. You can probably make the 3 subtract and multiplies into 1 or 2 lines of code, as well. And it has functions for sqrt(), atan2(), sin(), cos(), pow(), etc.
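A sketch of that vectorization, assuming a D65 white point and skipping the linear toe of the CIE f() function (a common shader shortcut):

```glsl
const vec3 WHITE_D65 = vec3(0.9505, 1.0, 1.089); // assumed reference white

vec3 xyzToLab(vec3 xyz) {
    // The three divides and three cube roots collapse to one line each.
    vec3 f = pow(xyz / WHITE_D65, vec3(1.0 / 3.0));
    return vec3(116.0 * f.y - 16.0,   // L
                500.0 * (f.x - f.y),  // a
                200.0 * (f.y - f.z)); // b
}

vec3 labToLch(vec3 lab) {
    // Built-in length() and two-argument atan() replace sqrt/atan2.
    return vec3(lab.x, length(lab.yz), atan(lab.z, lab.y));
}
```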

See here for the docs on glsl.

user1118321
  • I'm new to this. What would porting to OpenGL look like? Can I leave the code unchanged and just specify that I want to use the GPU, or does it all have to be rewritten using gl* methods? My code looks like this: http://www.ibm.com/developerworks/opensource/tutorials/os-androidndk/section5.html – monyag Jan 19 '13 at 16:08
  • It would involve setting up an OpenGL context and drawing a single textured quad with your fragment shader applied. It's a little too involved to describe in a comment, but there are many tutorials on it. [This one](http://www.benve.org/Download/Introduction%20to%20GLSL.pdf) looks like it's pretty decent. You might want to start with a basic OpenGL tutorial for just drawing objects first, then move on to the glsl tutorials. – user1118321 Jan 20 '13 at 01:12

I know this is an old post, but I like to answer these things as best I can when I have the opportunity, just so the next guy has better help than I had....

In my own crusade to implement the same blending in a different environment, I came across an issue with the naive math that's offered in a lot of suggestions. Unless you're using the LCH space for other image operations, it may not be worth the cost of conversion just for this. You're right; there's lots of math involved in the conversion and if you're going to convert right back to RGB, you're going to need to handle the out-of-gamut points gracefully. As far as I know, this means doing data truncation prior to conversion back to RGB. Truncating the negative and supramaximal pixel values after conversion will wash out black and white regions of the image. The problem is that truncating prior to conversion means trying to calculate the maximum chroma for a given H and L. You can precalculate and use a LUT, but it's not that much faster.
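If you do take the LUT route, the lookup side might look something like this in GLSL (maxChromaLut is a hypothetical 2D texture, built offline by scanning the RGB gamut boundary for the largest in-gamut chroma at each hue/lightness pair):

```glsl
uniform sampler2D maxChromaLut; // hypothetical, app-provided (H x L)

// lch = (L in 0..100, C, H in radians)
vec3 clampLchToGamut(vec3 lch) {
    float h01  = fract(lch.z / 6.2831853); // hue wrapped to [0,1)
    float cmax = texture2D(maxChromaLut, vec2(h01, lch.x / 100.0)).r;
    lch.y = min(lch.y, cmax);              // truncate chroma into gamut
    return lch;
}
```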

If you want a color model which offers good (better than HSL) color/brightness separation and behaves reasonably similarly (for noncritical purposes), I suggest using YPbPr instead. It's a single affine transformation, so it'll be much faster, and the bounding calculations are simple, as the faces of the RGB gamut remain planar. The bounding calculations are just line-plane intersection math (and even these can be rolled into a LUT if you want more speed). Again, it's not as good as a LAB or LUV blend; if the foreground is highly saturated, you may notice as much, but if speed and simplicity are important, it might be worth considering.
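For reference, a sketch of that affine transform as GLSL matrices, using the standard Rec.601 constants (two mat3 multiplies, no trig or roots):

```glsl
// R'G'B' -> YPbPr (Rec.601). Columns are the contributions of
// R, G, B respectively (GLSL matrices are column-major).
const mat3 RGB_TO_YPBPR = mat3(
    0.299, -0.168736,  0.5,
    0.587, -0.331264, -0.418688,
    0.114,  0.5,      -0.081312);

// YPbPr -> R'G'B' (exact inverse of the above).
const mat3 YPBPR_TO_RGB = mat3(
    1.0,    1.0,       1.0,
    0.0,   -0.344136,  1.772,
    1.402, -0.714136,  0.0);
```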

Something that might be worth researching as an example is HuSL (www.husl-colors.org). HuSL is a version of CIELUV wherein the chroma space is normalized to the aforementioned boundary of the RGB gamut. The WMaxima worksheet available there should give you some ideas as to how to calculate the maximal chroma for truncation. While using HuSL for color blends works better than HSL, the normalization distorts the representation of chroma (S), so the uniformity of LUV (the reason we went through all those fractional exponentiations) is lost. The implementations here are also based on CIELUV, not CIELAB, though the consequences of that probably don't matter much unless you're doing image editing.

Similarly, if examples are of any help, I have an image blending function on the MathWorks FEX which offers these blends, as well as the HuSL and bounded LCHab/uv conversion tools (and the max-chroma LUTs for LCHab and LCHuv). It's a MATLAB script, but the mathematical concept is the same.

If you don't want to deal with bounding chroma calculations at all, you might consider doing this:

  1. Extract the luma from the image. Just do a weighted average using whatever definition of luma you want. [0.299 0.587 0.114]*[R G B]' is Rec.601 iirc.
  2. Do an HS channel swap in HSL (GIMP uses HSL, not HSV or HSI). Convert back to RGB.
  3. Convert to your favorite luma-chroma model (e.g. YPbPr) and replace Y' with the original luma. Convert back to RGB.

This is basically the programmatic approach to a common luma preservation technique using layers in an image editor. This produces surprisingly good results and uses common conversion tools.
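A rough GLSL sketch of those three steps, assuming rgbToHsl()/hslToRgb() helpers (not shown) and the Rec.601 YPbPr matrices from the earlier snippet:

```glsl
vec3 blendColorPreserveLuma(vec3 base, vec3 blend) {
    // 1. Extract the base layer's luma (Rec.601 weights).
    float luma = dot(base, vec3(0.299, 0.587, 0.114));

    // 2. HS swap in HSL: hue and saturation from the blend layer,
    //    lightness from the base. Helper functions assumed.
    vec3 hsl     = vec3(rgbToHsl(blend).xy, rgbToHsl(base).z);
    vec3 swapped = hslToRgb(hsl);

    // 3. Re-impose the original luma in YPbPr, then back to RGB.
    vec3 ypbpr = RGB_TO_YPBPR * swapped;
    ypbpr.x = luma;
    return clamp(YPBPR_TO_RGB * ypbpr, 0.0, 1.0);
}
```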

Finally, I offer my own discussion of this with example blends using various color models: HSV, HSI, HSL, CIELAB (with truncation before and after conversion), and the HSL+Y method. Also: if you're as thick as I am, a visualization of the geometry of the projected RGB space might help you understand what goes on when you swap chroma/lightness information between points in LCH (and why out-of-gamut color points have to be managed).

I'd post links to more info, but I guess they don't allow nobodies like me to do that.

Edit: Actually, I just realized something. If speed and uniformity are important, and a slightly limited chroma range is acceptable (maybe for UI graphics or something), an addendum to my YPbPr recommendation may be in order. Consider the extent of the RGB gamut in YPbPr: a skewed cube standing on its corner about the neutral axis. The maximal rotationally-symmetric subset of this volume is a bicone. Truncating to this bicone instead of to the cube faces is much faster than calculating intersections with the cube faces. The blend is more uniform, but ultimately has a limited maximum chroma. There are HuSL methods that operate on the same principle (HuSLp), but doing this in YPbPr allows access to a greater fraction of the RGB space than HuSLp does. There's actually a post about doing this on my blog (with examples comparing it to LAB blending), though at the time, I was more concerned with uniformity. The speed of the cone intersection calculation didn't cross my mind until now.
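A sketch of that bicone truncation in GLSL. CMAX here is an assumed tunable, the chroma radius at mid-luma where the two cones meet; deriving the exact value requires the geometry of the skewed cube, so treat the constant as a placeholder:

```glsl
const float CMAX = 0.28; // assumption: bicone radius at Y = 0.5

vec3 clampToBicone(vec3 ypbpr) {
    float c    = length(ypbpr.yz);                         // chroma radius
    float cmax = 2.0 * CMAX * min(ypbpr.x, 1.0 - ypbpr.x); // cone limit at this Y
    if (c > cmax) ypbpr.yz *= cmax / max(c, 1e-6);         // radial truncation
    return ypbpr;
}
```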

DGM