
One thing that occurred to me today is that OpenGL specifies the GL_SHININESS material component to lie between 0.0 and 128.0, rather than between 0.0 and 1.0.

As far as I know, everything else that you can specify using floating-point values has a range of 0.0 to 1.0. What is the reason for the difference with GL_SHININESS?
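
For concreteness, this is the value set via `glMaterialf`; a minimal fixed-function sketch (the wrapper function here is just illustrative):

```c
/* Legacy fixed-function (compatibility profile) sketch.  GL_SHININESS is
 * passed straight through as the specular exponent and must lie in
 * [0.0, 128.0]; values outside that range generate GL_INVALID_VALUE. */
#include <GL/gl.h>

void set_shiny_material(void)
{
    const GLfloat specular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
    glMaterialfv(GL_FRONT, GL_SPECULAR, specular);
    glMaterialf(GL_FRONT, GL_SHININESS, 64.0f);  /* the exponent itself, not 0.0-1.0 */
}
```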

Edits:

See also my notes in a new answer posted below.

The range 0.0 to 1.0 doesn't make any sense because of the way the lighting model is constructed. The shininess coefficient is used as an exponent: the specular term is x^y, where x comes out of a trigonometric cosine and therefore lies in the range [-1.0, +1.0]. For exponents y in the range [0.0, 1.0] you cannot constrain the resulting distribution more tightly, only less tightly; it is exponents greater than 1.0 that tighten the highlight.
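
To make that concrete, here is a quick numerical sketch of my own, evaluating the cosine raised to various exponents at a fixed off-axis angle:

```c
#include <math.h>
#include <stdio.h>

/* Evaluate cos(theta)^n at a fixed off-axis angle for several exponents.
 * Exponents below 1.0 make the off-axis value *larger* (a broader lobe);
 * exponents above 1.0 drive it towards zero (a tighter lobe). */
int main(void)
{
    const double pi = 3.14159265358979323846;
    const double c = cos(30.0 * pi / 180.0);   /* ~0.866, 30 degrees off axis */
    const double n[] = { 0.25, 0.5, 1.0, 8.0, 32.0, 128.0 };

    for (int i = 0; i < 6; ++i)
        printf("n = %6.2f  ->  cos^n = %.6f\n", n[i], pow(c, n[i]));
    return 0;
}
```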

Good references which explain this better than I have done here include:

  • Interactive Computer Graphics: A Top-Down Approach with Shader-Based OpenGL by Angel and Shreiner

  • The OpenGL Programming Guide, Version 4.3

The answers below are also quite informative.

I think when I asked this question some years ago I probably followed it up with "Why isn't the value specified in the range 0.0 to 1.0 and then mapped onto a new value for the exponent?" This is another good question; presumably such a mapping would be exponential, something like y = exp(kx) for x in [0.0, 1.0]. The answer is "because it isn't designed that way". While that might be more useful from a "black box" point of view, it's not very useful to a scientist who has measured the exponent in some form of experiment in the hope of implementing realistic material models.
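
Such a hypothetical mapping might look something like the sketch below; note that this is *not* what OpenGL does, since GL_SHININESS takes the exponent itself:

```c
#include <math.h>

/* Hypothetical remapping of a normalised shininess s in [0.0, 1.0] onto
 * the exponent range [1.0, 128.0] via y = exp(k * s), with k chosen so
 * that s = 0 maps to 1 and s = 1 maps to 128.  Purely illustrative. */
double exponent_from_normalised(double s)
{
    const double k = log(128.0);   /* exp(k * 1.0) == 128.0 */
    return exp(k * s);
}
```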

FreelanceConsultant

3 Answers


In most simple lighting models the brightness of a specular highlight is calculated as cos(theta) ^ n, where theta is the angle between the direction of perfect reflection and the direction to the viewer, and n is the shininess value.

GL_SHININESS is that exponent. The higher the value, the "tighter" the highlight will be.

IMHO, for people aware of the formula it's simply more intuitive to use the value as given, rather than attempt to use a "normalised" value in the range 0..1.
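
To make "tighter" concrete, a small sketch: the angle at which cos(theta)^n falls to half intensity shrinks rapidly as n grows:

```c
#include <math.h>
#include <stdio.h>

/* For a specular term cos(theta)^n, compute the angle at which the term
 * falls to one half: theta_half = acos(0.5^(1/n)).
 * Larger n => tighter highlight. */
int main(void)
{
    const double pi = 3.14159265358979323846;
    const double n[] = { 1.0, 4.0, 16.0, 64.0, 128.0 };

    for (int i = 0; i < 5; ++i) {
        double theta_half = acos(pow(0.5, 1.0 / n[i])) * 180.0 / pi;
        printf("n = %6.1f  ->  half-intensity angle ~ %5.1f degrees\n",
               n[i], theta_half);
    }
    return 0;
}
```
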

Alnitak

Because shininess is an exponent/power, allowing only values in [0.0, 1.0] would mean only n-th roots are possible and not higher powers like square, cube, etc.

legends2k
  • While this (and the other answer) is of course correct, the question still stands as to why it is the arbitrary-seeming value of `128`. Maybe it's from the days when this value was actually stored/used as an integer in the implementation, so they took a reasonable limit that allowed the use of a single byte for storage and computation. – Christian Rau Jul 18 '13 at 14:20
  • Surely then, the input value should be in the range n = [0.0, 1.0] and the exponent calculated from it: exponent = exp(x); x = 1.0 / (1.0 - n). Guess they didn't think of that in the '90s? – FreelanceConsultant Jul 18 '13 at 14:23
  • @ChristianRau I think it's simply clearer to use the value as it's used in the formula. I don't know why it would be capped at 128, although values in that range would already produce exceptionally tight highlights. I've never seen a lighting model that required `n` to be an integer, BTW. – Alnitak Jul 18 '13 at 14:25
  • @Alnitak *"I think it's simply clearer to use the value as it's used in the formula."* - True, didn't argue about this. *"I've never seen a lighting model that required n to be an integer"* - It's not about the model, it's about the hardware this *was* implemented on. I'd guess an integer exponent might be much more friendly for this (and I myself couldn't distinguish a shininess of `48` from `47.5` anyway). – Christian Rau Jul 18 '13 at 14:30
  • @ChristianRau indeed, but you could probably tell the difference between 1.5 and 2 (and by model, I meant implementation) – Alnitak Jul 18 '13 at 14:33
  • @Alnitak True, yet that may be regarded as an inherent inaccuracy of the implementation. – Christian Rau Jul 18 '13 at 14:35
  • @ChristianRau +1 yes; it seems this artefact is due to the legacy nature of the API; however, a byte maps to [0, 255] while this seems to cap at the 7-bit max value of 128 like that of the older ASCII code. – legends2k Jul 18 '13 at 14:35
  • @legends2k Even weirder, it isn't the 127 of a signed byte, but 1 larger, requiring an 8-bit unsigned for all reasonable integer representations anyway, so they could just have taken 255. In the end it might have just been an arbitrary decision, like *"100 is a large enough value, but we all know powers of two are nice"*. ;) – Christian Rau Jul 18 '13 at 14:37
  • I personally think that any linkage between the current range and a supposed "legacy" value due to hardware limitations is just a red herring. – Alnitak Jul 18 '13 at 14:41
  • @Alnitak Might be, yet that value has to come from somewhere, and it is from a time when such considerations could very well have mattered (and the *"current range"* is the legacy range; it hasn't ever changed). Nowadays there isn't any hardware for specular lighting anyway, it's a CUDA kernel like every other custom shader, but nobody cares to adapt that value in the specification either, given that nobody uses it anymore anyway (and nobody ever needed a higher value either). – Christian Rau Jul 18 '13 at 14:43

Adding my own comments to those previously made:

The OpenGL shininess coefficient is used as an exponent in the lighting model: the specular reflection term is computed as cos^alpha(beta), where alpha is the GL_SHININESS value and beta is the angle between the direction of perfect reflection and the direction to the camera (in the OpenGL camera-model interpretation of the modelview matrix mathematics).

Values larger than 1.0 give a more tightly constrained distribution, or cone, of reflected light. Why exactly is 128.0 the maximum value? Probably because it's a value that works well: at that order of magnitude there isn't much observable difference between neighbouring values, unlike the changes you see around an order of magnitude of 1.0. I guess the OpenGL standards committee could have chosen 1024.0 or 64.0 (etc.), but the difference wouldn't be that significant.
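
For reference, a sketch of that specular factor in reflection-vector form (the vector names are just illustrative):

```c
#include <math.h>

typedef struct { double x, y, z; } vec3;

static double dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Specular factor cos^alpha(beta): beta is the angle between the direction
 * of perfect reflection R and the direction to the camera V (both assumed
 * to be unit vectors); alpha is the GL_SHININESS value. */
double specular_factor(vec3 reflection_dir, vec3 to_camera, double alpha)
{
    double cos_beta = dot(reflection_dir, to_camera);
    if (cos_beta <= 0.0)
        return 0.0;               /* lobe faces away from the viewer */
    return pow(cos_beta, alpha);  /* larger alpha => tighter cone */
}
```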

FreelanceConsultant