
I'm looking to adapt the 3D Perlin noise algorithm to lower dimensions, but I'm having trouble with the gradient function, since I don't fully understand the reasoning.

The original Perlin gradient function takes four arguments: a hash and a three-dimensional coordinate (x, y, z). It returns one of sixteen values, selected by hash mod 16, as listed below.

  • 0: x + y
  • 1: -x + y
  • 2: x - y
  • 3: -x - y
  • 4: x + z
  • 5: -x + z
  • 6: x - z
  • 7: -x - z
  • 8: y + z
  • 9: -y + z
  • 10: y - z
  • 11: -y - z
  • 12: y + x
  • 13: -y + z
  • 14: y - x
  • 15: -y - z

The return values from 0 to 11 form a pattern: every combination is represented exactly once. The last four, however, are duplicates. Why were those particular duplicates chosen to fill out the last four return values? And what would be the analogous cases with two dimensions (x, y) and with one (x)?
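For reference, the table above can be transcribed directly as a switch on hash mod 16 (a sketch in C; Perlin's reference implementation computes the same values with bit operations rather than a switch):

```c
#include <assert.h>

/* Gradient dispatch from Perlin's improved noise: select one of the
   sixteen sums listed above based on the low four bits of the hash. */
static double grad(int hash, double x, double y, double z)
{
    switch (hash & 15) {          /* hash mod 16, since 16 is a power of 2 */
    case 0:  return  x + y;
    case 1:  return -x + y;
    case 2:  return  x - y;
    case 3:  return -x - y;
    case 4:  return  x + z;
    case 5:  return -x + z;
    case 6:  return  x - z;
    case 7:  return -x - z;
    case 8:  return  y + z;
    case 9:  return -y + z;
    case 10: return  y - z;
    case 11: return -y - z;
    case 12: return  y + x;       /* duplicates case 0 */
    case 13: return -y + z;       /* duplicates case 9 */
    case 14: return  y - x;       /* duplicates case 1 */
    case 15: return -y - z;       /* duplicates case 11 */
    }
    return 0.0;                   /* unreachable */
}
```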

Matthew Piziak
  • I still don't understand the purpose of this gradient function, even for 2D. I just use the dot product things on the 4 near vectors, I don't see what this gradient is for. – jokoon Jun 30 '19 at 17:07
  • @jokoon I was playing with a terrain generation seven years ago and this function was close to hand. Not sure what you mean by "dot product things". – Matthew Piziak Jun 30 '19 at 21:39
  • 1
    I'm late, but they're one and the same thing, you calculate dot product of pseudorandom gradient vector and yours. [The paper Perlin wrote explains it](https://web.archive.org/web/20200618040237/https://mrl.nyu.edu/~perlin/paper445.pdf). The actual link is not dead, but I link directly to archive just in case. The optimized gradient function obscures that fact because you'd multiply by -1/+1/0 and that's pointless, you simply negate parts of your original vector where needed and omit ones that would be multiplied by 0's. Writing it this way also removes the need to store those 12 vectors. – Yamirui Sep 04 '20 at 09:42
  • Thank you Yamirui! This seems to be the true heart of it. Please feel free to post this comment as an answer. – Matthew Piziak Sep 06 '20 at 19:32
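To make Yamirui's point concrete, here is an unoptimized sketch in C (the array `G` and the name `grad_dot` are illustrative, not from the paper): cases 0 to 11 of the table are exactly the dot products of (x, y, z) with the twelve gradient vectors (±1, ±1, 0), (±1, 0, ±1), (0, ±1, ±1).

```c
#include <assert.h>

/* The 12 gradient vectors from Perlin's paper: every combination of
   two nonzero components in {-1, +1}, with the third component zero. */
static const double G[12][3] = {
    { 1,  1,  0}, {-1,  1,  0}, { 1, -1,  0}, {-1, -1,  0},
    { 1,  0,  1}, {-1,  0,  1}, { 1,  0, -1}, {-1,  0, -1},
    { 0,  1,  1}, { 0, -1,  1}, { 0,  1, -1}, { 0, -1, -1},
};

/* Unoptimized gradient: pick a vector and take an explicit dot product
   with (x, y, z).  The optimized table falls out because multiplying
   by +/-1 is just a sign flip, and the zero component drops the
   corresponding term entirely. */
static double grad_dot(int hash, double x, double y, double z)
{
    const double *g = G[hash % 12];
    return g[0] * x + g[1] * y + g[2] * z;
}
```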

1 Answer


... is a late answer better than none? ;-)

The grad function in the "improved noise" implementation calculates a dot product between the vector (x, y, z) and a pseudorandom gradient vector.

In this implementation, the gradient vector is selected from 12 options. Uniformity of the selection is sacrificed by padding the table with entries 12 to 15, because hash & 15 is faster to compute than hash % 12.

For 2D Perlin noise I have used only 4 gradient vectors, without any visible problems, like this:

return ((hash & 1) ? x : -x) + ((hash & 2) ? y : -y);
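Following the same reduction, a 1D analogue (my extrapolation, not part of the answer above) degenerates to a sign flip, since the only unit "gradients" on a line are +1 and -1:

```c
#include <assert.h>

/* Hypothetical 1D analogue: the gradient is +1 or -1, so the dot
   product reduces to flipping the sign of x based on one hash bit. */
static double grad1(int hash, double x)
{
    return (hash & 1) ? x : -x;
}
```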
cube