My first Stack Exchange question regarding numeric stability in geometry. I wrote the function below for computing a signed angle from one vector to another, inspired by the idea presented here, to which I am indebted.
In the function, the triple product is used to get from the raw undirected angle in [0, pi] computed in the first part to the directed angle opening up from the first vector to the second.
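For reference, the determinant computed inside the function is the scalar triple product (from_vec × to_vec) · normal, and only its sign matters: it tells on which side of from_vec (relative to the normal) the second vector lies. A minimal standalone sanity check of that identity (float64 arrays assumed):

import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
# the determinant of the row-stacked matrix equals the scalar triple product (a x b) . n
det = np.linalg.det(np.array([a, b, n]))
triple = np.dot(np.cross(a, b), n)
assert np.isclose(det, triple)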
import numpy as np
from numpy import ndarray, sign
from math import pi, isclose
Vec3D = ndarray
def directed_angle(
from_vec: Vec3D,
to_vec: Vec3D,
normal: Vec3D,
debug=False):
""" returns the [0, 2) directed angle opening up from the first to the second vector,
given a 3rd vector linearly independent of the first two vectors, which is used to provide
the directionality of the acute (regular, i.e. undirected) angle firstly computed """
    assert np.any(from_vec) and np.any(to_vec) and np.any(normal), \
        f'from_vec, to_vec, and normal must not be zero vectors, but from_vec: {from_vec}, to_vec: {to_vec}, normal: {normal}'
magnitudes = np.linalg.norm(from_vec) * np.linalg.norm(to_vec)
    dot_product = np.dot(from_vec, to_vec)
angle = np.arccos(np.clip(dot_product / magnitudes, -1, 1))
    # the triple product reflects the sign of the angle opening up from the first vector to the second,
    # based on the input normal value providing the directionality which is otherwise totally absent
triple_product = np.linalg.det(np.array([from_vec, to_vec, normal]))
if triple_product < 0:
result_angle = angle
else:
result_angle = 2*pi - angle
    # flip the output range from (0, 2*pi] to [0, 2*pi) which is the preference for our semantics
if result_angle == 2*pi:
result_angle = 0
if debug:
print(f'\nfrom_vec: {from_vec}, '
f'\nto_vec: {to_vec}'
f'\nnormal: {normal}, '
f'\nundirected angle: {angle},'
f'\nundirected angle/pi: {angle/pi}, '
f'\ntriple product: {triple_product}'
f'\ntriple product sign: {sign(triple_product)}'
f'\nresult_angle/pi: {result_angle/pi}')
return result_angle
def test():
v0 = np.array([1, 0, 0])
normal = np.array([0, 0, 1])
vecs = dict(
same_direction = np.array([1, 0, 0]),
opposite_direction = np.array([-1, 0, 0]),
close_to_same_direction_from_side_1 = np.array([1, 0.00000001, 0]),
close_to_same_direction_from_side_2 = np.array([1, -0.00000001, 0]))
for desc, v in vecs.items():
print(f'\n{desc}:')
directed_angle(v0, v, normal, debug=True)
When, as in the example test code above, the second vector's direction is extremely close to the first vector's direction from either side of it, the function does not return an angle very close to 0 on one side and very close to 2π on the other; it returns exactly zero in both cases, so these two semantically different vector directions yield the same output angle.
How would you approach making this function numerically stable in that sense?
Assuming the directed angle grows from 0 up toward 2π, how would you avoid the function's output jumping to 0 as the true angle gets infinitesimally close to the full circle? As long as the output is only infinitesimally imprecise we are fine, but if it drops from close to 2π to 0 as the angle approaches 2π, that discontinuous jump will be devastating for the interpretation of the output in my application, and I guess in many other cases too.
Obviously we can empirically find an upper bound on where the function's output for angles approaching 2π collapses to 0 (a function for that is attached below), but that doesn't help at all when the input is a vector which yields such an angle: we cannot tell from the vector that it is going to flip to 0, so when the function returns 0 we cannot tell whether the real angle was 0 or a value approaching 2π.
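To show where the information is actually lost: for the near-parallel test vectors above, the normalized dot product already rounds to exactly 1.0 in float64, so arccos returns exactly 0 before the triple product gets a chance to place the angle on the correct side. A minimal standalone reproduction of just that step:

import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 1e-8, 0.0])
cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(cos_theta == 1.0)                      # True: the 1e-8 component is already rounded away
print(np.arccos(np.clip(cos_theta, -1, 1)))  # 0.0, so the branch returns 2*pi - 0, which then wraps to 0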
I guess you can replace the term directed angle with clockwise (or counter-clockwise) angle throughout.
Diagnostic functions follow.
def test_monotonicity(debug=False, interpolation_interval=1e-4):
""" test that the function is monotonic (up to the used interpolation interval),
and correct, across the interval of its domain ([0, 2)) """
v0 = np.array([1, 0, 0])
normal = np.array([0, 0, 1])
    # sweep the full [0, 2*pi) range in interpolation_interval steps
angles = np.arange(0, 2*pi, step=interpolation_interval)
prev_angle = None
for angle in angles:
# make a vector representing the current angle away from v0, such that it goes clockwise
# away from v0 as `angle` increases (clockwise assuming you look at the two vectors from
# the perspective of the normal vector, otherwise directionality is void)
vec = np.array([np.cos(angle), -np.sin(angle), 0])
result_angle = directed_angle(v0, vec, normal)
if debug: print(f'angle/pi: {angle/pi}, computed angle/pi: {result_angle/pi}')
if prev_angle is None:
assert angle == 0
else:
assert angle > prev_angle
drift = result_angle - angle
if angle == result_angle:
pass
else:
print(f'angle/pi: {angle/pi}, result_angle/pi: {result_angle/pi}, drift/pi: {drift/pi}')
assert isclose(result_angle, angle, rel_tol=1e-6)
prev_angle = angle
def test_demonstrating_the_concern():
""" demonstration of sufficiently small angles from both sides of the reference vector (v0)
collapsing to zero, which is a problem only when the angle should be close to 2 pi """
v0 = np.array([1, 0, 0])
normal = np.array([0, 0, 1])
vecs = dict(
same_direction = np.array([1, 0, 0]),
opposite_direction = np.array([-1, 0, 0]),
close_to_same_direction_from_side_1 = np.array([1, +0.00000001, 0]),
close_to_same_direction_from_side_2 = np.array([1, -0.00000001, 0]))
expected_results = [0, 1*pi, None, None]
for (desc, v), expected_result in zip(vecs.items(), expected_results):
print(f'\n{desc}:')
v = v / np.linalg.norm(v)
result = directed_angle(v0, v, normal, debug=True)
if expected_result is not None:
            assert result == expected_result
def test_angle_approaching_2pi(epsilon=1e-7, interpolation_interval=1e-10, verbose=True):
""" stress test the angle values approaching 2pi where the result flips to zero """
v0 = np.array([1, 0, 0])
normal = np.array([0, 0, 1])
    # sweep from 2*pi - epsilon up to (but not including) 2*pi
angles = np.arange(2*pi-epsilon, 2*pi, step=interpolation_interval)
prev_angle = None
for angle in angles:
# vector representing the current angle away from v0, clockwise
vec = np.array([np.cos(angle), -np.sin(angle), 0])
result_angle = directed_angle(v0, vec, normal)
if prev_angle is None:
pass
else:
assert angle > prev_angle
drift = result_angle - angle
if angle == result_angle:
pass
else:
if verbose: print(f'angle/pi: {angle / pi}, result_angle/pi: {result_angle / pi}, drift/pi: {drift / pi}')
if result_angle == 0:
                    print(f'function angle output hit 0 at angle: {angle};'
                          f'\nangle/pi: {angle / pi}'
                          f'\nangle distance from 2*pi: {2*pi - angle}')
break
else:
assert isclose(result_angle, angle, rel_tol=1e-6)
prev_angle = angle
And all that said, I'm happy to also have insights about other possible numerical stability issues in the calculation steps this function makes. I have seen comments about preferring arctan (atan2) over arccos, and variations on that, but I wouldn't know whether they fully apply to numpy without introducing other stability drawbacks.
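To make that concrete, here is a sketch of what I understand the arctan suggestion to mean (taking the undirected angle from atan2 of the cross-product magnitude against the dot product, with the same triple-product sign convention and the same imports/aliases as my function above); I haven't verified it is robust over arbitrary inputs, so treat it as an assumption-laden variant rather than a fix:

def directed_angle_atan2(from_vec: Vec3D, to_vec: Vec3D, normal: Vec3D):
    """ sketch: same [0, 2*pi) semantics, but the undirected angle comes from
    atan2(|from x to|, from . to), which keeps resolution for nearly parallel vectors """
    cross = np.cross(from_vec, to_vec)
    # undirected angle in [0, pi]; the common factor |from||to| cancels between the two
    # arguments, so no explicit normalization or clipping is needed
    angle = np.arctan2(np.linalg.norm(cross), np.dot(from_vec, to_vec))
    # same directionality decision as before: the sign of (from x to) . normal
    if np.dot(cross, normal) < 0:
        result_angle = angle
    else:
        result_angle = 2*pi - angle
    # keep the [0, 2*pi) output range for exactly parallel vectors
    return 0.0 if result_angle == 2*pi else result_angle

If I'm reading it right, with the two near-parallel test vectors above this returns values around 1e-8 and around 2*pi - 1e-8 instead of 0 for both, which is the behaviour I'm after, but I don't know what it might sacrifice elsewhere.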
I'm also thankful for comments on any general methodology for reasoning about numerical stability, or for numpy-specific guidelines on safely writing numerically stable 3D-geometry computations over arbitrary inputs; although I've come to realize that different applications worry about different ranges in which inaccuracies become detrimental to their specific logic.
As my inputs to this function come from machine learning predictions, I can't avoid extreme vector values, even though their probability at any single invocation of the function is low.
Thanks!