8

I am having some trouble finding parallel vectors because of floating point precision. How can I determine if the vectors are parallel with some tolerance?

I also need a check for orthogonality with tolerance.

Josh C.

3 Answers

15

For vectors v1 and v2, check whether they are orthogonal by

abs(scalar_product(v1,v2)/(length(v1)*length(v2))) < epsilon

where epsilon is small enough. Analogously, you can use

scalar_product(v1,v2)/(length(v1)*length(v2)) > 1 - epsilon

for a parallelism test and

scalar_product(v1,v2)/(length(v1)*length(v2)) < -1 + epsilon

for an anti-parallelism test.
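
In case a concrete version helps, here is a minimal NumPy sketch of these checks; the function names and the epsilon value are my own illustrative choices, not part of the answer.

import numpy as np

EPS = 1e-9  # tolerance; tune it for your data

def cos_angle(v1, v2):
    # scalar_product(v1, v2) / (length(v1) * length(v2)),
    # i.e. the cosine of the angle between v1 and v2
    return np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))

def is_orthogonal(v1, v2, eps=EPS):
    return abs(cos_angle(v1, v2)) < eps

def is_parallel(v1, v2, eps=EPS):
    # same direction only; use abs(cos_angle(v1, v2)) > 1 - eps to
    # accept anti-parallel vectors as well
    return cos_angle(v1, v2) > 1 - eps

def is_antiparallel(v1, v2, eps=EPS):
    return cos_angle(v1, v2) < -1 + eps

print(is_parallel(np.array([1.0, 2.0]), np.array([2.0, 4.0])))        # True
print(is_orthogonal(np.array([1.0, 0.0]), np.array([0.0, 3.0])))      # True
print(is_antiparallel(np.array([1.0, 2.0]), np.array([-1.0, -2.0])))  # True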

Howard
  • That's right, and to check whether they are parallel, you'd check whether the left-hand-side expression was 1.0 rather than 0.0. That is, you'd see if the difference from 1.0 was less than epsilon. In general you're comparing that expression with the cosine of the desired angle, since that expression is the cosine of the angle between the vectors. – Sean Owen Sep 27 '11 at 16:45
  • To check for parallelism using scalar_product(v1,v2)/(length(v1)*length(v2)) > 1 - epsilon, should I take the abs of the left-hand side? – Josh C. Sep 27 '11 at 16:55
  • @JoshC. It depends. If you take the absolute value, vectors pointing in exactly opposite directions will also be considered parallel. Then instead you can also write `abs(1 - scalar_product/lengths)` – Howard Sep 27 '11 at 16:56
  • @Howard How can I know if two line segments are near collinear? – Josh C. Apr 10 '12 at 22:39
  • @Howard: Isn't taking a dot product enough for checking if they're orthogonal? Also for parallelity case, ||v1 x v2|| > 0 is enough. The divide is unnecessary in both cases. – legends2k May 25 '13 at 12:44
  • @legends2k Absolutely not: what you need is cos(theta) or sin(theta), not norm(v)*norm(w)*cos(theta) or norm(v)*norm(w)*sin(theta); that is why we eliminate norm(v)*norm(w) by dividing. Also, since the definition of parallelism is that the vectors are linearly dependent, two vectors pointing in opposite directions are also considered parallel. – Pooria Aug 12 '14 at 10:41
  • @Pooria I think you don't understand the optimization here; say two vectors are linearly dependent: what are the possible angles between them? 0, π, 2π, etc. sin(nπ) = 0 : ∀ n ∈ ℤ, thus ‖v1 ⨯ v2‖ = 0 * ‖v1‖ * ‖v2‖ = 0, so there's no need to actually divide by ‖v1‖ * ‖v2‖ to find the actual sine value. A similar argument holds for orthogonality too. – legends2k Aug 12 '14 at 14:37
  • @legends2k Mathematically correct, but incorrect when dealing with floating-point computation, because of round-off errors. – Pooria Aug 17 '14 at 01:04
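
To illustrate the point debated in the thread above: with a fixed epsilon, skipping the division by the lengths makes the test depend on the magnitudes of the vectors. The example below is a hypothetical sketch of my own, not taken from the comments.

import numpy as np

eps = 1e-6
a = np.array([1e-4, 0.0, 0.0])
c = np.array([1e-4, 1e-5, 0.0])   # about 5.7 degrees from a, clearly not orthogonal

print(abs(np.dot(a, c)) < eps)    # True: the raw dot product is 1e-8, so the
                                  # unnormalized test wrongly reports "orthogonal"
cos_ac = np.dot(a, c) / (np.linalg.norm(a) * np.linalg.norm(c))
print(abs(cos_ac) < eps)          # False: the normalized value is about 0.995
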
3

If you have 3D vectors, the answer is simple: compute the cross product, and if it is nearly zero, your vectors are nearly parallel: http://mathworld.wolfram.com/ParallelVectors.html

For 2D vectors you can convert them into 3D vectors just by adding a zero coordinate: (1; 2) => (1; 2; 0), (4; 5.6) => (4; 5.6; 0), and so on.
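
A minimal NumPy sketch of this approach (the helper names and the tolerance are my own choices, not from the answer); the cross-product norm is compared against a tolerance scaled by the vector lengths, which makes it a check on the sine of the angle between them.

import numpy as np

def is_parallel_3d(v1, v2, eps=1e-9):
    # ||v1 x v2|| equals ||v1|| * ||v2|| * |sin(angle)|, so scaling the
    # tolerance by the lengths turns this into a check on |sin(angle)|
    cross = np.cross(v1, v2)
    return np.linalg.norm(cross) < eps * np.linalg.norm(v1) * np.linalg.norm(v2)

def to_3d(v2d):
    # embed a 2D vector in 3D by appending a zero coordinate: (x, y) -> (x, y, 0)
    return np.array([v2d[0], v2d[1], 0.0])

print(is_parallel_3d(np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 6.0])))  # True
print(is_parallel_3d(to_3d([1.0, 2.0]), to_3d([4.0, 5.6])))                  # False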

Two vectors are orthogonal or perpendicular if their dot product is zero: http://mathworld.wolfram.com/CrossProduct.html

Edit: http://mathworld.wolfram.com/Perpendicular.html

yunzen
0

If you're working with 3D vectors, you can do this concisely using the toolbelt vg. It's a light layer on top of numpy and it supports single values and stacked vectors.

import numpy as np
import vg

v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([-2.0, -4.0, -6.0])

vg.almost_collinear(v1, v2)
# True

I created the library at my last startup, where it was motivated by uses like this: simple ideas which are verbose or opaque in NumPy.

paulmelnikow