
I have a question concerning Geoffrey Hinton's proof of convergence of the perceptron algorithm: Lecture Slides.

On slide 23 it says:

Every time the perceptron makes a mistake, the squared distance to all of these generously feasible weight vectors is always decreased by at least the squared length of the update vector.

My problem is that I can make the distance reduction arbitrarily small by moving the feasible vector to the right. See here for a depiction:

vector diagram

So how can the squared distance be guaranteed to shrink by at least the squared length of the update vector (in blue) if I can make the reduction arbitrarily small?


2 Answers


If I'm reading his proof correctly, there are a few reasons (see the sketch after this list):

  1. The claim concerns the whole set of generously feasible vectors, not just one of them.
  2. The reference is to the sum of the squared distances to the individual vectors. Note that the update moves the new point farther from the brown dot (another feasible vector).
  3. Moving one vector will change the update vector.
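
For what it's worth, here is a sketch of the step I think the slide is summarising, assuming the update on a misclassified positive example is $w_{\text{new}} = w + x$ and that "generously feasible" means $w^{*} \cdot x \ge \lVert x \rVert^{2}$ (correct by a margin of at least the length of $x$):

$$\lVert w^{*} - w_{\text{new}} \rVert^{2} = \lVert w^{*} - w \rVert^{2} - 2\,(w^{*} - w)\cdot x + \lVert x \rVert^{2} \le \lVert w^{*} - w \rVert^{2} - \lVert x \rVert^{2},$$

because $w^{*} \cdot x \ge \lVert x \rVert^{2}$ (generous feasibility) and $w \cdot x \le 0$ (the perceptron made a mistake), so $(w^{*} - w)\cdot x \ge \lVert x \rVert^{2}$. However far you move $w^{*}$, the bound still holds as long as $w^{*}$ stays generously feasible.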

The proof is about the "squared distance" $a^2 + b^2$, not the straight-line (Euclidean) distance, which is what would cause the issue you describe. Since the update moves the "bad" weight vector 'vertically' while keeping the same 'horizontal' distance, the squared distance to the generously feasible vector is always reduced by at least the squared length of the update vector. I believe this generalizes to more dimensions. Please correct me if I am wrong.
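
To make that concrete, here is a small numerical illustration (not from the slides, and with made-up vectors), assuming the standard update $w \leftarrow w + x$ on a misclassified positive example and the margin condition $w^{*} \cdot x \ge \lVert x \rVert^{2}$ for a generously feasible $w^{*}$:

```python
import numpy as np

# Toy check of the claim, assuming the perceptron update w <- w + x on a
# misclassified positive example, and "generously feasible" meaning
# w_star . x >= ||x||^2 (correct by a margin of at least the length of x).

x = np.array([2.0, 1.0])           # input vector the perceptron gets wrong
w = np.array([-1.0, -0.5])         # current weights: w . x <= 0, so this is a mistake
w_star = np.array([10.0, 50.0])    # a generously feasible weight vector

assert w @ x <= 0                  # mistake on a positive example
assert w_star @ x >= x @ x         # generous feasibility (margin >= ||x||)

w_new = w + x                      # perceptron update; the update vector is x

decrease = np.sum((w_star - w) ** 2) - np.sum((w_star - w_new) ** 2)
print(decrease, ">=", x @ x)       # 140.0 >= 5.0

# Moving w_star far away in a direction perpendicular to x keeps it generously
# feasible and leaves the decrease in *squared* distance unchanged:
w_far = w_star + 1000.0 * np.array([1.0, -2.0])   # [1, -2] is perpendicular to x
assert w_far @ x >= x @ x
decrease_far = np.sum((w_far - w) ** 2) - np.sum((w_far - w_new) ** 2)
print(decrease_far, ">=", x @ x)   # still 140.0 >= 5.0
```

The decrease in squared distance stays at least $\lVert x \rVert^{2}$ no matter how far the feasible vector is moved, even though the decrease in ordinary Euclidean distance does become arbitrarily small as it moves away, which I believe is the situation the question's diagram shows.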