I'm researching the perceptron algorithm in machine learning. So far, I understand the following about the perceptron:
1) It's a supervised learning technique.
2) It tries to find a hyperplane that linearly separates the class
labels, which is when the perceptron converges.
3) If the predicted output and the true label for a training example
don't match, the algorithm adjusts its weight vector and bias.
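To make my understanding concrete, here is a minimal sketch of the update rule as I understand it (the `max_epochs` cap is my own addition, not part of the classical algorithm, added precisely because of the non-convergence issue I'm asking about):

```python
import numpy as np

def perceptron_train(X, y, max_epochs=100):
    """Train a perceptron; labels in y must be +1 or -1.

    max_epochs caps the loop, since on non-separable data
    the mistake-driven updates would otherwise never stop.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            # predicted label from the current hyperplane w.x + b
            pred = 1 if np.dot(w, xi) + b > 0 else -1
            if pred != yi:
                # mismatch: nudge the weights and bias toward yi
                w = w + yi * xi
                b = b + yi
                mistakes += 1
        if mistakes == 0:
            break  # a full pass with no mistakes: converged
    return w, b

# Tiny linearly separable example (AND-like data)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([-1, -1, -1, 1])
w, b = perceptron_train(X, y)
```

On this separable example the loop exits early once an error-free pass occurs; my question is about what happens when that pass never comes.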
However, I couldn't understand what happens to the weight vector if the
perceptron doesn't achieve convergence (e.g., on data that isn't
linearly separable). Does the algorithm keep updating the weight
vector indefinitely?