
I searched to learn about the backpropagation algorithm with an adaptive learning rate and found a lot of resources, but they were hard for me to understand because I'm new to neural networks. I know very well how the standard backpropagation algorithm works. Can anybody explain to me how these two algorithms differ from each other?

mkj
starrr
  • "I want to write a matlab program to train a neural network " - what's stopping you? And what is this?: http://stackoverflow.com/questions/19939909/performance-in-backpropagation-algorithm – Mitch Wheat Nov 12 '13 at 23:21

1 Answer


I think the core difference is the weight-update rule, as you can see here.

For classic EBP (error backpropagation):

w(k+1) <- w(k) - a * gradient

For adaptive learning:

w(k+1) <- w(k) - eta * gradient

where:

    eta = (w(k) - w(k-1)) / (gradient(k) - gradient(k-1))    if eta < etamax
          etamax                                             otherwise

So you only need to change the weight-update part. The above is just a simplified version; for an implementation you would also have to adjust eta according to error(k) and error(k-1), and there are many ways to do that.
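To make that concrete, here is a minimal sketch in Python of both update rules side by side, shown for a single scalar weight. It follows the formulas above; the function names, the default values, and the guard against a vanishing gradient difference are my own choices, not part of the original answer. For a weight vector, a common variant of the same idea replaces the ratio with dot(w - w_prev, grad - grad_prev) / dot(grad - grad_prev, grad - grad_prev).

    # Classic EBP step: fixed learning rate a, i.e. w(k+1) <- w(k) - a * gradient.
    def classic_update(w, grad, a=0.01):
        return w - a * grad

    # Adaptive step: eta estimated from the last two weights and gradients,
    # capped at eta_max (0.5 here is an arbitrary placeholder).
    def adaptive_update(w, w_prev, grad, grad_prev, eta_max=0.5):
        d_grad = grad - grad_prev
        if abs(d_grad) < 1e-12:            # gradient barely changed: avoid dividing by ~0
            return w - eta_max * grad
        eta = (w - w_prev) / d_grad
        if not (0.0 < eta < eta_max):      # also reject negative steps (an extra safeguard)
            eta = eta_max
        return w - eta * grad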

The basic idea of the adaptive scheme is:

  1. if the error gets smaller, try increasing the learning rate;
  2. if the error gets larger, decrease the learning rate so that training converges (a sketch of this heuristic follows the list).
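A minimal sketch of that error-driven heuristic, called once per epoch after computing error(k); the growth and shrink factors (1.05, 0.5) and the cap eta_max are illustrative values I picked, not from the answer:

    def adjust_eta(eta, error, error_prev, grow=1.05, shrink=0.5, eta_max=0.5):
        """Update the learning rate once per epoch from the training error."""
        if error < error_prev:             # error went down: cautiously speed up
            return min(eta * grow, eta_max)
        return eta * shrink                # error went up: slow down so training converges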
gongzhitaao