
I watched a lecture and derived the equations for backpropagation, but only for a simple example with 3 neurons: an input neuron, one hidden neuron, and an output neuron. That was easy to derive, but how would I do the same with more neurons? I'm not talking about adding more layers, just about adding more neurons to the three existing layers: the input, hidden, and output layers.

My first guess would be to take the equations I derived for the 3-neuron, 3-layer network and iterate across all possible paths to each output neuron in the larger network, updating each weight along the way. However, this would cause certain weights to be updated more than once. Can I just do this, or is there a better method?

Sully Chen
  • Can you clarify what you want to do? Maybe add a picture? `this would cause certain weights to be updated more than once` is not a good sign – janisz Jul 20 '15 at 09:52
  • If you're interested in the backpropagation update rule, why don't you just look it up? If this is about doing it for fun or as an exercise, include the actual formulas so we know specifically what you're trying to do. – runDOSrun Jul 20 '15 at 15:25
  • The backpropagation equations generalize to any number of neurons and layers: you use vectors and matrices in place of the single scalar values you probably used in the one-neuron-per-layer example (sketched below). – Alejandro Jul 20 '15 at 20:22
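
To make that last comment concrete, here is a sketch of the matrix form (the notation below is illustrative, not taken from the thread). Write the hidden activations as a vector $\mathbf{h}$ and the outputs as $\hat{\mathbf{y}}$, with weight matrices $W^{(1)}$ and $W^{(2)}$, an elementwise activation $\sigma$, and squared-error loss $L = \tfrac{1}{2}\lVert\hat{\mathbf{y}}-\mathbf{y}\rVert^2$. The chain rule then gives one error vector ("delta") per layer instead of one scalar per neuron:

```latex
\begin{aligned}
\mathbf{z}^{(1)} &= W^{(1)}\mathbf{x}, & \mathbf{h} &= \sigma\bigl(\mathbf{z}^{(1)}\bigr),\\
\mathbf{z}^{(2)} &= W^{(2)}\mathbf{h}, & \hat{\mathbf{y}} &= \sigma\bigl(\mathbf{z}^{(2)}\bigr),\\
\boldsymbol{\delta}^{(2)} &= (\hat{\mathbf{y}}-\mathbf{y})\odot\sigma'\bigl(\mathbf{z}^{(2)}\bigr), &
\boldsymbol{\delta}^{(1)} &= \bigl(W^{(2)\top}\boldsymbol{\delta}^{(2)}\bigr)\odot\sigma'\bigl(\mathbf{z}^{(1)}\bigr),\\
\frac{\partial L}{\partial W^{(2)}} &= \boldsymbol{\delta}^{(2)}\,\mathbf{h}^{\top}, &
\frac{\partial L}{\partial W^{(1)}} &= \boldsymbol{\delta}^{(1)}\,\mathbf{x}^{\top}.
\end{aligned}
```

The product $W^{(2)\top}\boldsymbol{\delta}^{(2)}$ already sums the contributions of every path running through each hidden neuron, so each weight receives exactly one update per pass; this is why the path-by-path iteration in the question, which updates some weights more than once, is not needed.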

1 Answer


If you want to learn more about backpropagation, I recommend reading this page from Stanford University: http://cs231n.github.io/optimization-2/. It will really help you understand backprop and all the math underneath.
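
As a minimal companion sketch (my layer widths, the sigmoid activation, and the squared-error loss are assumptions chosen for illustration, not taken from the linked notes), here is the matrix form of one forward/backward pass in numpy:

```python
# Minimal backprop sketch for one hidden layer with several neurons per
# layer. Sigmoid activations and squared-error loss are assumed for
# illustration; sizes and learning rate are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 3, 4, 2   # widths of the three layers
lr = 0.1                          # learning rate

# Weight matrices replace the single scalars of the 3-neuron example.
W1 = rng.normal(size=(n_hidden, n_in))   # input  -> hidden
W2 = rng.normal(size=(n_out, n_hidden))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(x, y):
    """One forward/backward pass on a single example (column vectors)."""
    global W1, W2
    # Forward pass
    z1 = W1 @ x            # pre-activations of hidden layer, (n_hidden, 1)
    h = sigmoid(z1)
    z2 = W2 @ h            # pre-activations of output layer, (n_out, 1)
    y_hat = sigmoid(z2)

    # Backward pass: each delta is a vector with one entry per neuron,
    # so every weight is updated exactly once.
    delta2 = (y_hat - y) * y_hat * (1 - y_hat)   # output-layer error
    delta1 = (W2.T @ delta2) * h * (1 - h)       # hidden-layer error

    W2 -= lr * delta2 @ h.T   # outer products give the weight gradients
    W1 -= lr * delta1 @ x.T
    return 0.5 * float(np.sum((y_hat - y) ** 2))

x = rng.normal(size=(n_in, 1))
y = np.array([[1.0], [0.0]])
for i in range(5):
    print(step(x, y))   # the printed loss should decrease
```

Note that each weight matrix is updated exactly once per pass: the matrix products sum over all paths through the network automatically, which resolves the double-update worry raised in the question.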

Guillem Cucurull