I watched a lecture and derived the backpropagation equations, but only for a simple example with 3 neurons: an input neuron, one hidden neuron, and an output neuron. This was easy to derive, but how would I do the same with more neurons? I'm not talking about adding more layers, just about adding more neurons to the three existing layers: the input, hidden, and output layer.
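For reference, here is roughly what my 3-neuron derivation looks like in code. This is a minimal sketch assuming sigmoid activations and a squared-error loss, with biases omitted (my lecture's exact setup may have differed):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One input neuron, one hidden neuron, one output neuron.
x, t = 0.5, 1.0        # input and target (made-up values)
w1, w2 = 0.3, -0.2     # input->hidden and hidden->output weights
lr = 0.1               # learning rate

# Forward pass
z1 = w1 * x
h = sigmoid(z1)
z2 = w2 * h
y = sigmoid(z2)

# Backward pass for L = 0.5 * (y - t)**2, via the chain rule:
#   dL/dw2 = (y - t) * sigma'(z2) * h
#   dL/dw1 = (y - t) * sigma'(z2) * w2 * sigma'(z1) * x
delta2 = (y - t) * y * (1 - y)       # sigma'(z2) = y * (1 - y)
grad_w2 = delta2 * h
delta1 = delta2 * w2 * h * (1 - h)   # sigma'(z1) = h * (1 - h)
grad_w1 = delta1 * x

w2 -= lr * grad_w2
w1 -= lr * grad_w1
```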
My first guess would be to take the equations I derived for the 3-neuron, 3-layer network and iterate over every possible path from an input neuron to an output neuron in the larger network, applying my update rule to each weight along each path. However, this would cause certain weights to be updated more than once. Can I just do this, or is there a better method?
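To make that guess concrete, here is a sketch of what I mean for a hypothetical 2-3-2 network (same sigmoid/squared-error assumptions as above). Note that `w2[j][k]` ends up being touched once for every input `i`, and `w1[i][j]` once for every output `k`, which is exactly the repeated updating I'm worried about:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

n_in, n_hid, n_out = 2, 3, 2
lr = 0.1
random.seed(0)
w1 = [[random.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_in)]
w2 = [[random.uniform(-1, 1) for _ in range(n_out)] for _ in range(n_hid)]

x = [0.5, -0.3]   # made-up input
t = [1.0, 0.0]    # made-up target

# Forward pass
h = [sigmoid(sum(w1[i][j] * x[i] for i in range(n_in))) for j in range(n_hid)]
y = [sigmoid(sum(w2[j][k] * h[j] for j in range(n_hid))) for k in range(n_out)]

# My guess: apply the 3-neuron update along every input->hidden->output path.
for i in range(n_in):
    for j in range(n_hid):
        for k in range(n_out):
            # Same formulas as the 3-neuron case, restricted to this one path
            delta2 = (y[k] - t[k]) * y[k] * (1 - y[k])
            delta1 = delta2 * w2[j][k] * h[j] * (1 - h[j])
            w2[j][k] -= lr * delta2 * h[j]   # updated once per input i
            w1[i][j] -= lr * delta1 * x[i]   # updated once per output k
```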