I am trying to build my own neural network in Python that allows an arbitrary number of layers of neurons. It uses NumPy arrays to model the neurons, weights, inputs, and outputs. I have figured out forward propagation, but I am having trouble finding a formula for backpropagation that handles multiple hidden layers.
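To make the setup explicit, this is what I believe my forward pass computes for each layer $l$, writing $a^{(0)}$ for the training inputs and $W^{(l)}$ for that layer's synaptic_weights:

$$a^{(l)} = \sigma\!\left(a^{(l-1)} W^{(l)}\right), \qquad \sigma(x) = \frac{1}{1 + e^{-x}}$$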
def train(self, training_set_inputs, training_set_outputs, number_of_training_iterations):
    for iteration in xrange(number_of_training_iterations):
        # Pass the training set through our neural network
        self.think(training_set_inputs)
        # backpropagation code...

def think(self, inputs):
    # Feed the inputs forward through every layer, keeping each layer's output
    for layer in self.layers:
        inputs = self.__sigmoid(dot(inputs, layer.synaptic_weights))  # dot is numpy.dot
        layer.outputs = inputs
From here I can access the output array of each layer of the network using self.layers[n].outputs.
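For reference, here is a rough sketch of what I think the backward pass would have to look like for this structure. The names backpropagate, learning_rate, and deltas are placeholders I made up, and I am not at all sure the hidden-layer step generalises correctly to many layers, which is exactly what I need help with:

def backpropagate(self, training_set_inputs, training_set_outputs, learning_rate=1.0):
    # Work backwards from the output layer to the first hidden layer
    deltas = [None] * len(self.layers)
    for l in reversed(range(len(self.layers))):
        outputs = self.layers[l].outputs
        if l == len(self.layers) - 1:
            # Output layer: error is (target - prediction)
            error = training_set_outputs - outputs
        else:
            # Hidden layer: error is the next layer's delta pushed back through its weights
            error = dot(deltas[l + 1], self.layers[l + 1].synaptic_weights.T)
        # Multiply by the sigmoid derivative, sigma'(x) = sigma(x) * (1 - sigma(x))
        deltas[l] = error * outputs * (1 - outputs)

    # Adjust each layer's weights using the outputs of the layer before it
    for l in range(len(self.layers)):
        layer_inputs = training_set_inputs if l == 0 else self.layers[l - 1].outputs
        self.layers[l].synaptic_weights += learning_rate * dot(layer_inputs.T, deltas[l])

The idea, as far as I understand it, is that the output layer's delta comes from (target - output) times the sigmoid derivative, and every hidden layer's delta is derived from the delta of the layer after it, but I have not been able to confirm this is the right formula.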
Please post an answer with Python code or neat, well-defined math functions.