
I am trying to build my own neural network in Python, one that supports an unlimited number of layers. It uses NumPy arrays to model the neurons, weights, inputs, and outputs. I have figured out forward propagation, but I am having trouble finding a formula for backpropagation that can handle multiple hidden layers.

from numpy import dot

def train(self, training_set_inputs, training_set_outputs, number_of_training_iterations):
    for iteration in xrange(number_of_training_iterations):
        # Pass the training set through our neural network
        self.think(training_set_inputs)

        # backpropagation code...

def think(self, inputs):
    # Forward propagation: each layer's sigmoid output becomes the next layer's input
    for layer in self.layers:
        inputs = self.__sigmoid(dot(inputs, layer.synaptic_weights))
        layer.outputs = inputs

From here I can access the output array of each layer of the network using self.layers[n].outputs.

Please post an answer with Python code or neat, well-defined math functions.
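
For reference, the textbook backpropagation recursion for a fully connected network with sigmoid activations and a squared-error cost (an assumption; the question does not fix a cost function) is, writing z^{(l)} = a^{(l-1)} W^{(l)}, a^{(l)} = \sigma(z^{(l)}), and a^{(0)} for the inputs:

    \delta^{(L)} = (a^{(L)} - y) \odot \sigma'(z^{(L)})
    \delta^{(l)} = (\delta^{(l+1)} W^{(l+1)\top}) \odot \sigma'(z^{(l)}),    for l = L-1, ..., 1
    W^{(l)} \leftarrow W^{(l)} - \eta \, a^{(l-1)\top} \delta^{(l)}

For the sigmoid, \sigma'(z) = a(1 - a), so each layer's error term can be computed directly from the stored layer.outputs.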

Jonathan
  • You may want to read through some literature. Neural networks are essentially generalized linear models. A GLM that would handle multiple (1/0) outputs would be considered multinomial. Here's an introduction to this topic: http://data.princeton.edu/wws509/notes/c6.pdf – Jon Apr 12 '17 at 23:44
  • assuming you want `think` to handle a complete epoch or 'pass', it will have to do a full forward propagation through each layer, and then a full backprop pass through each layer. And just like you need `inputs` as a param to kick off the forward prop, you need an `outputs` or ground-truth param to kick off each backprop pass. – Max Power Apr 13 '17 at 00:16
  • @MaxPower This is just a small snippet of the code to give you an idea of the class structure. I know how to calculate the output of the neural network, and compare it to the expected output in order to get the error. From that point, I need a function to adjust the weights for a network with as many layers as specified. – Jonathan Apr 13 '17 at 03:10
  • Why does the number of layers matter? What goes wrong when using `for layer in self.layers` for backprop as you do for updating your weights in your forward-prop code above? Maybe I'm missing something. – Max Power Apr 13 '17 at 03:15
  • you mentioned this gives an idea of your class structure. But do you want think() to constitute an epoch (forward prop and backward prop), or just a forward pass? – Max Power Apr 13 '17 at 03:17
  • 1
    @MaxPower I added the train function to make the code more clear. The problem is not that I don't know how to do the code. The problem is I need to know how to find the hidden layer errors, and use them to adjust the weights. – Jonathan Apr 13 '17 at 03:31
  • is the answer here helpful? I think the key you're missing after you call `think` (what I'd call 'forwardprop') in `train` is to 1) update the cost J, and 2) call a `backprop(Y)`. That backprop function in turn does three things: 1) update the errors based on the errors, weights, and gradient (called `sigmoid_prime` there for a sigmoid activation), 2) update the gradient, 3) update the weights. So I'd suggest ctrl-F for "def train" and "def backpropagate" in this answer (a sketch along these lines follows these comments): http://stackoverflow.com/questions/34649152/neural-network-backpropagation-algorithm-not-working-in-python – Max Power Apr 13 '17 at 03:54
  • another good tutorial on neural nets, including backprop in numpy: http://iamtrask.github.io/2015/07/12/basic-python-network/ – Max Power Apr 13 '17 at 04:55
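
Putting the comments above together, here is a minimal sketch of a backprop method that fits the class structure in the question. It assumes each layer exposes the synaptic_weights and outputs attributes shown above, that think() has already been called for the current batch, and sigmoid activations with a squared-error cost; the names backpropagate, __sigmoid_derivative, and learning_rate are hypothetical, not part of the original code:

def __sigmoid_derivative(self, a):
    # Derivative of the sigmoid expressed in terms of its output a = sigmoid(z),
    # so it can be computed straight from the stored layer.outputs (assumed helper).
    return a * (1.0 - a)

def backpropagate(self, inputs, targets, learning_rate=0.1):
    # Delta at the output layer: (prediction - target) scaled by the activation gradient.
    delta = (self.layers[-1].outputs - targets) \
            * self.__sigmoid_derivative(self.layers[-1].outputs)
    deltas = [delta]

    # Walk backwards through the hidden layers: each layer's delta is the next
    # layer's delta pushed back through the next layer's weights.
    for l in range(len(self.layers) - 2, -1, -1):
        delta = deltas[0].dot(self.layers[l + 1].synaptic_weights.T) \
                * self.__sigmoid_derivative(self.layers[l].outputs)
        deltas.insert(0, delta)

    # Gradient step: each weight matrix moves against the outer product of its
    # incoming activations and its own delta, whatever the number of layers.
    for l, layer in enumerate(self.layers):
        incoming = inputs if l == 0 else self.layers[l - 1].outputs
        layer.synaptic_weights -= learning_rate * incoming.T.dot(deltas[l])

In train, this would be called right after self.think(training_set_inputs), e.g. self.backpropagate(training_set_inputs, training_set_outputs). Because both loops run over self.layers, the same code handles any number of hidden layers.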

0 Answers