
I am trying to train a simple neural network with PyBrain. After training, I want to confirm that the network is working as intended, so I activate it on the same data I used to train it. However, every activation produces the same output. Am I misunderstanding a basic concept about neural networks, or is this by design?

I have tried altering the number of hidden nodes, the `hiddenclass` type, the `bias`, the `learningrate`, the number of training epochs, and the `momentum`, all to no avail.

This is my code...

from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

# Feed-forward network: 2 input nodes, 3 hidden nodes, 1 output node
net = buildNetwork(2, 3, 1)
net.randomize()

# Dataset with 2-dimensional inputs and a 1-dimensional target
ds = SupervisedDataSet(2, 1)
ds.addSample([77, 78], 77)
ds.addSample([78, 76], 76)
ds.addSample([76, 76], 75)

# Train with backpropagation until the error drops below 0.001
trainer = BackpropTrainer(net, ds)
for epoch in range(1000):
    error = trainer.train()
    if error < 0.001:
        break

print net.activate([77, 78])
print net.activate([78, 76])
print net.activate([76, 76])

This is an example of what the results can be. As you can see, the output is the same even though the activation inputs are different:

[ 75.99893007]
[ 75.99893007]
[ 75.99893007]
  • Have you randomized the initial synapse strengths between the nodes? – Geeky Guy Jun 10 '13 at 20:16
  • With net.randomize? I had tried that already, but I added it back in just in case, and the same issue still occurs. I've updated my code example to reflect this. – Jun 10 '13 at 20:21
  • For an ANN to work properly, its synapses must be randomized when it's generated. When they all have the same strength, you do get the same output for every neuron on the last layer, so I really, really thought it was that. – Geeky Guy Jun 10 '13 at 21:27
  • In the end I solved this by normalizing the data between 0 and 1 and also training until the error rate hit 0.00001. It takes much longer to train, but I do get accurate results now. – Jun 11 '13 at 20:57

2 Answers


I had a similar problem; I was able to improve the accuracy (i.e., get a different answer for each input) by doing the following:

  1. Normalizing/standardizing the input and output to the neural network, as shown in the sketch below.
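
For concreteness, here is a minimal sketch of min-max normalization applied to the data from the question. The scale bounds of 70 and 80 are assumptions chosen only to bracket the sample values; in practice you would derive the bounds from your training data.

from pybrain.datasets import SupervisedDataSet

# Assumed scale bounds, chosen only to bracket the sample values above
LO, HI = 70.0, 80.0

def normalize(x):
    # Map a raw value into the [0, 1] range
    return (x - LO) / (HI - LO)

def denormalize(y):
    # Map a network output back onto the raw scale
    return y * (HI - LO) + LO

# Build the dataset from normalized values instead of the raw ones
ds = SupervisedDataSet(2, 1)
ds.addSample([normalize(77), normalize(78)], normalize(77))
ds.addSample([normalize(78), normalize(76)], normalize(76))
ds.addSample([normalize(76), normalize(76)], normalize(75))

Note that after training, activations must use normalized inputs too, and the output has to be mapped back, e.g. `print denormalize(net.activate([normalize(77), normalize(78)])[0])`.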

– Will

In the end I solved this by normalizing the data between 0 and 1 and also training until the error rate hit 0.00001. It takes much longer to train, but I do get accurate results now.
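
A minimal sketch of that adjusted training loop, assuming `net` and a dataset `ds` of normalized samples as built above; the larger epoch cap is an assumption to allow for the longer training time:

from pybrain.supervised.trainers import BackpropTrainer

# Same BackpropTrainer setup as in the question, but with a much
# tighter convergence threshold on the normalized data
trainer = BackpropTrainer(net, ds)
for epoch in range(10000):  # assumed larger cap, since convergence is slower
    error = trainer.train()
    if error < 0.00001:     # tightened from the original 0.001
        break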

  • The code is in the question. Just change the bit where it says `if error < 0.001` to `if error < 0.00001`. I also had to pre-normalize the data so that all the numbers were between 0 and 1. – Oct 20 '15 at 21:50