
I am working on a neural network project, implemented in C# with Encog, in which the data has non-linear behavior.

My main objective is to predict the output values.

I have limited data, roughly 300 records, which I have split into a training set and a validation set. The network has 26 input neurons and 25 output neurons. I normalized the data and trained the network.

My training method is ResilientPropagation, and I have tried various numbers of hidden layers:

        var network = new BasicNetwork();
        network.AddLayer(new BasicLayer(null, true, 26));                     // input layer
        network.AddLayer(new BasicLayer(new ActivationLOG(), true, 50));
        network.AddLayer(new BasicLayer(new ActivationTANH(), true, 50));
        network.AddLayer(new BasicLayer(new ActivationLOG(), true, 50));
        network.AddLayer(new BasicLayer(new ActivationTANH(), true, 50));
        network.AddLayer(new BasicLayer(new ActivationLOG(), true, 50));
        network.AddLayer(new BasicLayer(new ActivationTANH(), true, 50));
        network.AddLayer(new BasicLayer(new ActivationLOG(), true, 50));
        network.AddLayer(new BasicLayer(new ActivationTANH(), true, 50));
        network.AddLayer(new BasicLayer(new ActivationLinear(), false, 25));  // output layer
        network.Structure.FinalizeStructure();
        network.Reset();

var train = new ResilientPropagation(network, foldedTrainingSet, 0.02, 10);
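For comparison, here is only a sketch (not my actual code) of a much smaller network with a loop that reports both errors every epoch, which makes the train/validation gap visible as training progresses. The names `trainingSet` and `validationSet` are placeholders standing in for my split of the ~300 records:

```csharp
using System;
using Encog.Engine.Network.Activation;
using Encog.Neural.Networks;
using Encog.Neural.Networks.Layers;
using Encog.Neural.Networks.Training.Propagation.Resilient;

// A deliberately small 26-50-25 network; with only ~300 records,
// fewer weights leave less room to overfit the training split.
var network = new BasicNetwork();
network.AddLayer(new BasicLayer(null, true, 26));
network.AddLayer(new BasicLayer(new ActivationTANH(), true, 50));
network.AddLayer(new BasicLayer(new ActivationLinear(), false, 25));
network.Structure.FinalizeStructure();
network.Reset();

// trainingSet / validationSet are assumed to be IMLDataSet instances
// built from the normalized data.
var train = new ResilientPropagation(network, trainingSet);
for (int epoch = 1; epoch <= 5000; epoch++)
{
    train.Iteration();
    double validationError = network.CalculateError(validationSet);
    Console.WriteLine($"Epoch {epoch}: train={train.Error:F4} validation={validationError:F4}");
}
```

If the training error keeps falling while the validation error rises, that points to overfitting rather than a problem with the activation functions.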

The problem now is that the training error is around 200 while the validation error is far higher, 2000 or more.

I have tried different numbers of layers and activation functions such as Log and TanH with various numbers of hidden neurons, but there was no improvement in the error.

My current judgement is that this error is due to the limitations of the data set (which has non-linear behaviour).

My question is: can I improve my network for this non-linear behavior, within the current data-set limit, by using different tactics, activation functions, or training methods?

tech47
  • A few questions: 1. Why so many hidden layers? 2. Did you try other training methods? 3. Jeff Heaton in his book usually uses a combination of training methods, for example resilient + back prop. – Yuriy Zaletskyy Nov 16 '16 at 16:14
  • I have reduced the hidden layers to 4 and used back propagation, and it is showing good progress. But if I train the network for more than 50K epochs, the error suddenly increases. For example, it started from 1500 and by 40K epochs it reached about 300; after that it increased to 900 and then decreased again. Can you tell me why this is happening? – tech47 Dec 14 '16 at 13:09
  • It sometimes happens that a neural network decreases the error, then increases it, and then decreases it again. This is caused by a local minimum that the network jumped into. – Yuriy Zaletskyy Dec 14 '16 at 14:43
  • Thanks. Is it possible to reduce the error to a range of 100? My prediction currently looks like this: Expected: -3.480,4.480,-2.400,-3.630,-1.800,-1.650,-1.370,3.650,8.210,-1.940,5.200,7.470,-1.850,-3.330,6.660,7.060,-8.680,-15.540,0.580,5.670,-2.470,-5.500,-6.380,3.310,7.460 Got: -12.621,-1.279,0.543,-48.109,-3.884,-2.139,-11.137,0.189,12.924,3.302,3.509,7.992,-2.049,-5.826,2.565,-21.179,-1.717,5.097,-4.196,1.683,-5.480,-10.077,-34.225,-8.041,10.294 Is there any specific method / activation function / bias config that I can change to improve it? – tech47 Dec 21 '16 at 13:32
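Two of the tactics suggested in the comments above, combining training methods and stopping before the error climbs back up, can be sketched roughly as follows. This is only an illustration, not a tested recipe: `network`, `trainingSet`, and `validationSet` are assumed to exist already, and the learning rate, momentum, and patience values are arbitrary placeholders:

```csharp
using Encog.Neural.Networks.Training.Propagation.Back;
using Encog.Neural.Networks.Training.Propagation.Resilient;

// Phase 1: resilient propagation for fast initial convergence.
var rprop = new ResilientPropagation(network, trainingSet);
for (int epoch = 0; epoch < 10000; epoch++)
    rprop.Iteration();

// Phase 2: plain back propagation with a small learning rate,
// stopped early once the validation error stops improving
// instead of running a fixed 50K epochs.
// Learning rate 0.001 and momentum 0.9 are illustrative values only.
var backprop = new Backpropagation(network, trainingSet, 0.001, 0.9);
double bestValidationError = double.MaxValue;
int epochsWithoutImprovement = 0;
const int patience = 500; // arbitrary patience window

while (epochsWithoutImprovement < patience)
{
    backprop.Iteration();
    double validationError = network.CalculateError(validationSet);
    if (validationError < bestValidationError)
    {
        bestValidationError = validationError;
        epochsWithoutImprovement = 0;
        // The best weights could be persisted here, e.g. with
        // EncogDirectoryPersistence.SaveObject(...).
    }
    else
    {
        epochsWithoutImprovement++;
    }
}
```

Early stopping on the validation error also sidesteps the pattern described in the comments, where the error falls to ~300 around 40K epochs and then climbs back to ~900.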

0 Answers