I am working on a neural network project, implemented in C# with Encog, in which the data exhibits non-linear behavior.
My main objective is to predict the values.
I have a limited amount of data, roughly 300 data sets, which I have split into training and validation sets. The network has 26 input neurons and 25 output neurons. I normalized the data and ran the training.
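For reference, my normalization and train/validation split look roughly like this (a minimal sketch; the 80/20 split ratio and the -1..1 min-max range here are placeholders rather than my exact settings, and the Encog data-set wrapping is omitted):

```csharp
using System;
using System.Linq;

public class NormalizeDemo
{
    // Min-max normalize one input column into the [-1, 1] range
    // (TanH-friendly); the target range is a placeholder assumption.
    public static double[] Normalize(double[] column)
    {
        double min = column.Min(), max = column.Max();
        double span = max - min;
        return column
            .Select(v => span == 0 ? 0.0 : 2.0 * (v - min) / span - 1.0)
            .ToArray();
    }

    public static void Main()
    {
        double[] raw = { 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 };
        double[] norm = Normalize(raw);

        // Simple 80/20 train/validation split (placeholder ratio).
        int trainCount = (int)(norm.Length * 0.8);
        double[] train = norm.Take(trainCount).ToArray();
        double[] valid = norm.Skip(trainCount).ToArray();

        Console.WriteLine($"{norm[0]} {norm[norm.Length - 1]} {train.Length} {valid.Length}");
        // prints "-1 1 8 2"
    }
}
```

In the real project each of the 26 input columns and 25 output columns is normalized separately, and the per-column min/max from the training set has to be reused when denormalizing predictions.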
My training method is ResilientPropagation, and I have tried various numbers of hidden layers.
// Input layer: 26 neurons, bias enabled, no activation.
network.AddLayer(new BasicLayer(null, true, 26));
// Eight hidden layers of 50 neurons each, alternating Log and TanH activations.
network.AddLayer(new BasicLayer(new ActivationLOG(), true, 50));
network.AddLayer(new BasicLayer(new ActivationTANH(), true, 50));
network.AddLayer(new BasicLayer(new ActivationLOG(), true, 50));
network.AddLayer(new BasicLayer(new ActivationTANH(), true, 50));
network.AddLayer(new BasicLayer(new ActivationLOG(), true, 50));
network.AddLayer(new BasicLayer(new ActivationTANH(), true, 50));
network.AddLayer(new BasicLayer(new ActivationLOG(), true, 50));
network.AddLayer(new BasicLayer(new ActivationTANH(), true, 50));
// Output layer: 25 neurons, linear activation, no bias.
network.AddLayer(new BasicLayer(new ActivationLinear(), false, 25));
// RPROP trainer: initial update value 0.02, maximum step size 10.
var train = new ResilientPropagation(network, foldedTrainingSet, 0.02, 10);
The problem is that the training error is around 200, while the validation error is far higher, 2000 or more.
I have tried different numbers of layers, different activation functions (Log, TanH), and various hidden-neuron counts, but the error did not improve.
My current judgment is that this error is due to the limitations of the data set (which has non-linear behavior).
My question is: can I improve my network for this non-linear behavior, within the current data-set limit, by using different tactics, activation functions, or training methods?