I've been trying to get Encog going for a few days now.
My data consists of 4 input variables (each between 1 and 1000) and 1 output variable (between -30 and 30). I am training with around 50,000 rows of data.
The data is normalised to between -1 and 1 to suit the tanh activation function (see the normalisation sketch just after the training code below), and is then passed into a neural network with the following structure and training:
BasicNetwork Network = new BasicNetwork();
Network.AddLayer(new BasicLayer(null, true, 4));                  // input layer: 4 neurons + bias
Network.AddLayer(new BasicLayer(new ActivationTANH(), true, 8));  // hidden layer: 8 neurons + bias
Network.AddLayer(new BasicLayer(new ActivationTANH(), false, 1)); // output layer: 1 neuron
Network.Structure.FinalizeStructure();
Network.Reset();                                                  // randomise the weights

IMLDataSet trainingData = new BasicMLDataSet(Input.ToArray(), ExpectedOutput.ToArray());
IMLTrain train = new ResilientPropagation(Network, trainingData);

int epoch = 1;
do
{
    train.Iteration();
    Console.WriteLine(@"Epoch #" + epoch + @" Error:" + train.Error);
    epoch++;
} while (train.Error > 0.024);
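In case it matters, the normalisation is roughly this kind of linear min-max mapping into [-1, 1] (a sketch only; the helper names and hard-coded ranges are placeholders, not my exact code):

// Sketch of a linear min-max mapping into [-1, 1] for the tanh activation.
// The min/max arguments are placeholders for the actual ranges
// (inputs roughly 1..1000, output -30..30).
static double Normalize(double value, double min, double max)
{
    return 2.0 * (value - min) / (max - min) - 1.0;
}

static double Denormalize(double value, double min, double max)
{
    return (value + 1.0) / 2.0 * (max - min) + min;
}

For example, Normalize(500, 1, 1000) is roughly 0.0 for an input, and Denormalize(0.5, -30, 30) gives 15 for the output.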
For each row, the program then prints the expected output alongside the actual output from the neural network. Here is a screenshot of a few rows of this output: https://i.stack.imgur.com/7iglz.png
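The comparison is produced by a loop along these lines (again a sketch; Compute and the data-set iteration are standard Encog calls, while Denormalize is the placeholder helper from the sketch above):

// Sketch of the comparison loop: run each row through the network and
// denormalise both values back to the -30..30 output range before printing.
foreach (IMLDataPair pair in trainingData)
{
    IMLData output = Network.Compute(pair.Input);
    double expected = Denormalize(pair.Ideal[0], -30, 30);
    double actual = Denormalize(output[0], -30, 30);
    Console.WriteLine(@"Expected: " + expected + @" Actual: " + actual);
}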
As the screenshot shows, the error (the network's average error across all rows) has to fall below 0.024 before this output is printed, yet many of the individual expected/actual pairs are still wildly wrong.
My impression is that the network is not sensitive enough to the training. The actual outputs are all very close together, which suggests the weights have barely moved from their initial random values.
Can anyone suggest how I can fix this?
I have tried reducing the size of the inputs (using 50 instead), and I also tried removing the biases, but both changes led to a similar result.