public static double testElmanWithAnnealing(NeuralDataSet trainingSet,
        NeuralDataSet validation, int maxEpoch)
{
    // create an Elman network: inputs -> 8 hidden neurons (TANH) -> outputs
    ElmanPattern pattern = new ElmanPattern();
    pattern.setActivationFunction(new ActivationTANH());
    pattern.setInputNeurons(trainingSet.getInputSize());
    pattern.addHiddenLayer(8);
    pattern.setOutputNeurons(trainingSet.getIdealSize());
    BasicNetwork network = (BasicNetwork) pattern.generate();
    network.reset();

    // set up a hybrid strategy of resilient propagation + simulated annealing
    CalculateScore score = new TrainingSetScore(trainingSet);
    final MLTrain trainAlt = new NeuralSimulatedAnnealing(
            network, score, 10, 2, 100);
    final MLTrain trainMain =
            new ResilientPropagation(network, trainingSet);
    trainMain.addStrategy(
            new HybridStrategy(trainAlt, 0.00001, 100, 3));

    // train until the error drops below 0.01 or maxEpoch is reached
    int epoch = 0;
    do {
        trainMain.iteration();
        System.out.println(
                "Epoch #" + epoch + " Error:" + trainMain.getError());
        epoch++;
    } while (trainMain.getError() > 0.01 && epoch < maxEpoch);

    // count validation pairs where the output has the same sign as the ideal
    int trueStuff = 0;
    int falseStuff = 0;
    for (MLDataPair pair : validation) {
        final MLData output = network.compute(pair.getInput());
        System.out.println(
                "actual=" + output.getData(0) + ",ideal=" + pair.getIdeal().getData(0));
        if (output.getData(0) * pair.getIdeal().getData(0) > 0)
            trueStuff++;
        else
            falseStuff++;
    }
    System.out.println("true classifications:" + trueStuff);
    System.out.println("false classifications:" + falseStuff);
    return network.calculateError(validation);
}
I have 8 floating-point input variables, each normalized to the range [-1, 1] with a simple min/max scheme.
I'm trying to classify each sample as either negative or positive (binary classification), so the ideal value in both the training and validation sets is either 1 or -1.
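For context, the min/max normalization I mean is just a linear rescale of each input column into [-1, 1], roughly like the sketch below (the helper name and the rawMin/rawMax bounds are illustrative, not my exact code):

    // Minimal sketch of the min/max scheme: rawMin and rawMax are the observed
    // minimum and maximum of one input column, so the result falls in [-1, 1].
    public static double minMaxNormalize(double value, double rawMin, double rawMax) {
        return 2.0 * (value - rawMin) / (rawMax - rawMin) - 1.0;
    }
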
The problem: the network almost always produces the same output, or at most one or two distinct values. For example, it returns -0.05686225929855484 around 90% of the time and a handful of other values occasionally.
- Am I using Encog incorrectly? Does anything in the code stand out to you as a bug?
- Is there anything I can do to penalize this behaviour of the neural network during training?
- This performs even worse than random guessing, so surely there is a way to get better predictions. Thanks in advance.