
I'm new to neural networks and I'm using the AForge neural network library for a character recognition task. I want to use backpropagation to train my network. Here's the code given in the AForge documentation.

// initialize input and output values
        double[][] input = new double[4][] {
                new double[] {0, 0}, new double[] {0, 1},
                new double[] {1, 0}, new double[] {1, 1}
            };
        double[][] output = new double[4][] {
                new double[] {0}, new double[] {1},
                new double[] {1}, new double[] {0}
            };
        // create neural network
        ActivationNetwork network = new ActivationNetwork(
            new SigmoidFunction(2),
            2, // two inputs in the network
            2, // two neurons in the first layer
            1); // one neuron in the second layer
        // create teacher
        BackPropagationLearning teacher = new BackPropagationLearning(network);
        // loop

        while (!needToStop)
        {
            // run epoch of learning procedure
            double error = teacher.RunEpoch(input, output);
            // check error value to see if we need to stop
            // ...
        }

But I don't know how to decide the number of layers and neurons for the ActivationNetwork. Any help would be appreciated. Thanks.

kinath_ru

2 Answers


I don't know exactly, but it seems to me that this network can return only two answers, 0 and 1. So one neuron stands for 0 and the second for 1, and the second layer chooses the maximum.

santoni7

For XOR you need one hidden layer, because the truth table of the output is 0, 1, 1, 0: you can't separate these patterns geometrically with one line. Here is the proof, visually: if you can divide the pattern space with one line, you can use a single perceptron; typical cases are OR and AND. In those cases you don't use hidden layers, because the classes are linearly separable. Try drawing the corresponding graph to see it clearly and understand it. Whenever you have more than 2 classes, or the classes are not linearly separable, you must use hidden layers (hidden layers are the computing layers of the network); for XOR one hidden layer is enough, because one hidden layer is capable of dividing the space into the two classes.

Now, because there are two classes, we will have one output neuron. In general the rule is classes <= 2^n, where n is the number of output neurons. E.g. if you have 3 classes you need 2 output neurons, because 2^1 = 2 < 3 but 2^2 = 4 >= 3.

In the hidden layer we use two neurons because there are two separating lines in pic 1: the area between the two lines is decision area 1, and the area outside the lines is decision area 2. Because of that we have two decision functions, and therefore we need two perceptrons, since each perceptron classifies one decision area. In math terms a perceptron computes σ(x) = Σw*x + w0, and with a single perceptron we usually use the step function for activation (stepFunc(σ) = 1 if σ > 0, else stepFunc(σ) = 0). The first neuron in the hidden layer handles the first decision area: it separates the pattern (1,1) from the others. Similarly, the second perceptron separates the pattern (0,0) from the others.

In conclusion, take a look at the comments:
    ActivationNetwork network = new ActivationNetwork(
        new SigmoidFunction(2), // the activation function, in this case sigmoid
        2,  // two inputs in the network -> one pattern at a time: [0,0] [0,1] [1,0] [1,1]
        2,  // two neurons in the first layer -> the hidden layer
        1); // one neuron in the second layer -> the output layer
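To make the geometric argument above concrete, here is a small, library-independent sketch (in Python rather than AForge/C#) of the same 2-2-1 structure built by hand with step activations. The particular weights and thresholds are just one valid choice, not the only one:

```python
# Hand-built XOR network: two step-function perceptrons in the hidden
# layer, one in the output layer. Each hidden neuron implements one of
# the two decision lines described above.

def step(s):
    """Step activation: 1 if s > 0, else 0."""
    return 1 if s > 0 else 0

def xor_net(x1, x2):
    # Hidden neuron 1: fires for every pattern except (1,1),
    # i.e. it separates (1,1) from the others.
    h1 = step(-x1 - x2 + 1.5)
    # Hidden neuron 2: fires for every pattern except (0,0),
    # i.e. it separates (0,0) from the others.
    h2 = step(x1 + x2 - 0.5)
    # Output neuron: fires only when both hidden neurons fire (an AND),
    # which is exactly the region between the two lines.
    return step(h1 + h2 - 1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # reproduces the 0,1,1,0 truth table
```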

Backpropagation is an algorithm I can't fully show here. Google it for more details.
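That said, the core idea can be sketched from scratch. The following is an illustrative Python toy (not AForge's implementation): a 2-2-1 sigmoid network trained on XOR by online gradient descent on squared error. The learning rate, epoch count, and random initialization are arbitrary choices for the demonstration:

```python
# Minimal backpropagation sketch for a 2-2-1 sigmoid network on XOR.
import math
import random

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

random.seed(0)
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # hidden: [w1, w2, bias]
w_o = [random.uniform(-1, 1) for _ in range(3)]                      # output: [w1, w2, bias]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, y

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

mse_before = mse()
lr = 0.5
for epoch in range(10000):
    for x, t in data:
        h, y = forward(x)
        # Output delta: derivative of squared error through the sigmoid.
        d_o = (y - t) * y * (1 - y)
        # Hidden deltas: output error propagated back through w_o.
        d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
        # Gradient-descent weight updates.
        for i in range(2):
            w_o[i] -= lr * d_o * h[i]
            for j in range(2):
                w_h[i][j] -= lr * d_h[i] * x[j]
            w_h[i][2] -= lr * d_h[i]
        w_o[2] -= lr * d_o
mse_after = mse()
print("error before:", mse_before, "after:", mse_after)
```

The stopping condition in the AForge loop from the question plays the same role as the fixed epoch count here: you keep calling RunEpoch until the returned error is small enough.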

I hope this helps you understand the whole idea of simple neural networks better, but if you want to use these AForge classes, my opinion is that you must first read the theory behind neural networks.

ggeorge