
I have a problem with backpropagation learning using AForge.NET (Neuro Learning - Backpropagation). I am trying to implement a neural network as in the samples (Approximation). My setup is:

1. input vector {1,2,3,...,19,20}
2. output vector {1,2,3,...,19,20} (it's a linear function)
3. ActivationNetwork network = new ActivationNetwork(new BipolarSigmoidFunction(2), 1, 20, 1);
4. then about 10k times: teacher.RunEpoch(input, output);
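
For reference, the setup described above, written out as a compilable sketch (the learning rate is my own guess; the question does not state one):

    using System;
    using AForge.Neuro;
    using AForge.Neuro.Learning;

    class Program
    {
        static void Main()
        {
            // Training set from the question: f(x) = x for x = 1..20.
            double[][] input  = new double[20][];
            double[][] output = new double[20][];
            for ( int i = 0; i < 20; i++ )
            {
                input[i]  = new double[] { i + 1 };
                output[i] = new double[] { i + 1 };
            }

            // 1 input, 20 hidden neurons, 1 output, bipolar sigmoid activation.
            ActivationNetwork network = new ActivationNetwork(
                new BipolarSigmoidFunction( 2 ), 1, 20, 1 );
            BackPropagationLearning teacher = new BackPropagationLearning( network );
            teacher.LearningRate = 0.1; // hedged choice, not from the question

            // "Then about 10k times - teacher.RunEpoch(input, output);"
            for ( int epoch = 0; epoch < 10000; epoch++ )
                teacher.RunEpoch( input, output );

            // Symptom: the bipolar sigmoid on the output layer bounds this to [-1, 1].
            Console.WriteLine( network.Compute( new double[] { 5 } )[0] );
        }
    }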

When learning is complete, my network.Compute() returns values in [-1; 1]. Why?

The sample does something like normalising the values of the vectors (x -> [-1; 1] and y -> [-0.85; 0.85]), and when I do that everything works fine... but this is only the sample, from which I want to learn how neural networks work. The problem I actually want to implement is more complex (it has more than 40 input neurons).
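
To make that scaling concrete: the helper names below are mine, not AForge's, but the ranges are the ones described above (x -> [-1, 1], y -> [-0.85, 0.85]); 10.5 and 9.5 are the midpoint and half-range of the data interval [1, 20]. The data-construction loop from the sketch above then becomes:

    // Hypothetical scaling helpers (not part of AForge).
    static double ScaleX( double x )   { return ( x - 10.5 ) / 9.5; }        // [1, 20] -> [-1, 1]
    static double ScaleY( double y )   { return ( y - 10.5 ) / 9.5 * 0.85; } // [1, 20] -> [-0.85, 0.85]
    static double UnscaleY( double t ) { return t / 0.85 * 9.5 + 10.5; }     // inverse of ScaleY

    // Train on scaled values...
    for ( int i = 0; i < 20; i++ )
    {
        input[i]  = new double[] { ScaleX( i + 1 ) };
        output[i] = new double[] { ScaleY( i + 1 ) };
    }

    // ...and map predictions back to the original range afterwards.
    double y = UnscaleY( network.Compute( new double[] { ScaleX( 5 ) } )[0] );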

Can anyone help me?

DMan

1 Answer


I have not worked with AForge yet, but the BipolarSigmoidFunction is most probably tanh, i.e. its output lies within [-1, 1]. That is usually used for classification, or sometimes for bounded regression. In your case you can either scale the data or use a linear activation function (e.g. the identity, g(a) = a).
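
For the second option, a minimal identity activation might look like the sketch below. This assumes the three-method IActivationFunction interface of AForge.Neuro 2.x; as far as I know AForge does not ship a linear function, so you would write one yourself:

    using AForge.Neuro;

    // Hypothetical identity activation: removes the [-1, 1] bound of the sigmoid.
    public class LinearFunction : IActivationFunction
    {
        public double Function( double x )    { return x; } // g(a) = a
        public double Derivative( double x )  { return 1; } // g'(a) = 1
        public double Derivative2( double y ) { return 1; } // g' expressed via the output value y
    }

    // Usage: ActivationNetwork network = new ActivationNetwork( new LinearFunction(), 1, 20, 1 );

Note that ActivationNetwork applies the same activation to every layer, so this makes the whole network linear; that is enough for a linear target like yours, but for a non-linear target you would rather keep the sigmoid and scale the data.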

alfa
  • Thanks, I see I have to scale my input and output or use a linear function, but is scaling enough for a more complex problem where I have over 40 input neurons? – DMan May 12 '13 at 13:40
  • The number of inputs does not matter. The only thing that could happen is: when your input becomes very large (which is more likely with a higher number of inputs), sigmoid activation functions (e.g. the logistic function, tanh) "saturate", i.e. the gradient becomes almost 0. Hence, learning will be very slow or even impossible. This is why it is usually recommended to scale your inputs to [-1, 1] and to initialize the weights of a neural network with very small random numbers. – alfa May 12 '13 at 17:33
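
To put a number on the saturation alfa describes (a minimal sketch; the bipolar sigmoid behaves like tanh, whose derivative is 1 - tanh(x)^2):

    using System;

    class SaturationDemo
    {
        static void Main()
        {
            // Near 0 the gradient is large; for big inputs it all but vanishes.
            Console.WriteLine( 1 - Math.Pow( Math.Tanh( 0.5 ), 2 ) );  // ~0.79
            Console.WriteLine( 1 - Math.Pow( Math.Tanh( 10.0 ), 2 ) ); // ~8.2e-9
        }
    }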