I'm studying multilayer perceptrons and wrote a simple net for classifying points in 2D space. The net is trained by the back-propagation algorithm with momentum.
Unfortunately, even though the local error goes to zero, the global error remains very high and I cannot understand why. The global error printed to the console ranges over [100, 150]. So, my main question: how can I reduce this error?
Obviously, I provide a link to an archive with my project. A few words about it: almost all parameters of the net are in the file libraries.h (input, hidden and output layer dimensions, learning rate, momentum rate, sigma and sigma derivative definitions), so if you want to play with them, here you go. The structure of the net is in perceptron.cpp; the graphics code is in plot.cpp. To test the project, run it and left-click on the window that appears at the points where you want the class centers to be. A right-click on the window will generate random points in a circle of radius 5 around those centers and will train the net with these points.
If somebody can provide a theoretical explanation or even take a fresh look at my code, I will be very grateful.
Viewed 241 times

Ilja Kosynkin
- When you used the debugger, which statement(s) are causing the issue? Have you tried outputting variables at different points in your code? – Thomas Matthews May 07 '15 at 19:31
- Is your issue in reading data, passing data or calculating data? When you wrote tests for each function, which function is producing incorrect results? (See CPPUNIT or Google Unit Test) – Thomas Matthews May 07 '15 at 19:33
- There are no issues with the code's execution; it runs OK. The problem is that the global error (the sum of deviations between the net output and the desired output) is quite high. – Ilja Kosynkin May 07 '15 at 19:38
- Have you performed a propagation of error analysis? Are you converting between float and integer? Have you considered using fixed-point arithmetic for better accuracy? – Thomas Matthews May 07 '15 at 19:57
- If by "propagation of error analysis" you mean checking the reduction of the error between the net output and the desired output, then yes, I did. The absolute deviation between the net output and the desired output falls below 0.0005. Yes, I do that kind of conversion, but I checked it twice; there is no problem with it, or at least I cannot find any sign of one. No, I didn't consider fixed point. Can you explain it? – Ilja Kosynkin May 07 '15 at 20:20
1 Answer
I successfully resolved the problem.
First of all, I had incorrect centers for the groups of points, so the points became completely inseparable in 2D space.
Second, I had to rewrite the training process to pick random points from the set.
And third, I found that casting double to int isn't the best idea ever (it loses a lot of information).
Link to the final version of code: CLICK

Ilja Kosynkin