
I have a neural network that has 2 input neurons, 1 hidden layer containing 2 neurons, and 1 output neuron. I am using this neural network for the XOR problem, but it does not work.

Test Results:

If you test 1, 1 you get the output of -1 (equivalent to 0).

If you test -1, 1 you get the output of 1.

If you test 1, -1 you get the output of 1.

If you test -1, -1 you get the output of 1.

This last test is obviously incorrect, therefore my neural network is clearly wrong somewhere.

The exact outputs are listed above. As you can see, changing 1, 1 to -1, -1 just flips the output value, and changing -1, 1 to 1, -1 does the same. This is obviously incorrect.

These are the steps my neural network goes through:
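
In brief, it does a forward pass and then back propagates the output error to update the weights. A simplified sketch of the kind of forward pass I mean (assuming tanh activations, since my outputs are in the -1 to 1 range; this is not my exact code):

```javascript
// Simplified 2-2-1 forward pass with tanh activations (illustrative only).
// wHidden[j][i] is the weight from input i to hidden neuron j;
// wOutput[j] is the weight from hidden neuron j to the output neuron.
function forward(inputs, wHidden, wOutput) {
  const hidden = wHidden.map(weights => {
    let sum = 0;
    for (let i = 0; i < inputs.length; i++) sum += weights[i] * inputs[i];
    return Math.tanh(sum);                      // hidden activation
  });
  let outSum = 0;
  for (let j = 0; j < hidden.length; j++) outSum += wOutput[j] * hidden[j];
  return Math.tanh(outSum);                     // output in (-1, 1)
}

// e.g. forward([1, -1], [[0.5, -0.4], [0.3, 0.8]], [0.7, -0.2]);
```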

I need to know if I am missing something in my neural network, and I was also wondering if anyone knew of a tutorial for back propagation that is less mathematical and more pseudo-code style.

Also, I am not using biases or momentum within my neural network, so I was wondering if adding either of them would fix the issue?
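
To be concrete about what I mean by those two things: a bias is usually handled as one extra weight per neuron fed by a constant input of 1, and momentum blends the previous weight change into the current one. Something like this sketch (illustrative only, not code I have written):

```javascript
// Sketch only: bias as an extra constant-1 input, momentum in the update.
// All names here are illustrative.
function withBias(inputs) {
  return inputs.concat([1]);                  // e.g. [x1, x2] -> [x1, x2, 1]
}

// deltaRuleTerm is the error-driven direction (error * derivative * input),
// so it is added rather than subtracted.
function updateWeight(weight, deltaRuleTerm, previousChange,
                      learningRate = 0.1, momentum = 0.9) {
  const change = learningRate * deltaRuleTerm + momentum * previousChange;
  return { weight: weight + change, previousChange: change };
}

// Example: one update step for a single weight.
let state = { weight: 0.5, previousChange: 0 };
state = updateWeight(state.weight, 0.2, state.previousChange);
```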

I have tried a few different back propagation algorithms and none of them seem to work, so it is most likely not that.

Thanks for any help you can provide,

Finley Dabinett.

UPDATE:

I have added biases to the hidden and output layer and I get these results:

1, 1 = -1 - Correct

-1, -1 = 1 - Incorrect

-1, 1 = 1 - Correct

1, -1 = 1 - Correct

  • Just add biases and you will be fine. They are not "add-ons"; they are **crucial** bits in neural nets. – lejlot Feb 27 '17 at 20:10
  • @lejlot Do biases just act as extra inputs that have a constant value of 1 or -1? – Finley Dabinett Feb 27 '17 at 20:14
  • @lejlot And should I apply them to just the hidden layer or the output layer as well? – Finley Dabinett Feb 27 '17 at 20:23
  • You should have a bias in each layer (depending on your notation this means a bias in the input and hidden layers, or in the hidden and output layers). "Additional inputs" is a good way to understand it for the input layer; for a hidden layer it is less obvious, but you can still think of it as an extra neuron that always produces 1. – lejlot Feb 27 '17 at 20:28
  • @lejlot And the weights of the bias' get back propagated too? – Finley Dabinett Feb 27 '17 at 20:30
  • Yes, otherwise it would change nothing. I mean you do backprop **to** them; there is nowhere to backprop **from** them. – lejlot Feb 27 '17 at 20:31
  • @lejlot I have added bias to the hidden layer and the output layer and I have back propagated their weights. These are the results I am getting now: https://i.gyazo.com/51b3465613f2a3cbc56c48d29bb72cdd.png – Finley Dabinett Feb 27 '17 at 20:52
  • I have trained it multiple times and I always get similar outputs to this one. So I think I may have missed something else. What is the most simple back propagation tutorial you know of? – Finley Dabinett Feb 27 '17 at 20:54
  • I think you are missing nothing, must be an implementation issue. If you could provide us with the code which we can run, we could help more. – Tamas Hegedus Feb 27 '17 at 21:33
  • @TamasHegedus sketch.js: http://pastebin.com/YgWjGgG5 layers.js: http://pastebin.com/7fRW8gQy – Finley Dabinett Feb 27 '17 at 22:12
  • It requires the p5.js library to run. Unless you can just look at it and work out what it's doing. Also, sorry for the bad programming and bad layout, I started writing it a while back when I was new to it so it's a bit all over the place. – Finley Dabinett Feb 27 '17 at 22:13
  • @TamasHegedus any luck finding the issue? – Finley Dabinett Feb 28 '17 at 02:00
  • @FinleyDabinett Yeah, sorry, it was late night in Hungary :) No need to apologize, your code is pretty clean. I did not manage to get your code to work, but I found something: in the train function you only use the first three datapoints. `data[i % 3][0]` should be `data[i % 4]`. This causes some issues for sure; let's see if there are others (see the sketch after this thread). – Tamas Hegedus Feb 28 '17 at 10:53
  • @TamasHegedus I've managed to fix it and it now works for XOR. But it does not work for real-life applications. – Finley Dabinett Mar 01 '17 at 00:25
  • What real life applications and what network shape are we talking about? – Tamas Hegedus Mar 01 '17 at 07:42
  • @TamasHegedus I'm using a 2-2-1 neural network and I'm training it on the sin(x) function. But my outputs are either random or all the same value. I am using the same code that I used for my XOR (which works), just different data. – Finley Dabinett Mar 01 '17 at 20:29
  • @TamasHegedus I have now completely fixed my neural network. The issue I was having in the first problem was my weights were being used in the wrong order. Then I had to change my backprop algorithm from my original one. Thanks for your help guys. – Finley Dabinett Mar 02 '17 at 00:18
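
Putting the comments together: a bias in each layer treated as a constant-1 input, bias weights updated by back propagation like any other weight, and the training loop cycling through all four XOR patterns (`i % 4` rather than `i % 3`). A minimal sketch of that setup (the names are illustrative and are not taken from the asker's sketch.js or layers.js):

```javascript
// Illustrative 2-2-1 XOR trainer with per-layer biases and tanh activations.
const data = [
  { inputs: [-1, -1], target: -1 },
  { inputs: [-1,  1], target:  1 },
  { inputs: [ 1, -1], target:  1 },
  { inputs: [ 1,  1], target: -1 },
];

const rand = () => Math.random() * 2 - 1;
// The last weight in each row is the bias weight (paired with a constant 1 input).
const wHidden = [[rand(), rand(), rand()], [rand(), rand(), rand()]];
const wOutput = [rand(), rand(), rand()];
const lr = 0.1;

for (let i = 0; i < 20000; i++) {
  const sample = data[i % 4];                 // cycle over ALL four patterns
  const x = sample.inputs.concat([1]);        // append the constant bias input

  // Forward pass.
  const h = wHidden
    .map(w => Math.tanh(w[0] * x[0] + w[1] * x[1] + w[2] * x[2]))
    .concat([1]);                             // hidden outputs + bias input
  const out = Math.tanh(wOutput[0] * h[0] + wOutput[1] * h[1] + wOutput[2] * h[2]);

  // Backward pass (tanh derivative is 1 - tanh^2).
  const oDelta = (sample.target - out) * (1 - out * out);
  const hDelta = [0, 1].map(j => oDelta * wOutput[j] * (1 - h[j] * h[j]));

  // Weight updates: bias weights are updated exactly like the others.
  for (let j = 0; j < 3; j++) wOutput[j] += lr * oDelta * h[j];
  for (let j = 0; j < 2; j++)
    for (let k = 0; k < 3; k++) wHidden[j][k] += lr * hDelta[j] * x[k];
}

// Check all four patterns after training.
data.forEach(s => {
  const x = s.inputs.concat([1]);
  const h = wHidden
    .map(w => Math.tanh(w[0] * x[0] + w[1] * x[1] + w[2] * x[2]))
    .concat([1]);
  const out = Math.tanh(wOutput[0] * h[0] + wOutput[1] * h[1] + wOutput[2] * h[2]);
  console.log(s.inputs, '->', out.toFixed(2));
});
```

With only two hidden units the random initialisation can occasionally land in a local minimum, so a failed run is worth re-running rather than being treated as a bug.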
