I have a neural network with 2 input neurons, 1 hidden layer containing 2 neurons, and 1 output neuron. I am using this neural network for the XOR problem, but it does not work.
Test Results:
If you test 1, 1, you get an output of -1 (equivalent to 0).
If you test -1, 1, you get an output of 1.
If you test 1, -1, you get an output of 1.
If you test -1, -1, you get an output of 1.
This last test is obviously incorrect, so my neural network is clearly wrong somewhere.
The exact outputs are listed above. As you can see, changing 1, 1 to -1, -1 just flips the output value, and changing -1, 1 to 1, -1 does the same. This is obviously incorrect.
These are the steps my neural network goes through (a rough sketch of this setup follows the list):
- Randomly assign the weights between -1 and 1
- Forward propagate through the network (applying the tanh activation function to the hidden neurons and the output neuron)
- Back propagate using the algorithm explained at this website: https://stevenmiller888.github.io/mind-how-to-build-a-neural-network/ (is this algorithm correct?)
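To make the steps concrete, here is a rough sketch of the setup I am describing, written in Python with NumPy. This is a simplified sketch, not my actual code; the learning rate of 0.1 and the 10,000 epochs are placeholder values I picked for illustration.

```python
import numpy as np

rng = np.random.default_rng()

# Step 1: randomly assign the weights between -1 and 1 (no biases yet)
w1 = rng.uniform(-1, 1, size=(2, 2))  # input -> hidden
w2 = rng.uniform(-1, 1, size=(2, 1))  # hidden -> output

# XOR with +/-1 encoding (-1 plays the role of 0)
X = np.array([[1.0, 1.0], [-1.0, 1.0], [1.0, -1.0], [-1.0, -1.0]])
y = np.array([[-1.0], [1.0], [1.0], [-1.0]])

lr = 0.1  # placeholder learning rate
for _ in range(10000):  # placeholder epoch count
    # Step 2: forward propagate, applying tanh to the hidden and output neurons
    hidden = np.tanh(X @ w1)
    output = np.tanh(hidden @ w2)

    # Step 3: back propagate squared-error gradients (tanh'(z) = 1 - tanh(z)^2)
    delta_out = (output - y) * (1.0 - output ** 2)
    delta_hid = (delta_out @ w2.T) * (1.0 - hidden ** 2)

    w2 -= lr * hidden.T @ delta_out
    w1 -= lr * X.T @ delta_hid

print(np.tanh(np.tanh(X @ w1) @ w2))  # raw outputs for the four test cases
```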
I need to know if I am missing something in my neural network, and I was also wondering if anyone knows of a tutorial for back propagation that is less mathematical and more pseudo-code in style.
Also, I am not using biases or momentum within my neural network, so I was wondering if adding either of them would fix the issue. (A sketch of what I think adding them would look like is just below.)
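For reference, this is roughly what I understand adding biases and momentum would look like in the sketch above. Again, this is my assumption, not code I have verified against my network; the momentum coefficient of 0.9 is just a typical placeholder value.

```python
import numpy as np

rng = np.random.default_rng()

# Same 2-2-1 network as before, now with bias terms and momentum
w1 = rng.uniform(-1, 1, size=(2, 2))
w2 = rng.uniform(-1, 1, size=(2, 1))
b1 = np.zeros((1, 2))  # hidden-layer biases
b2 = np.zeros((1, 1))  # output bias

X = np.array([[1.0, 1.0], [-1.0, 1.0], [1.0, -1.0], [-1.0, -1.0]])
y = np.array([[-1.0], [1.0], [1.0], [-1.0]])

lr = 0.1    # placeholder learning rate
beta = 0.9  # placeholder momentum coefficient
vw1, vw2 = np.zeros_like(w1), np.zeros_like(w2)
vb1, vb2 = np.zeros_like(b1), np.zeros_like(b2)

for _ in range(10000):  # placeholder epoch count
    # Biases shift each neuron's input before tanh is applied
    hidden = np.tanh(X @ w1 + b1)
    output = np.tanh(hidden @ w2 + b2)

    delta_out = (output - y) * (1.0 - output ** 2)
    delta_hid = (delta_out @ w2.T) * (1.0 - hidden ** 2)

    # Momentum: blend the previous update ("velocity") into the new one
    vw2 = beta * vw2 - lr * hidden.T @ delta_out
    vb2 = beta * vb2 - lr * delta_out.sum(axis=0, keepdims=True)
    vw1 = beta * vw1 - lr * X.T @ delta_hid
    vb1 = beta * vb1 - lr * delta_hid.sum(axis=0, keepdims=True)
    w2 += vw2
    b2 += vb2
    w1 += vw1
    b1 += vb1

print(np.tanh(np.tanh(X @ w1 + b1) @ w2 + b2))  # raw outputs for the four test cases
```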
I have tried a few different back propagation algorithms and none of them seem to work, so the problem is most likely not the algorithm itself.
Thanks for any help you can provide,
Finley Dabinett.
UPDATE:
I have added biases to the hidden and output layers, and I now get these results:
1, 1 = -1 - Correct
-1, -1 = 1 - Incorrect
-1, 1 = 1 - Correct
1, -1 = 1 - Correct