I am having trouble increasing the accuracy of my feed-forward neural network, coded in Python. I am not sure whether it's a genuine bug or just a shortcoming of my math functions, but I am getting ambiguous outputs (like 0.5) no matter how much I increase the iterations. My code:
from numpy import exp, array, random, dot

class NeuralNetwork():
    def __init__(self):
        random.seed(1)
        self.synaptic_weights = 2 * random.random((3, 1)) - 1  # MM result = 3 (3 * 1)

    def Sigmoid(self, x):
        return 1 / (1 + exp(-x))

    def Sigmoid_Derivative(self, x):
        return x * (1 - x)

    def train(self, Training_inputs, Training_outputs, iterations):
        output = self.think(Training_inputs)
        print("THe outputs are: -", output)
        erorr = Training_outputs - output
        adjustment = dot(Training_inputs.T, erorr * self.Sigmoid_Derivative(output))
        print("The adjustments are:-", adjustment)
        self.synaptic_weights += output

    def think(self, inputs):
        Training_inputs = array(inputs)
        return self.Sigmoid(dot(inputs, self.synaptic_weights))
# phew! the class ends..

if __name__ == "__main__":
    neural_network = NeuralNetwork()
    print("Random startin weights", neural_network.synaptic_weights)
    Training_inputs = array([[1, 1, 1],
                             [0, 0, 0],
                             [1, 0, 1],])  # 3 rows * 3 columns???
    Training_outputs = array([[1, 1, 0]]).T
    neural_network.train(Training_inputs, Training_outputs, 0)
    print("New synaptic weights after training: ")
    print(neural_network.synaptic_weights)
    # Test the neural network with a new situation.
    print("Considering new situation [1, 0, 0] -> ?: ")
    print(neural_network.think(array([1, 0, 0])))
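To rule out the math helpers themselves, here is a quick standalone check of the same Sigmoid and Sigmoid_Derivative formulas (a minimal sketch, independent of the class above; the lowercase names are just for this snippet):

```python
from numpy import exp

def sigmoid(x):
    return 1 / (1 + exp(-x))

def sigmoid_derivative(y):
    # derivative of the sigmoid expressed in terms of its output y
    return y * (1 - y)

print(sigmoid(0))               # 0.5: a zero pre-activation always gives 0.5
print(sigmoid_derivative(0.5))  # 0.25: the slope is steepest at an output of 0.5
```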
These are my outputs:
[Running] python -u "/home/neel/Documents/VS-Code_Projects/Machine_Lrn(PY)/test.py"
Random startin weights [[-0.16595599]
[ 0.44064899]
[-0.99977125]]
THe outputs are: - [[0.3262757 ]
[0.5 ]
[0.23762817]]
The adjustments are:- [[0.10504902]
[0.14809799]
[0.10504902]]
New synaptic weights after training:
[[ 0.16031971]
[ 0.94064899]
[-0.76214308]]
Considering new situation [1, 0, 0] -> ?:
[0.5399943]
[Done] exited with code=0 in 0.348 seconds
I have tried changing the iterations, but the difference is very minor. I think the problem might be in one of my math (Sigmoid) functions. Other than that, I think the dot multiplication in think() may be a problem, because the adjustments look shifty to me.
Also, doesn't the 0.5 indicate that my network isn't learning, i.e. that it is just making a random guess?
P.S.: I don't think this is a duplicate question, as it deals with the accuracy of the model, while the linked question deals with unwanted outputs.
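For reference, this is the kind of update loop I expected train to perform: a minimal sketch of the standard delta rule repeated over many iterations on the same data (names like rng and targets are illustrative, not from my code above). One thing I noticed while writing it: the all-zero training row always comes out as 0.5, since sigmoid(0) = 0.5 and a zero row also contributes nothing to the weight update.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(1)
weights = 2 * rng.random((3, 1)) - 1   # same shape as my synaptic_weights

inputs = np.array([[1, 1, 1],
                   [0, 0, 0],
                   [1, 0, 1]])
targets = np.array([[1, 1, 0]]).T

for _ in range(10000):                 # repeat the adjustment, not apply it once
    output = sigmoid(inputs @ weights)
    error = targets - output
    # delta rule: scale the error by the sigmoid slope,
    # then project it back through the inputs
    weights += inputs.T @ (error * output * (1 - output))

print(sigmoid(np.array([1, 0, 0]) @ weights))
```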