I'm having trouble teaching a neural network the XOR logic function. I've already trained the network successfully using the hyperbolic tangent and ReLU as activation functions (I know ReLU isn't the most appropriate choice for this kind of problem, but I still wanted to test it). However, I can't make it work with the logistic function. My definition of the function is:
def logistic(data):
    return 1.0 / (1.0 + np.exp(-data))
and its derivative:
def logistic_prime(data):
    output = logistic(data)
    return output * (1.0 - output)
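
Here np is the imported NumPy package. As a quick sanity check, both functions behave as expected at zero, where logistic(0) = 1/(1 + e^0) = 0.5 and logistic_prime(0) = 0.5 * (1 - 0.5) = 0.25:

import numpy as np

# Both checks pass, so the activation and its derivative look correct.
assert logistic(0.0) == 0.5
assert logistic_prime(0.0) == 0.25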
Since XOR only involves inputs and outputs of 0 and 1, the logistic function should be an appropriate activation. Still, the results I get are close to 0.5 in all cases; that is, every input combination of 0s and 1s produces an output near 0.5. Is there an error in my reasoning?
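
In case it helps, here is a minimal, self-contained sketch of the general structure I'm working with. It is not my exact code; the 2-2-1 architecture, learning rate, and initialization are placeholders for illustration:

import numpy as np

def logistic(data):
    return 1.0 / (1.0 + np.exp(-data))

def logistic_prime(data):
    output = logistic(data)
    return output * (1.0 - output)

# XOR dataset: four input pairs and their expected outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2-2-1 network with small random weights and zero biases.
rng = np.random.default_rng(0)
W1 = rng.uniform(-1.0, 1.0, (2, 2))
b1 = np.zeros((1, 2))
W2 = rng.uniform(-1.0, 1.0, (2, 1))
b2 = np.zeros((1, 1))

lr = 0.5
for _ in range(20000):
    # Forward pass.
    z1 = X @ W1 + b1
    a1 = logistic(z1)
    z2 = a1 @ W2 + b2
    a2 = logistic(z2)

    # Backward pass for a squared-error loss.
    delta2 = (a2 - y) * logistic_prime(z2)
    delta1 = (delta2 @ W2.T) * logistic_prime(z1)

    # Full-batch gradient descent updates.
    W2 -= lr * a1.T @ delta2
    b2 -= lr * delta2.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ delta1
    b1 -= lr * delta1.sum(axis=0, keepdims=True)

# Outputs should approach [0, 1, 1, 0]; note that with only two hidden
# units, an unlucky initialization can itself stall the outputs near 0.5.
print(a2.round(3))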
Don't hesitate to ask me for more context or more code. Thanks in advance.