I am using Cross Entropy with Softmax as the loss function for my neural network. The cross-entropy function I have written is as follows:
import math

def CrossEntropy(calculated, desired):
    total = 0.0
    n = len(calculated)
    for i in range(n):
        # binary cross-entropy term for each output unit
        total += (desired[i] * math.log(calculated[i])) + ((1 - desired[i]) * math.log(1 - calculated[i]))
    crossentropy = -total / n
    return crossentropy
Now let us suppose the desired output is [1,0,0,0] and we test it against two calculated outputs, a=[0.1,0.9,0.1,0.1] and b=[0.1,0.1,0.1,0.9]. The problem is that the function returns exactly the same cross-entropy value for both of these calculated outputs. So how does the neural network learn which output is the correct one?
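To confirm the symmetry I am describing, here is a self-contained check (a minimal sketch; the function is the same averaged binary cross-entropy as above) showing that both outputs produce the same loss:

```python
import math

def CrossEntropy(calculated, desired):
    total = 0.0
    n = len(calculated)
    for i in range(n):
        # sum the per-unit binary cross-entropy terms
        total += (desired[i] * math.log(calculated[i])) + ((1 - desired[i]) * math.log(1 - calculated[i]))
    return -total / n

desired = [1, 0, 0, 0]
a = [0.1, 0.9, 0.1, 0.1]
b = [0.1, 0.1, 0.1, 0.9]

loss_a = CrossEntropy(a, desired)
loss_b = CrossEntropy(b, desired)
print(loss_a, loss_b)  # both sums contain 2*log(0.1) + 2*log(0.9), so the losses are equal
```

In both cases the sum collapses to the same multiset of log terms, which is why the scalar loss cannot distinguish a from b.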