
I am using a CNN to do binary classification.

The cross-entropy cost is calculated by this code:

(-1 / m) * np.sum(np.multiply(Y, np.log(AL)) + np.multiply(1 - Y, np.log(1 - AL)))

When the algorithm predicts a value of exactly 1.0, this cost function gives a divide-by-zero warning. How do I deal with it? Or is there a way to prevent the prediction from becoming exactly 1.0, such as some pre-processing technique?

It works fine when I use higher numerical precision, but I am still curious about the warning.

林文烨

2 Answers


A try/except can handle this, but note that NumPy does not raise a ZeroDivisionError here: np.log(0) just returns -inf and emits a RuntimeWarning. To get a catchable exception, turn the warning into an error with np.errstate:

try:
    with np.errstate(divide='raise'):
        my_var = (-1 / m) * np.sum(np.multiply(Y, np.log(AL)) + np.multiply(1 - Y, np.log(1 - AL)))
except FloatingPointError:
    print("I tried to calculate the result, but I got a divide-by-zero error. Maybe you need to use higher precision.")
    exit()
aschultz

Try np.clip, which bounds the elements of an ndarray between a minimum and a maximum value.

epsilon = 1e-10
clipped_AL = np.clip(AL, epsilon, 1. - epsilon)
(-1 / m) * np.sum(np.multiply(Y, np.log(clipped_AL)) + np.multiply(1 - Y, np.log(1. - clipped_AL)))
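
For instance, here is a minimal sketch (with made-up toy values for Y and AL) showing that clipping keeps the cost finite even when a prediction is exactly 1.0:

```python
import numpy as np

# Toy data (hypothetical): the third prediction is exactly 1.0,
# which would make np.log(1 - AL) evaluate log(0) without clipping.
Y = np.array([1., 0., 1., 1.])
AL = np.array([0.9, 0.1, 1.0, 0.8])
m = Y.size

epsilon = 1e-10
clipped_AL = np.clip(AL, epsilon, 1. - epsilon)  # values now in [1e-10, 1 - 1e-10]

cost = (-1 / m) * np.sum(np.multiply(Y, np.log(clipped_AL))
                         + np.multiply(1 - Y, np.log(1. - clipped_AL)))
print(np.isfinite(cost))  # True: no -inf/nan and no divide-by-zero warning
```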
Yash Sharma