
Last night I wrote a simple binary logistic regression program in Python. It seems to be working correctly (the likelihood increases with each iteration, and I get good classification results).

My problem is that I can only initialize my weights with W = np.random.randn(n+1, 1), i.e. draws from a normal distribution.

But I don't want a normal distribution; I want a uniform distribution (e.g. np.random.rand(n+1, 1)). When I do that, I get the warning

"RuntimeWarning: divide by zero encountered in log
  return np.dot(Y.T, np.log(predictions)) + np.dot((onesVector - Y).T, np.log(onesVector - predictions))"

This is my code:

import numpy as np
import matplotlib.pyplot as plt

def sigmoid(x):
    return 1/(1+np.exp(-x))

def predict(X, W):
    return sigmoid(np.dot(X, W))

def logLikelihood(X, Y, W):
    m = X.shape[0]
    predictions = predict(X, W)
    onesVector = np.ones((m, 1))
    return np.dot(Y.T, np.log(predictions)) + np.dot((onesVector - Y).T, np.log(onesVector - predictions))

def gradient(X, Y, W):
    return np.dot(X.T, Y - predict(X, W))

def successRate(X, Y, W):
    m = Y.shape[0]
    predictions = predict(X, W) > 0.5
    correct = (Y == predictions)
    return 100 * np.sum(correct)/float(correct.shape[0])

trX = np.load("binaryMnistTrainX.npy")
trY = np.load("binaryMnistTrainY.npy")
teX = np.load("binaryMnistTestX.npy")
teY = np.load("binaryMnistTestY.npy")

m, n = trX.shape
# append a column of ones so the last entry of W acts as the bias term
trX = np.concatenate((trX, np.ones((m, 1))), axis=1)
teX = np.concatenate((teX, np.ones((teX.shape[0], 1))), axis=1)
W = np.random.randn(n+1, 1)

learningRate = 0.00001
numIter = 500

likelihoodArray = np.zeros((numIter, 1))

# gradient ascent on the log-likelihood
for i in range(0, numIter):
    W = W + learningRate * gradient(trX, trY, W)
    likelihoodArray[i, 0] = logLikelihood(trX, trY, W)

print("train success rate is %lf" %(successRate(trX, trY, W)))
print("test success rate is %lf" %(successRate(teX, teY, W)))

plt.plot(likelihoodArray)
plt.show()

If I initialize W with zeros or with randn, it works. If I initialize it with a uniform distribution (np.random.rand) or with ones, I get the divide-by-zero warning.

Why does this happen and how can I fix it?
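
For reference, here is a minimal snippet (separate from my training loop) that reproduces the warning. My guess is that with all-positive uniform weights and non-negative pixel inputs, np.dot(X, W) becomes a large positive number, so the sigmoid rounds to exactly 1.0 in float64 and np.log(1 - 1.0) is -inf; the z = 500 value below is just an illustrative stand-in for such an activation:

import numpy as np

z = np.array([500.0])     # stand-in for a large positive activation np.dot(X, W)
p = 1 / (1 + np.exp(-z))  # exp(-500) is negligible next to 1, so p rounds to exactly 1.0
print(p)                  # [1.]
print(np.log(1 - p))      # [-inf], plus "RuntimeWarning: divide by zero encountered in log"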

Oria Gruber
  • I guess that from the mathematics side of things, whatever is inside np.log should be positive, so you could check whether this holds for the two np.log calls you have. – mkarts Aug 12 '16 at 11:25
  • @MichaelKarotsieris They are positive. Notice that we apply log to predictions, and predictions is the output of the sigmoid function, which is of course always positive. – Oria Gruber Aug 12 '16 at 12:19
  • @OriaGruber Yeah, but it can be zero, for which log is -INF. Check out http://stackoverflow.com/questions/13497891/python-getting-around-division-by-zero – Thomas Jungblut Aug 12 '16 at 12:45
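
Edit: following up on the comments, here is a rough sketch of two workarounds I could try (just a sketch, not necessarily the right fix): clip the predictions away from exactly 0 and 1 before taking the log, and/or draw the uniform initial weights from a small zero-centered interval instead of np.random.rand's [0, 1). The eps value, the [-0.01, 0.01) range, and the n = 784 feature count below are arbitrary placeholders on my part:

import numpy as np

def sigmoid(x):
    return 1/(1+np.exp(-x))

def predict(X, W):
    return sigmoid(np.dot(X, W))

def logLikelihood(X, Y, W):
    # keep predictions strictly inside (0, 1) so np.log never sees 0
    eps = 1e-10
    predictions = np.clip(predict(X, W), eps, 1 - eps)
    return np.dot(Y.T, np.log(predictions)) + np.dot((1 - Y).T, np.log(1 - predictions))

# zero-centered uniform initialization instead of [0, 1)
# (n+1 matches the weight shape used above, with the bias column appended)
n = 784  # placeholder: number of input features
W = np.random.uniform(-0.01, 0.01, size=(n + 1, 1))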

0 Answers