
I'm trying to implement vectorized logistic regression in Python using NumPy. My cost function (CF) seems to work OK, but there is a problem with the gradient calculation: it returns a 3x100 array when it should return a 3x1 array. I think there is a problem with the (hypo-y) part.

import numpy as np
from scipy.optimize import minimize

def sigmoid(a):
    return 1 / (1 + np.exp(-a))

def CF(theta, X, y):
    m = len(y)
    hypo = sigmoid(np.matmul(X, theta))
    J = (-1./m) * (np.matmul(y.T, np.log(hypo)) + np.matmul((1 - y).T, np.log(1 - hypo)))
    return J

def gr(theta, X, y):
    m = len(y)
    hypo = sigmoid(np.matmul(X, theta))

    grad = (1/m) * np.matmul(X.T, (hypo - y))

    return grad

X is a 100x3 array, y is 100x1, and theta is a 3x1 array. Both functions seem to work individually; however, this optimization call gives an error:

optim = minimize(CF, theta, method='BFGS', jac=gr, args=(X,y)) 

The error: "ValueError: shapes (3,100) and (3,100) not aligned: 100 (dim 1) != 3 (dim 0)"

  • Please show how you invoke the functions with example input. I think this has a lot to do with the resulting shape. – MB-F Oct 05 '17 at 14:35
  • My X input is a 100x3 array, y input is 100x1, and theta input is a 3x1 array. Both functions seem to work individually, but this optimization call gives an error: optim = minimize(CF, theta, method='BFGS', jac=gr, args=(X,y)). The error: "ValueError: shapes (3,100) and (3,100) not aligned: 100 (dim 1) != 3 (dim 0)". Thank you for your interest! – efeatikkan Oct 05 '17 at 19:13

1 Answer


I think there is a problem with the (hypo-y) part.

Spot on!

hypo is of shape (100,) and y is of shape (100, 1). In the element-wise - operation, hypo is broadcast to shape (1, 100) according to NumPy's broadcasting rules. This results in a (100, 100) array, which causes the matrix multiplication to produce a (3, 100) array.
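You can see the shape blow-up in isolation; this is just a sketch with zero-filled placeholders of the same shapes:

import numpy as np

hypo = np.zeros(100)       # (100,), like sigmoid(np.matmul(X, theta)) when theta has shape (3,)
y = np.zeros((100, 1))     # (100, 1)

diff = hypo - y            # broadcast to (100, 100)
print(diff.shape)          # (100, 100)

X = np.zeros((100, 3))
print(np.matmul(X.T, diff).shape)  # (3, 100) -- the unexpected gradient shape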

Fix this by bringing hypo into the same shape as y:

hypo = sigmoid(np.matmul(X, theta)).reshape(-1, 1)  # -1 means automatic size on first dimension

There is one more issue: scipy.optimize.minimize (which I assume you are using) expects the gradient to be an array of shape (k,) but the function gr returns a vector of shape (k, 1). This is easy to fix:

return grad.reshape(-1)

The final function becomes:

def gr(theta, X, y):
    m = len(y)
    hypo = sigmoid(np.matmul(X, theta)).reshape(-1, 1)  # (100, 1), same shape as y
    grad = (1/m) * np.matmul(X.T, (hypo - y))           # (3, 1)
    return grad.reshape(-1)                             # (3,), as minimize expects
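Equivalently, you could keep everything one-dimensional by flattening y instead of reshaping hypo; this sketch (under the same shape assumptions) avoids the broadcast entirely:

def gr(theta, X, y):
    m = len(y)
    y = np.ravel(y)                          # (100,), so hypo - y stays (100,)
    hypo = sigmoid(np.matmul(X, theta))      # (100,)
    grad = (1/m) * np.matmul(X.T, hypo - y)  # (3,), already the shape minimize expects
    return grad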

Running it with toy data works (I have not checked the math or the plausibility of the results):

theta = np.array([1, 2, 3])           # shape (3,)
X = np.random.randn(100, 3)
y = np.round(np.random.rand(100, 1))  # random 0/1 labels, shape (100, 1)

optim = minimize(CF, theta, method='BFGS', jac=gr, args=(X,y))
print(optim)
#      fun: 0.6830931976615066
# hess_inv: array([[ 4.51307367, -0.13048255,  0.9400538 ],
#       [-0.13048255,  3.53320257,  0.32364498],
#       [ 0.9400538 ,  0.32364498,  5.08740428]])
#      jac: array([ -9.20709950e-07,   3.34459058e-08,   2.21354905e-07])
#  message: 'Optimization terminated successfully.'
#     nfev: 15
#      nit: 13
#     njev: 15
#   status: 0
#  success: True
#        x: array([-0.07794477,  0.14840167,  0.24572182])
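As a sanity check on the math, scipy.optimize.check_grad compares an analytic gradient against a finite-difference estimate of the cost function; a small sketch (the .item() conversion is there because CF returns a 1-element array rather than a scalar):

from scipy.optimize import check_grad

# Returns the norm of the difference between gr and a numerical
# gradient of CF at theta; it should be close to zero.
err = check_grad(lambda t: np.asarray(CF(t, X, y)).item(),
                 lambda t: gr(t, X, y), theta)
print(err)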