
I am getting the error "Can't differentiate w.r.t. type " when using autograd's grad function in Python.

Basically, I am trying to write code for a generalized linear model (GLM) and I want to use autograd to get a function that describes the derivative of the loss function with respect to w (the weights), which I would then plug into scipy.optimize.minimize().
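
To make that concrete, the end goal is roughly the sketch below (the loss function and toy data here are just placeholders to illustrate the autograd + scipy workflow, not my actual model):

import autograd.numpy as np
from autograd import grad
from scipy.optimize import minimize

def neg_log_likelihood(w, x, y):
    # toy logistic-style GLM loss, just for illustration
    logits = np.dot(x, w)
    return np.sum(np.log(1.0 + np.exp(-y * logits)))

x = np.random.randn(100, 3) # toy float inputs
y = np.sign(np.random.randn(100)) # toy labels in {-1, +1}

grad_w = grad(neg_log_likelihood) # gradient with respect to the first argument, w
w0 = np.zeros(3) # float initial guess
res = minimize(neg_log_likelihood, w0, args=(x, y), jac=grad_w, method='L-BFGS-B')
print(res.x)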

Before doing the scipy step, I have been trying to test that my function works by inputting values for the variables (which in my case are arrays) and printing a value (again as an array) for the gradient as an output. Here is my code:

import autograd.numpy as np # autograd's wrapper around numpy, so grad can trace array operations
from autograd import grad

def generate_data(n,k,m):
    w = np.zeros((k,1)) # initialize weights as a (k,1) array of zeros
    w[:,[0]] = np.random.randint(-10, high=10,size=(k,m)) # fill the first column with random integers between -10 and 10
    x = np.random.randint(-10, high=10,size=(n,m)) # choose n random integer inputs between -10 and 10

    return x,w

def logpyx(x,w):
    p = np.exp(np.dot(x,w.T)) # get exponentials e^wTx
    norm = np.sum(p,axis=1) # get normalization constant (sum of exponentials)
    pnorm = np.divide(p.T,norm).T # normalize the exponentials 

    ind = [] # initialize empty list
    for n in np.arange(0,len(x)):
        ind.append(np.random.choice(len(w),p = pnorm[n,:])) # choose index where y = 1 based on probabilities

    ind = np.array(ind) # recast list as array

    ys = [] # initialize empty list
    for n in np.arange(0,len(x)):
        y = [0] * (len(w)-1) # initialize list of zeros
        y.insert(ind[n],1) # assign value "1" to appropriate index in row
        ys.append(y) # add row to matrix of ys

    y = np.array(ys) # recast list as array

    pyx = np.diagonal(np.dot(pnorm,y.T)) # p(y|x)

    log_pyx = np.log(pyx)

    return log_pyx

# input data
n = 100 # number of data points
C = 2 # number of classes (e.g. turn right, turn left, move forward)
m = 1 # number of features in x (e.g. m = 2 for # of left trials and # of right trials)

x,w = generate_data(n,C,m) # generate random data

log_pyx = logpyx(x,w) # calculate log likelihoods

grad_logpyx = grad(logpyx) # take gradient of log_pyx to find updated weights

print(grad_logpyx(x,w))

So when I do this, everything runs fine until the last line, where I get the error mentioned previously.

I clearly don't understand how to use autograd very well, and I must be passing something in the wrong format, since the error seems to be related to a data type mismatch. Any help would be greatly appreciated!


1 Answer


The issue is that at least one of the inputs to logpyx() (either x or w from generate_data()) contains integers, and autograd can only differentiate with respect to floating-point (or complex) values. Here is some code that replicates your error:

from autograd import grad, numpy as anp

f = lambda x: 100 * (x[1] - x[0]**2) ** 2 + (1 - x[0])**2 # Rosenbrock function
x0 = anp.array([-2, 2]) # integer array, since the literals are ints

grad_f = grad(f)
x1 = grad_f(x0)

TypeError: Can't differentiate w.r.t. type <class 'numpy.int64'>

Change the input to x0 = anp.array([-2., 2.]) (floats instead of ints) and it works.
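
The same fix should apply to your code: np.random.randint returns integer arrays, so one minimal option (a sketch that only addresses this particular dtype error, nothing else in logpyx) is to cast the generated data to floats before differentiating:

x,w = generate_data(n,C,m)
x = x.astype(float) # randint produces integer arrays; autograd needs floats
w = w.astype(float)

grad_logpyx = grad(logpyx)
print(grad_logpyx(x,w))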
