Hi, I have the following function:

f(x) = sum_{i=1}^{5000} [ -log(1 - x_i^2) - log(1 - a_i^T x) ]

where each a_i is a random vector, and I am trying to minimize this function via Newton's method.
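If I have derived it correctly (this is my own working, treating a_i as the i-th row of the matrix a in the code below, and delta_{kl} as the Kronecker delta), the gradient and Hessian should have a closed form:

(\nabla f(x))_k = \frac{2 x_k}{1 - x_k^2} + \sum_{i=1}^{5000} \frac{a_{ik}}{1 - a_i^\top x}

(\nabla^2 f(x))_{kl} = \frac{2 (1 + x_k^2)}{(1 - x_k^2)^2}\,\delta_{kl} + \sum_{i=1}^{5000} \frac{a_{ik}\, a_{il}}{(1 - a_i^\top x)^2}

In matrix form the Hessian would be diag(2(1 + x_k^2)/(1 - x_k^2)^2) + A^T D A with D = diag(1/(1 - A x)^2), which is what the numpy sketch further down is based on. Please correct me if that derivation is wrong.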
I need a way to calculate the Hessian matrix with respect to (x1, x2, x3, ...). I tried autograd, but it took too much time. Here is my current code:
from autograd import elementwise_grad as egrad
from autograd import jacobian
import autograd.numpy as np

x = np.zeros(5000)
a = np.random.rand(5000, 5000)

def f(x):
    # sum_i -log(1 - x_i^2) - log(1 - a_i^T x)
    total = 0
    for i in range(5000):
        total += -np.log(1 - x[i] * x[i]) - np.log(1 - np.dot(x, a[i]))
    return total

df = egrad(f)         # gradient of f
d2f = jacobian(df)    # Hessian = Jacobian of the gradient
print(d2f(x))
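Based on the closed form above (which I have not verified carefully), I also sketched a plain numpy version that builds the gradient and Hessian directly; grad_hess is just a helper name I made up, and I am not sure this is the right way to do it:

import numpy as np

def grad_hess(x, a):
    # hypothetical helper: closed-form gradient and Hessian of
    # f(x) = sum_i -log(1 - x_i^2) - log(1 - a_i^T x), with a_i the i-th row of a
    r = 1.0 - a @ x                    # r_i = 1 - a_i^T x  (must stay > 0)
    s = 1.0 - x**2                     # 1 - x_k^2          (must stay > 0)
    grad = 2.0 * x / s + a.T @ (1.0 / r)
    H = a.T @ (a / r[:, None]**2)      # sum_i a_i a_i^T / r_i^2
    H[np.diag_indices_from(H)] += 2.0 * (1.0 + x**2) / s**2
    return grad, H

g, H = grad_hess(np.zeros(5000), a)    # a as defined above
step = np.linalg.solve(H, -g)          # Newton direction

If that is correct, is it the recommended approach, or is there a way to make autograd itself fast enough (for example by vectorizing f instead of looping over i)?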
I have also tried looking into sympy, but I am confused about how to proceed.
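The only sympy approach I can see from the docs is to build the expression symbolically and call sympy.hessian on it. The toy sketch below uses 3 variables and a made-up 3x3 matrix instead of the real 5000x5000 a, so I doubt it scales:

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
xs = sp.Matrix([x1, x2, x3])
A = sp.Matrix([[0.1, 0.2, 0.0],        # toy stand-in for the random matrix a
               [0.0, 0.1, 0.3],
               [0.2, 0.0, 0.1]])

# same objective as above, just with 3 terms instead of 5000
f_expr = sum(-sp.log(1 - xs[i]**2) - sp.log(1 - (A.row(i) * xs)[0])
             for i in range(3))

H_sym = sp.hessian(f_expr, (x1, x2, x3))           # symbolic 3x3 Hessian
H_num = sp.lambdify((x1, x2, x3), H_sym, 'numpy')  # numeric callable
print(H_num(0.0, 0.0, 0.0))

Is there any point in pushing sympy to 5000 variables, or should I stick with a numeric approach?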