I had to change my runtime type to GPU in Colab because otherwise the session kept running out of RAM. However, with the GPU runtime I get an error while executing the SciPy minimization. The error is as follows:
------Start--------
Traceback (most recent call last):
File "<ipython-input-8-4ca37ba86fbb>", line 119, in train
result=minimize(objective,val,constraints=cons,options={"disp":True})
File "/usr/local/lib/python3.7/dist-packages/scipy/optimize/_minimize.py", line 618, in minimize
constraints, callback=callback, **options)
File "/usr/local/lib/python3.7/dist-packages/scipy/optimize/slsqp.py", line 315, in _minimize_slsqp
for c in cons['ineq']]))
File "/usr/local/lib/python3.7/dist-packages/scipy/optimize/slsqp.py", line 315, in <listcomp>
for c in cons['ineq']]))
File "<ipython-input-8-4ca37ba86fbb>", line 64, in constraint
return -(A @ v)+alpha # scipy proves >= for constraints
File "/usr/local/lib/python3.7/dist-packages/torch/_tensor.py", line 678, in __array__
return self.numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
------End--------
How do I get rid of this problem? Which tensor do I need to copy to host memory? The objective I am minimizing and its constraint are as follows:
# Declaring the minimization equation here
def objective(x):
    alpha = x[0]
    v = x[1:len(x)]
    vnorm = torch.linalg.vector_norm(v) ** 2
    return alpha + (vnorm / 2)
# Declaring the constraint here
def constraint(x):
    alpha = x[0]
    v = x[1:len(x)]
    return -(A @ v) + alpha  # scipy requires >= 0 for 'ineq' constraints

cons = {'type': 'ineq', 'fun': constraint}
result = minimize(objective, val, constraints=cons, options={"disp": True})
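For reference, here is a minimal self-contained sketch of the kind of conversion I believe the error message is asking for: SciPy hands the callbacks a NumPy array and expects NumPy (or plain float) results back, so any CUDA tensor in the computation has to be moved to host memory with `.cpu()` before SciPy touches it. The `A`, `val`, and dimensions below are hypothetical stand-ins for my actual data, which I have not shown:

    import numpy as np
    import torch
    from scipy.optimize import minimize

    # Stand-in for the real A; in my code it lives on cuda:0 when available.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    A = torch.randn(5, 3, dtype=torch.float64, device=device)

    def objective(x):
        # x arrives from SciPy as a NumPy array; wrap it as a CPU tensor.
        alpha = x[0]
        v = torch.as_tensor(x[1:], dtype=torch.float64)
        vnorm = torch.linalg.vector_norm(v) ** 2
        return float(alpha + vnorm / 2)  # plain float back to SciPy

    def constraint(x):
        alpha = x[0]
        # Move v onto A's device so the matmul is legal there...
        v = torch.as_tensor(x[1:], dtype=torch.float64, device=A.device)
        # ...then copy the result to host memory before returning it to SciPy.
        return (-(A @ v) + alpha).cpu().numpy()

    cons = {"type": "ineq", "fun": constraint}
    val = np.zeros(1 + A.shape[1])  # hypothetical initial guess: [alpha, v...]
    result = minimize(objective, val, constraints=cons, options={"disp": True})

Is explicitly converting at the boundary like this the right approach, or is there a way to keep everything on the GPU?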