
I had to change my runtime type to GPU in Colab because otherwise the RAM was crashing. However, when I use the GPU I get an error while executing the scipy minimization. The error is as follows:

------Start--------
Traceback (most recent call last):
  File "<ipython-input-8-4ca37ba86fbb>", line 119, in train
    result=minimize(objective,val,constraints=cons,options={"disp":True})
  File "/usr/local/lib/python3.7/dist-packages/scipy/optimize/_minimize.py", line 618, in minimize
    constraints, callback=callback, **options)
  File "/usr/local/lib/python3.7/dist-packages/scipy/optimize/slsqp.py", line 315, in _minimize_slsqp
    for c in cons['ineq']]))
  File "/usr/local/lib/python3.7/dist-packages/scipy/optimize/slsqp.py", line 315, in <listcomp>
    for c in cons['ineq']]))
  File "<ipython-input-8-4ca37ba86fbb>", line 64, in constraint
    return -(A @ v)+alpha   # scipy proves >= for constraints
  File "/usr/local/lib/python3.7/dist-packages/torch/_tensor.py", line 678, in __array__
    return self.numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

------End--------

How do I get rid of this problem? Which tensor do I need to copy to host memory? I have the objective to minimize and a constraint as follows:

import torch
from scipy.optimize import minimize

# Declaring the minimization objective here

def objective(x):
    alpha = x[0]
    v = x[1:]
    vnorm = torch.linalg.vector_norm(v) * torch.linalg.vector_norm(v)  # ||v||^2
    return alpha + (vnorm / 2)

# Declaring the constraint here

def constraint(x):
    alpha = x[0]
    v = x[1:]
    return -(A @ v) + alpha   # scipy requires 'ineq' constraints to be >= 0


cons = {'type': 'ineq', 'fun': constraint}
result = minimize(objective, val, constraints=cons, options={"disp": True})

Jeet

1 Answer


Either the val variable is a torch.Tensor, or it is the matrix A used in the constraint function. If val is a torch.Tensor, compute the result with the following line:

result = minimize(objective, val.cpu().numpy(), constraints=cons, options={"disp" : True})

That way val is transferred to host memory and converted to an ndarray, as the documentation for minimize expects. Converting A to an ndarray (if needed) can be done the same way.
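For example, here is a minimal sketch of that idea. The shapes of A and val are made up for illustration, and the torch norm in the objective is swapped for its numpy equivalent, since scipy passes plain ndarrays to the callbacks once the optimization is running:

import numpy as np
import torch
from scipy.optimize import minimize

# Hypothetical stand-ins for the question's CUDA tensors A and val
A = torch.randn(5, 4, device="cuda" if torch.cuda.is_available() else "cpu")
val = torch.zeros(5, device=A.device)   # [alpha, v1, ..., v4]

# scipy works on numpy ndarrays and has no GPU support,
# so copy both tensors to host memory up front
A_np = A.cpu().numpy()
x0 = val.cpu().numpy()

def objective(x):
    alpha, v = x[0], x[1:]
    return alpha + np.dot(v, v) / 2        # alpha + ||v||^2 / 2

def constraint(x):
    alpha, v = x[0], x[1:]
    return -(A_np @ v) + alpha             # must be >= 0 for an 'ineq' constraint

cons = {'type': 'ineq', 'fun': constraint}
result = minimize(objective, x0, constraints=cons, options={"disp": True})
print(result.x)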

draw
  • Yes, but why do I need to copy the tensor to the CPU? If I do that, my session crashes due to high RAM usage. Can't I execute these tensors on the GPU? Why does scipy not allow that and throw the above error when run on the GPU? – Jeet Apr 23 '22 at 09:59
  • scipy works primarily with numpy, and numpy arrays have no GPU support. So when the optimization algorithm tries to cast a `torch.Tensor` on the GPU to an `ndarray`, it fails. – draw Apr 23 '22 at 10:58
  • It is consuming the complete RAM space. How can I reduce it? The session is failing. Tensor A and val are not very large; the size of A is 88 bytes and val is about the same. – Jeet Apr 23 '22 at 13:52