I am trying to run an ML model on Google Cloud ML. I am using PyTorch and want to use the GPU. With the standard Python 3.6 installation on the Google Cloud VM, I get the error described below, and I tried solving it by upgrading to Python 3.7, but that version does not recognize the GPU that comes with the Google Cloud VM.
Whenever I run my code (which works when run locally) on the Google Cloud VM (with Python 3.6), I get the error
python: symbol lookup error: /home/julsoles/anaconda3/lib/python3.6/site-packages/torch/lib/libtorch_python.so: undefined symbol: PySlice_Unpack
Searching for a solution online, I found that this is an issue with that Python 3.6 build and that the only fix is to upgrade my version of Python.
I was able to upgrade to Python 3.7 on the Google Cloud VM and can run code with the new version using the command python3.7 file.py. Now the issue is that whenever I run code with this version of Python, the VM does not recognize the GPU that comes with the system. I get the error
File "pred.py", line 75, in predict(model_list, test_dataset) File "pred.py", line 28, in predict x = Variable(torch.from_numpy(x).float()).cuda() File "/opt/anaconda3/lib/python3.7/site-packages/torch/cuda/init.py", line 161, in _lazy_init _check_driver() File "/opt/anaconda3/lib/python3.7/site-packages/torch/cuda/init.py", line 75, in _check_driver raise AssertionError("Torch not compiled with CUDA enabled") AssertionError: Torch not compiled with CUDA enabled
Right now, the only workaround I have found is to run my code on the CPU, but it is painfully slow. Is there any way to make Python 3.7 recognize the GPU so that I can run my code on it?
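For context, the CPU-only workaround is essentially a device fallback like the sketch below (the random array is just a stand-in for my real input batch from pred.py):

    import numpy as np
    import torch

    # Use the GPU if this PyTorch build supports it, otherwise fall back to the (slow) CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    x = np.random.rand(4, 3).astype(np.float32)   # stand-in for the real input batch
    x = torch.from_numpy(x).to(device)            # replaces the hard-coded .cuda() call
    print("Running on:", device)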
Thanks for your help!