
My GPU is an RTX 2070. I have followed every step from https://github.com/rapidsai/cudf (I used the instructions "for CUDA 10.1"), but no luck: I can't use my GPU. I have also reinstalled the Ubuntu OS and the drivers many times. Does anyone know how to solve this problem? I have been stuck at this step for a few months. Appreciate it!

OS: Ubuntu 16.04
Driver version: 430.64
CUDA Version: 10.1
python=3.6
cudf==0.13.0
These versions are compatible according to the link above, so why can't I run code on my GPU? Every time I run my code in the terminal, it shows this error:

Traceback (most recent call last):
  File "/home/user/Documents/test.py", line 5, in <module>
    import cudf
  File "/home/user/miniconda3/lib/python3.6/site-packages/cudf/__init__.py", line 7, in <module>
    from cudf import core, datasets
  File "/home/user/miniconda3/lib/python3.6/site-packages/cudf/core/__init__.py", line 3, in <module>
    from cudf.core import buffer, column
  File "/home/user/miniconda3/lib/python3.6/site-packages/cudf/core/column/__init__.py", line 1, in <module>
    from cudf.core.column.categorical import CategoricalColumn  # noqa: F401
  File "/home/user/miniconda3/lib/python3.6/site-packages/cudf/core/column/categorical.py", line 11, in <module>
    import cudf._libxx as libcudfxx
  File "/home/user/miniconda3/lib/python3.6/site-packages/cudf/_libxx/__init__.py", line 5, in <module>
    from . import (
  File "cudf/_libxx/aggregation.pxd", line 9, in init cudf._libxx.reduce
  File "cudf/_libxx/aggregation.pyx", line 11, in init cudf._libxx.aggregation
  File "/home/user/miniconda3/lib/python3.6/site-packages/cudf/utils/cudautils.py", line 7, in <module>
    from numba import cuda, numpy_support
ImportError: cannot import name 'numpy_support'

Code that I run:

import cupy as cp
import cudf
import pandas as pd
import glob

for f in glob.glob("/home/user/Documents/btc_test.csv"):
    data=cudf.read_csv(f)
    num=data.iloc[1:5]['low']
    numcp=cp.log(num)
    print(numcp)

2 Answers


I had the same error. The command below worked for me in an Anaconda environment with Python 3.6. I also have CUDA 10.1 installed, so make sure you substitute your own installed CUDA version.

conda install -c rapidsai -c nvidia -c conda-forge \
    -c defaults cudf=0.14 python=3.6 cudatoolkit=10.1

Reference: https://rapids.ai/start.html
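
To confirm the install actually works on the GPU, a minimal test like the following should run without the import error (the DataFrame contents here are just an example):

import cudf

gdf = cudf.DataFrame({"a": [1.0, 2.0, 3.0]})
print(gdf["a"].mean())  # computed on the GPU; should print 2.0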

– MZe

Thanks for the answer. I have found out that every time you want to use the GPU instead of the CPU for processing, you first have to activate the environment with this command:

source activate dask-cudf
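
A quick way to confirm the right environment is active before running a script (a minimal check; it assumes the environment is named dask-cudf, as above):

import sys
print(sys.executable)  # should point at .../envs/dask-cudf/bin/python

import cudf  # only importable when the dask-cudf environment is active
print(cudf.__version__)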

– Steve
  • Steve, that doesn't sound right. The RAPIDS libraries are where the GPU acceleration magic happens. It sounds like you installed RAPIDS in a separate conda environment instead of in base, which works fine if that is what you wanted... but it sounds like it wasn't... – TaureanDyerNV Aug 24 '20 at 21:41
  • @TaureanDyerNV thank you for pointing out the problem. Yes, every time I have to type the command above in order to use the GPU. So now I have built two environments...? Does that affect performance or anything else? – Steve Aug 25 '20 at 14:30