
I built a Docker container to be deployed on an HPC GPU cluster via Singularity. When I run

cp.show_config()

I get the following output:

OS: Linux-5.4.0-135-generic-x86_64-with-glibc2.31
Python Version: 3.9.12
CuPy Version: 9.6.0
CuPy Platform: NVIDIA CUDA
NumPy Version: 1.21.5
SciPy Version: 1.6.0
Cython Build Version: 0.29.24
Cython Runtime Version: 0.29.28
CUDA Root: /opt/conda/envs/rapids
nvcc PATH: None
CUDA Build Version: 11020
CUDA Driver Version: 11060
CUDA Runtime Version: CUDARuntimeError('cudaErrorNoDevice: no CUDA-capable device is detected')
cuBLAS Version: (available)
cuFFT Version: 10400
cuRAND Version: 10203
cuSOLVER Version: (11, 3, 4)
cuSPARSE Version: (available)
NVRTC Version: (11, 2)
Thrust Version: 101000
CUB Build Version: 101000
Jitify Build Version: 65946d2
cuDNN Build Version: None
cuDNN Version: None
NCCL Build Version: 21104
NCCL Runtime Version: 21210
cuTENSOR Version: None
cuSPARSELt Build Version: None
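
For reference, the same check can be run through Singularity on the HPC node like this (a sketch only; `rapids.sif` is a placeholder image name):

# --nv binds the host NVIDIA driver stack into the container
singularity exec --nv rapids.sif nvidia-smi

# Ask CuPy how many devices it can see from inside the container
singularity exec --nv rapids.sif python -c "import cupy as cp; print(cp.cuda.runtime.getDeviceCount())"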

This is my Dockerfile:

# Pulls the basic Image from NVIDIA repository
FROM rapidsai/rapidsai:22.04-cuda11.2-runtime-ubuntu20.04-py3.9

# Install the CUDA toolkit via apt
RUN apt-get update
RUN apt-get install -y cuda-toolkit-11.2

# Install OS packages needed for the examples
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
    --fix-missing git python3-setuptools python3-pip build-essential libcurl4-gnutls-dev \
    zlib1g-dev rsync vim cmake tabix && \
    apt-get clean

# Install cuDNN via conda
RUN conda install --yes -c conda-forge cudnn=8.0.5.39

# Adding env directory to path and activate rapids env
ENV PATH /opt/conda/envs/rapids/bin:$PATH
RUN /bin/bash -c "source activate rapids"

# Install libraries needed in the examples
RUN pip install \
    scanpy==1.9.1 wget pytabix dash-daq \
    dash-html-components dash-bootstrap-components dash-core-components \
    pytest utils tensorflow

RUN pip install --upgrade tensorflow-gpu

WORKDIR /workspace
ENV HOME /workspace

RUN mkdir -p /.singularity.d/env
RUN echo "#!/usr/bin/env bash" >  /.singularity.d/env/99-custom_prompt.sh
RUN echo 'PS1="[${SINGULARITY_NAME%.*}]\u@\h:\w\$ "' >>  /.singularity.d/env/99-custom_prompt.sh
RUN conda install batchspawner
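
For completeness, a typical way to build this image and run it through Singularity on a GPU node would be (a sketch; the image tag and SIF name are placeholders):

# Build the Docker image locally
docker build -t rapids-hpc:latest .

# Convert the local Docker image into a Singularity image file
singularity build rapids-hpc.sif docker-daemon://rapids-hpc:latest

# Run on a GPU node; --nv passes the host GPU driver through to the container
singularity exec --nv rapids-hpc.sif python -c "import cupy as cp; cp.show_config()"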
  • What happens if you do `cupy.show_config()` in the base image `rapidsai/rapidsai:22.04-cuda11.2-runtime-ubuntu20.04-py3.9`? – kmaehashi Dec 16 '22 at 02:16

1 Answer


I hope this helps other members. Compared with my original Dockerfile, this one adds the NVIDIA apt repository and installs pinned cuDNN packages (libcudnn8) from there instead of through conda:

# Pulls the basic Image from NVIDIA repository
FROM rapidsai/rapidsai:22.04-cuda11.2-runtime-ubuntu20.04-py3.9

# Install the CUDA toolkit via apt
RUN apt-get update
RUN apt-get install -y software-properties-common \
    cuda-toolkit-11.2 \
    python3-setuptools

# Add Nvidia cudnn repository
ENV OS=ubuntu2004
RUN wget https://developer.download.nvidia.com/compute/cuda/repos/${OS}/x86_64/cuda-${OS}.pin
RUN mv cuda-${OS}.pin /etc/apt/preferences.d/cuda-repository-pin-600
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/${OS}/x86_64/7fa2af80.pub
RUN add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/${OS}/x86_64/ /"
RUN apt-get update

# Pin the CUDA and cuDNN versions
ENV cudnn_version=8.1.1.33
ENV cuda_version=cuda11.2

# Install CUDNN
RUN apt-get install -y libcudnn8=${cudnn_version}-1+${cuda_version}
RUN apt-get install -y libcudnn8-dev=${cudnn_version}-1+${cuda_version}

# Adding env directory to path and activate rapids env
ENV PATH /opt/conda/envs/rapids/bin:$PATH
RUN /bin/bash -c "source activate rapids"

# Install libraries needed in the examples
RUN pip install \
    scanpy==1.9.1 \
    pytabix \
    dash-daq \
    dash-html-components \
    dash-bootstrap-components \
    dash-core-components \
    pytest \
    utils \
    tensorflow

#
WORKDIR /workspace
ENV HOME /workspace
#
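
To verify that the pinned cuDNN packages actually ended up in the final image, something like this can be run through Singularity (a sketch; `rapids-hpc.sif` is a placeholder name):

# List the installed cuDNN packages inside the container
singularity exec rapids-hpc.sif dpkg -l | grep libcudnn8

# Confirm the cuDNN shared library is visible to the dynamic loader
singularity exec rapids-hpc.sif ldconfig -p | grep libcudnn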