
I am setting up an internal JupyterHub on a multi-GPU server. Jupyter access is provided through a Docker instance. I'd like to limit each user to no more than a single GPU. I'd appreciate any suggestions or comments. Thanks.
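
For reference, the direction I'm considering is to pin one GPU per user from jupyterhub_config.py. This is only an untested sketch assuming DockerSpawner and the nvidia-docker2 runtime; the hook and GPU count below are placeholders, not something I've verified:

# jupyterhub_config.py -- untested sketch, assumes dockerspawner and nvidia-docker2
import hashlib

c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'
c.DockerSpawner.image = 'tensorflow/tensorflow:latest-gpu'
c.DockerSpawner.extra_host_config = {'runtime': 'nvidia'}  # run user containers with the NVIDIA runtime

NUM_GPUS = 4  # placeholder: number of GPUs in the server

def assign_gpu(spawner):
    # map each user to exactly one GPU index, e.g. by hashing the username
    gpu = int(hashlib.md5(spawner.user.name.encode()).hexdigest(), 16) % NUM_GPUS
    spawner.environment['NVIDIA_VISIBLE_DEVICES'] = str(gpu)

c.Spawner.pre_spawn_hook = assign_gpu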

venergiac
Dinesh K.
  • I don't think Docker has much say in this. If you can limit your app to a single GPU without Docker, you should also be able to do it with Docker. – Salem Feb 17 '17 at 20:06
  • I can't control the apps people are going to run. This is supposed to be a teaching machine. I want to limit the resources available to a single user, to prevent any abuse. – Dinesh K. Feb 18 '17 at 20:16

3 Answers


You can try it with nvidia-docker-compose:

version: "2"
services:
  process1:
    image: nvidia/cuda
    devices:
      - /dev/nvidia0
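
Note that a CUDA application inside the container generally also needs the control devices, not only /dev/nvidia0 (they are visible in the /dev listing of the next answer). A slightly fuller sketch, assuming the same device paths exist on the host, would be:

version: "2"
services:
  process1:
    image: nvidia/cuda
    devices:
      - /dev/nvidia0
      - /dev/nvidiactl
      - /dev/nvidia-uvm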
opHASnoNAME

The problem can also be solved by setting the environment variable NV_GPU in front of the nvidia-docker command, as follows:


 [root@bogon ~]# NV_GPU='4,5' nvidia-docker run -dit --name tf_07 tensorflow/tensorflow:latest-gpu /bin/bash
e04645c2d7ea658089435d64e72603f69859a3e7b6af64af005fb852473d6b56
[root@bogon ~]# docker attach tf_07
root@e04645c2d7ea:/notebooks#
root@e04645c2d7ea:/notebooks# ll /dev
total 4
drwxr-xr-x  5 root root      460 Dec 29 03:52 ./
drwxr-xr-x 22 root root     4096 Dec 29 03:52 ../
crw--w----  1 root tty  136,   0 Dec 29 03:53 console
lrwxrwxrwx  1 root root       11 Dec 29 03:52 core -> /proc/kcore
lrwxrwxrwx  1 root root       13 Dec 29 03:52 fd -> /proc/self/fd/
crw-rw-rw-  1 root root   1,   7 Dec 29 03:52 full
drwxrwxrwt  2 root root       40 Dec 29 03:52 mqueue/
crw-rw-rw-  1 root root   1,   3 Dec 29 03:52 null
crw-rw-rw-  1 root root 245,   0 Dec 29 03:52 nvidia-uvm
crw-rw-rw-  1 root root 245,   1 Dec 29 03:52 nvidia-uvm-tools
crw-rw-rw-  1 root root 195,   4 Dec 29 03:52 nvidia4
crw-rw-rw-  1 root root 195,   5 Dec 29 03:52 nvidia5
crw-rw-rw-  1 root root 195, 255 Dec 29 03:52 nvidiactl
lrwxrwxrwx  1 root root        8 Dec 29 03:52 ptmx -> pts/ptmx
drwxr-xr-x  2 root root        0 Dec 29 03:52 pts/
crw-rw-rw-  1 root root   1,   8 Dec 29 03:52 random
drwxrwxrwt  2 root root       40 Dec 29 03:52 shm/
lrwxrwxrwx  1 root root       15 Dec 29 03:52 stderr -> /proc/self/fd/2
lrwxrwxrwx  1 root root       15 Dec 29 03:52 stdin -> /proc/self/fd/0
lrwxrwxrwx  1 root root       15 Dec 29 03:52 stdout -> /proc/self/fd/1
crw-rw-rw-  1 root root   5,   0 Dec 29 03:52 tty
crw-rw-rw-  1 root root   1,   9 Dec 29 03:52 urandom
crw-rw-rw-  1 root root   1,   5 Dec 29 03:52 zero
root@e04645c2d7ea:/notebooks#
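
Note that only the selected devices (nvidia4 and nvidia5 above) show up under /dev inside the container. For the single-GPU-per-user case from the question, the same pattern with a single index would be (the container name here is just an example):

NV_GPU=0 nvidia-docker run -dit --name user01 tensorflow/tensorflow:latest-gpu /bin/bash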

Or read the nvidia-docker wiki on GitHub.

shouhuxianjian

There are three options.

Docker with the NVIDIA runtime (version 2.0.x)

According to the official documentation:

docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=2,3 nvidia/cuda

nvidia-docker (version 1.0.x)

Based on a popular post:

nvidia-docker run .... -e CUDA_VISIBLE_DEVICES=0,1,2

(this works with TensorFlow)

Programmatically

import os
os.environ["CUDA_VISIBLE_DEVICES"]="0,1,2"
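
For the single-GPU-per-user case from the question, the variable has to be set before TensorFlow (or any other CUDA library) initializes the GPUs, so the usual pattern is to set it before the import. A minimal sketch, assuming TensorFlow as above:

import os
# expose only the first GPU to this process; set this before TensorFlow is imported
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import tensorflow as tf  # TensorFlow now sees a single GPU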
venergiac