
I am building a TensorFlow environment with JupyterHub (DockerSpawner) for my students in class, but I am facing a problem with this.

By default, TensorFlow maps nearly all of the GPU memory of all GPUs (subject to CUDA_VISIBLE_DEVICES) visible to the process. This is done to more efficiently use the relatively precious GPU memory resources on the devices by reducing memory fragmentation. (from https://www.tensorflow.org/tutorials/using_gpu)

If anyone in the class runs a Python program that uses the GPU, the GPU memory is nearly exhausted. Because of this, I need to add some limiting code manually, like:

import tensorflow as tf

# Let the session allocate GPU memory on demand rather than all at once.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)

But this is not a great solution: I have to add this code to every new notebook or script.

Can JupyterHub add some configuration to avoid this situation (something like the sketch below), or is there some other good solution? Please let me know, thanks!
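What I have in mind is roughly the following in jupyterhub_config.py. This is only an untested sketch of the idea: I am assuming that DockerSpawner's environment setting injects variables into every spawned container, and that the TensorFlow build in the image is recent enough (1.14+) to honor the TF_FORCE_GPU_ALLOW_GROWTH variable.

# jupyterhub_config.py -- untested sketch
c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'

# Variables listed here would be injected into every spawned notebook
# container, so students would not have to add session-config code themselves.
c.DockerSpawner.environment = {
    # Assumption: the TensorFlow version in the image reads this variable
    # (1.14 and later) and enables allow_growth globally.
    'TF_FORCE_GPU_ALLOW_GROWTH': 'true',
}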

Jim Su

1 Answer

import tensorflow as tf

# Limit this TensorFlow process to 20% of the GPU's memory.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.2)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

This works well.

rayjang
  • It is considered good etiquette to explain a little about what you changed or what your code does. – Neil Nov 15 '17 at 07:38
  • By default, TensorFlow allocates all of your GPU memory automatically. – rayjang Nov 15 '17 at 07:42
  • So I limit GPU memory usage with the 'fraction' configuration. In my code, I allocate 20% of the GPU memory to TensorFlow; in my case that is about 2 GB. – rayjang Nov 15 '17 at 07:44