I am building a TensorFlow environment with JupyterHub (DockerSpawner) for my students in class, but I have run into a problem.
By default, TensorFlow maps nearly all of the GPU memory of all GPUs (subject to CUDA_VISIBLE_DEVICES) visible to the process. This is done to more efficiently use the relatively precious GPU memory resources on the devices by reducing memory fragmentation. (from https://www.tensorflow.org/tutorials/using_gpu)
If anyone in the class runs a Python program on the GPU, the GPU memory is nearly exhausted for everyone else. Because of this, I have to add some limiting code manually, like:
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config, ...)
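One workaround I am considering (I am not sure this is the right place to do it): newer TensorFlow versions (1.14 and later, including 2.x) honour the `TF_FORCE_GPU_ALLOW_GROWTH` environment variable, so the spawner could set it for every container instead of each notebook setting `allow_growth` itself. A sketch, assuming DockerSpawner's standard `environment` trait in `jupyterhub_config.py`:

```python
# jupyterhub_config.py (sketch; assumes DockerSpawner is in use)
# TF_FORCE_GPU_ALLOW_GROWTH=true makes TensorFlow's allocator grow
# GPU memory on demand instead of mapping nearly all of it up front.
c.DockerSpawner.environment = {
    'TF_FORCE_GPU_ALLOW_GROWTH': 'true',
}
```

This only changes the allocation strategy; it does not cap how much memory one student can eventually use.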
But this is not a great solution: I would have to add this snippet to every new notebook my students write.
Is there some JupyterHub configuration that avoids this situation, or another good solution? Please let me know, thanks!
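Another idea I had: bake an IPython startup file into the Docker image, so the limiting takes effect before any student code runs. IPython executes every file in the profile's `startup/` directory at kernel start. A minimal sketch (the exact path below is my assumption and depends on where the profile lives in the image):

```python
# Sketch: a startup file baked into the student image, e.g.
# /etc/ipython/profile_default/startup/00-gpu-limit.py (path is an assumption).
# IPython runs this before any user code in the notebook.
import os

# Ask TensorFlow (>= 1.14) to allocate GPU memory on demand
# instead of mapping nearly all of it at session creation.
os.environ.setdefault('TF_FORCE_GPU_ALLOW_GROWTH', 'true')
```

The advantage over per-notebook code is that students cannot forget it; the disadvantage is that it only applies to kernels started through IPython, not to plain `python` scripts run from a terminal.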