I have an AWS machine with 4 GPUs:
00:03.0 VGA compatible controller: NVIDIA Corporation GK104GL [GRID K520] (rev a1)
00:04.0 VGA compatible controller: NVIDIA Corporation GK104GL [GRID K520] (rev a1)
00:05.0 VGA compatible controller: NVIDIA Corporation GK104GL [GRID K520] (rev a1)
00:06.0 VGA compatible controller: NVIDIA Corporation GK104GL [GRID K520] (rev a1)
and my .theanorc file looks like this:
[global]
floatX = float32
device = gpu0
[lib]
cnmem = 1
When I open one Jupyter notebook and import Theano, I get the following (which I assume means it is only using one GPU):
Using Theano backend.
Using gpu device 0: GRID K520 (CNMeM is enabled with initial size: 95.0% of memory, cuDNN 5105)
/home/sabeywardana/anaconda3/lib/python3.5/site-packages/theano/sandbox/cuda/__init__.py:600: UserWarning: Your cuDNN version is more recent than the one Theano officially supports. If you see any problems, try updating Theano or downgrading cuDNN to version 5.
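As a sanity check (assuming theano.config.device and theano.config.lib.cnmem are the right attributes to look at in this Theano version), this is roughly how I confirm what the first notebook picked up:

import theano
# Inspect the device and CNMeM setting Theano loaded from .theanorc
print(theano.config.device)     # should print 'gpu0' with the config above
print(theano.config.lib.cnmem)  # should print 1.0, i.e. CNMeM pre-allocates the memory pool up front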
However, if I open a second Jupyter notebook on the same machine at the same time, I get the error:
ERROR (theano.sandbox.cuda): ERROR: Not using GPU. Initialisation of device 0 failed:
initCnmem: cnmemInit call failed! Reason=CNMEM_STATUS_OUT_OF_MEMORY. numdev=1
ERROR (theano.sandbox.cuda): ERROR: Not using GPU. Initialisation of device gpu failed:
initCnmem: cnmemInit call failed! Reason=CNMEM_STATUS_OUT_OF_MEMORY. numdev=1
If I manually change my .theanorc to use gpu1, the second Jupyter notebook works fine. So the question is: is there a way to configure .theanorc to just pick whichever GPU is available?
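For reference, the manual workaround is roughly equivalent to overriding the device per process with the THEANO_FLAGS environment variable before Theano is imported (a sketch of what I do in the second notebook; the gpu1 value is still hard-coded by hand, which is exactly what I'd like to avoid):

import os
# Hard-coded per-notebook override; must be set before Theano is imported
os.environ['THEANO_FLAGS'] = 'device=gpu1,floatX=float32,lib.cnmem=1'
import theano  # now initialises gpu1 instead of gpu0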