My desktop has two GPUs, which TensorFlow can target with the device specifications `/gpu:0` or `/gpu:1`. However, if I don't specify which GPU to use, TensorFlow defaults to `/gpu:0`, as we all know.
Now I would like to set up the system so that it assigns a GPU dynamically according to each GPU's free memory. For example, if a script doesn't specify which GPU to use, the system first assigns `/gpu:0` to it; when another script starts, the system checks whether `/gpu:0` still has enough free memory. If so, it assigns `/gpu:0` to that script as well; otherwise it assigns `/gpu:1`. How can I achieve this?
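For context, here is a minimal sketch of the kind of logic I have in mind, assuming `nvidia-smi` is on the PATH (the `pick_gpu` helper and the 1024 MiB threshold are just names and values I made up for illustration). The idea is to query free memory per GPU before TensorFlow is imported and expose only the chosen GPU via `CUDA_VISIBLE_DEVICES`:

```python
import os
import subprocess


def free_memory_per_gpu():
    """Return free memory in MiB for each GPU, queried via nvidia-smi.

    Assumes nvidia-smi is installed and on the PATH.
    """
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.free",
         "--format=csv,noheader,nounits"]).decode()
    return [int(line) for line in out.strip().splitlines()]


def pick_gpu(free_mem, threshold_mib=1024):
    """Prefer GPU 0 if it has enough free memory; otherwise fall back
    to the GPU with the most free memory.  threshold_mib is an
    arbitrary cutoff chosen for this sketch.
    """
    if free_mem[0] >= threshold_mib:
        return 0
    return max(range(len(free_mem)), key=lambda i: free_mem[i])


if __name__ == "__main__":
    # CUDA_VISIBLE_DEVICES must be set before TensorFlow is imported;
    # afterwards TensorFlow sees only the chosen GPU, as /gpu:0.
    gpu_index = pick_gpu(free_memory_per_gpu())
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_index)
    import tensorflow as tf  # noqa: E402
```

But this per-process trick has to run inside every script, which is why I am asking whether the assignment can be done at the system level instead.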
Follow-up: I believe the question above may be related to GPU virtualization. That is to say, if I could virtualize the multiple GPUs in a desktop into a single GPU, I would get what I want. So besides any setup methods for TensorFlow, any ideas about virtualization are also welcome.