
Is it possible to make `import mxnet` skip loading CUDA libraries when only CPU inference is intended?

Context:

  • MXNet is installed with CUDA support, and model training is usually run on GPUs.
  • However, inference can be run in parallel on many nodes without a GPU, and it would be nice to reuse the same MXNet build for CPU-only inference (see the sketch below), but some of the CUDA libraries that build depends on are not available on the non-GPU nodes.
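
A minimal sketch of the intended CPU-only inference path (file names and shapes are placeholders): pinning the context to `mx.cpu()` keeps computation off the GPU, but with a CUDA-enabled build `import mxnet` still pulls in the CUDA shared libraries, which is exactly what fails on nodes where they are missing.

```python
import mxnet as mx
from mxnet import gluon

# Run everything on the CPU context, even though the build has CUDA support.
ctx = mx.cpu()

# Placeholder symbol/params files exported from a GPU training run.
net = gluon.nn.SymbolBlock.imports(
    "model-symbol.json", ["data"], "model-0000.params", ctx=ctx)

x = mx.nd.random.uniform(shape=(1, 3, 224, 224), ctx=ctx)
out = net(x)
print(out.shape)
```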
Comments:

  • Can you install `mxnet-mkl` on a different Python environment? Or are you restricted to using exactly the same build? – Thom Lane Apr 04 '19 at 23:48
  • So how much overhead are you seeing? In my experience the overhead is quite small. Or are you restarting the Python process very frequently? – Thom Lane Apr 04 '19 at 23:50
  • Check out the `MXNET_CUDNN_AUTOTUNE_DEFAULT` environment variable if autotune is causing your overhead. – Thom Lane Apr 04 '19 at 23:53
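
Following the comments above, two workarounds can be sketched (assuming the same inference code is reused): a separate Python environment with the CPU-only package (`pip install mxnet-mkl`) for the non-GPU nodes, which avoids loading CUDA libraries altogether, and, if the concern is startup overhead on GPU nodes rather than missing libraries, disabling cuDNN autotuning before MXNet is imported:

```python
import os

# Disable cuDNN convolution autotuning, as suggested in the comments.
# This only helps when the overhead comes from autotune on GPU nodes;
# it does not make a CUDA build importable where CUDA libraries are absent.
os.environ["MXNET_CUDNN_AUTOTUNE_DEFAULT"] = "0"

import mxnet as mx

# Inference itself can still be pinned to the CPU context.
ctx = mx.cpu()
```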

0 Answers