
I see zero difference in PyMC3 speed when using GPU vs. CPU.

I am fitting a model that requires 500K+ samples to converge. Obviously this is very slow, so I tried to speed things up with a GPU (using a GPU instance on EC2). Theano reports that it is using the GPU, so I believe CUDA/Theano are configured correctly. However, I strongly suspect that PyMC3 is not utilising the GPU.
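For reference, what convinces me that Theano itself sees the GPU is essentially the test script from the Theano documentation (lightly adapted here; the array size is the docs' arbitrary choice):

```python
import time
import numpy
from theano import function, config, shared, tensor

vlen = 10 * 30 * 768  # arbitrary size, as in the Theano docs' GPU test
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], tensor.exp(x))
# GpuElemwise ops appear in the graph when the GPU is actually used
print(f.maker.fgraph.toposort())

t0 = time.time()
for i in range(iters):
    r = f()
t1 = time.time()
print('Looping %d times took %f seconds' % (iters, t1 - t0))

if numpy.any([isinstance(node.op, tensor.Elemwise)
              for node in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')
```

This prints "Used the gpu" on my instance, which is why I believe the CUDA/Theano side is fine.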

  • Do I need to set my variables to `TensorType(float32, scalar)` explicitly? Currently they are float64.
  • Can only some samplers/likelihoods benefit from CUDA? I am fitting a Poisson-based model and so am using the Metropolis sampler, not NUTS (a stripped-down sketch of the model follows this list).
  • Is there a way to check that PyMC3 is using the GPU?
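For concreteness, here is a stripped-down sketch of the kind of model I am fitting; the data, the prior, and the sample count below are placeholders rather than my real setup. Is forcing `floatX` to float32 like this the right approach?

```python
import numpy as np
import theano

# the old Theano GPU backend only runs float32 kernels, so set floatX
# before building the model (equivalently, set
# THEANO_FLAGS='device=gpu,floatX=float32' in the environment)
theano.config.floatX = 'float32'

import pymc3 as pm

# placeholder count data -- the real observations are much larger
counts = np.random.poisson(5.0, size=1000)

with pm.Model() as model:
    mu = pm.Exponential('mu', lam=1.0)   # placeholder prior
    y = pm.Poisson('y', mu=mu, observed=counts)
    trace = pm.sample(500000, step=pm.Metropolis())
```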
volodymyr
  • I have just run the example of a [linear regression in the PyMC3 web page](https://pymc-devs.github.io/pymc3/getting_started/#a-motivating-example-linear-regression) and could not find any acceleration either. I think we need information on how to set up models so that Theano will use GPUs (or multiple threads). – Ramon Crehuet Jan 26 '16 at 15:56
  • Curious what makes you think the GPU is not being used. Are you able to run, e.g., an nvidia-smi command to check usage? I've had some success with speedup using the GPU, but not for large problems. It sounds like there is still work to be done for pymc3 to exploit the GPU, per http://andrewgelman.com/2015/10/15/whats-the-one-thing-you-have-to-know-about-pystan-and-pymc-click-here-to-find-out/#comment-247543 – inversion Jan 27 '16 at 21:52
  • We have tested with nvidia-smi – the GPU is not being used while the sampler runs. Theano reports that it is using the GPU, however. – volodymyr Jan 29 '16 at 18:46
  • On the Theano 0.8.0 documentation (http://deeplearning.net/software/theano/) it says: *transparent use of a GPU – Perform data-intensive calculations up to 140x faster than with CPU (float32 only)*. Have you tried setting the data type to float32, as you say in your question? – Luciano Mar 31 '16 at 13:52

0 Answers