I have a node with two Tesla P100 GPUs.
When I run `cuml.tsa.ARIMA` (or ESM), it only utilises one of the GPUs. Is there a way to use multiple GPUs to train these models, similar to dask-xgboost in RAPIDS?
Not according to the cuML documentation. For multi-node, multi-GPU support, check the Notes column in the table of supported algorithms.
A multi-node, multi-GPU implementation of the time series models (ARIMA and HoltWinters) is not available. I would recommend opening a feature request on cuML's GitHub repository (https://github.com/rapidsai/cuml).
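
In the meantime, if your workload consists of many *independent* series, one workaround is to split the batch across the GPUs yourself and fit a separate batched ARIMA on each device. Below is a minimal sketch using dask-cuda's `LocalCUDACluster`; the `fit_and_forecast` helper, the column split, and the `(1, 1, 1)` order are illustrative assumptions, not a cuML multi-GPU API.

```python
import numpy as np
import pandas as pd
import cudf
from cuml.tsa.arima import ARIMA
from dask.distributed import Client
from dask_cuda import LocalCUDACluster


def fit_and_forecast(chunk, order=(1, 1, 1), nsteps=10):
    # Runs on one GPU worker: move this chunk of series onto the GPU,
    # fit a single batched ARIMA over its columns, and forecast.
    gdf = cudf.DataFrame.from_pandas(chunk)
    model = ARIMA(gdf, order=order, output_type="numpy")
    model.fit()
    return model.forecast(nsteps)  # shape (nsteps, n_series_in_chunk)


if __name__ == "__main__":
    # Toy batch of 8 independent random-walk series, 200 observations each.
    data = pd.DataFrame(np.random.randn(200, 8).cumsum(axis=0))

    cluster = LocalCUDACluster()  # one worker per visible GPU
    client = Client(cluster)
    workers = list(client.scheduler_info()["workers"])

    # Pin half of the series to each GPU worker and fit in parallel.
    chunks = [data.iloc[:, :4], data.iloc[:, 4:]]
    futures = [
        client.submit(fit_and_forecast, chunk, workers=[w], pure=False)
        for chunk, w in zip(chunks, workers)
    ]
    forecasts = client.gather(futures)
    print(np.hstack(forecasts))  # combined forecasts, shape (10, 8)
```

Note that this is data parallelism over independent series rather than true multi-GPU training of a single model, so it only helps when you have many series to fit.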