I am using Python 3 with NVIDIA RAPIDS in order to speed up machine learning training with the cuML library on a GPU.
My script also uses Keras for GPU training (on top of TF), and when I reach the stage where I try to use cuML I get a memory error. I suspect this is happening because TF does not release the GPU memory: looking at nvidia-smi, I see that all of the memory is allocated.
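As far as I understand, TF grabs nearly all of the GPU memory up front by default unless it is told to allocate on demand, which would explain what nvidia-smi shows. I have not set anything like this in my script; this is roughly the knob I mean (TF 2.x API, simplified sketch), though I assume it has to run before the Keras training starts, so I'm not sure it helps once the memory is already taken:

import tensorflow as tf

# Sketch only: ask TF to allocate GPU memory on demand instead of
# reserving nearly all of it at startup. Must run before any op or
# model touches the GPU.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)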
This is the code I use to train the cuML model:
import cuml
from cuml import LinearRegression
lr = LinearRegression()
lr.fit(encoded_data, y_train)
This is the error I get:
[2] Call to cuMemAlloc results in CUDA_ERROR_OUT_OF_MEMORY
encoded_data and y_train are NumPy arrays; encoded_data is an n*m array of floats, and y_train is an n*1 vector of integer labels. Both work fine when training a sklearn logistic regression.
How can I either:
1. Use the same GPU (preferred) without losing the TF models I have already trained? (I have more free memory than the TF models actually need, but the TF process is still holding all of it.)
2. Use my second GPU for the cuML calculations? (I can't find a way to select which GPU the RAPIDS cuML model trains on.)
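For option 2, I assume that hiding the first GPU from the process (for example with CUDA_VISIBLE_DEVICES, set before cuml or any other CUDA library is imported) would steer cuML to the second card, but since TF and cuML run in the same script I don't see how to keep TF on GPU 0 and cuML on GPU 1 at the same time. This is the kind of thing I have in mind (untested sketch, which would presumably only work if the cuML part ran as a separate script/process):

import os

# Sketch only: make the second physical GPU the only visible device.
# Must be set before TF, cuML, CuPy or Numba initialize CUDA,
# otherwise it has no effect.
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

import cuml  # the cuML training code from above would follow here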