I had been using a VM with a T4 GPU, 8 vCPUs, and 30 GB of RAM for more than a month without any issues, running deep learning training jobs on it.
But since yesterday I have been training some new models on a larger dataset, and I noticed that training starts out fast (as it always did), then after one or two epochs the GPU slows down considerably, taking about five times longer per step!
When I checked GPU usage with nvidia-smi, I noticed that the power draw initially fluctuates around 70 W (the card's limit), but once the slowdown starts it hovers around only 49 W.
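For reference, this is roughly how I watched the card while training (standard nvidia-smi flags; exact field availability may vary by driver version):

```shell
# Log power draw, temperature, SM clock, and active throttle reasons once per second
nvidia-smi --query-gpu=timestamp,power.draw,temperature.gpu,clocks.sm,clocks_throttle_reasons.active \
           --format=csv -l 1

# One-shot dump of the detailed throttle flags (SW Power Cap, HW Slowdown, Thermal, etc.)
nvidia-smi -q -d PERFORMANCE
```

The throttle-reason flags in the second command should show whether the clocks are being capped for power, thermal, or hardware reasons when the slowdown kicks in.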
It seems that the VM automatically throttles the GPU due to overheating or high power usage? I would like to know whether I am doing something wrong that gets me into this situation. I couldn't find anything in the documentation on the subject.
Thanks