I'm using a Jupyter notebook on a VM to train some CNN models. The VM has 16 vCPUs and 60 GB of memory, and I attached an NVIDIA Tesla P4 for better performance. But training always fails with an error like "RuntimeError: CUDA out of memory. Tried to allocate 196.00 MiB (GPU 0; 7.43 GiB total capacity; 2.20 GiB already allocated; 180.44 MiB free; 226.01 MiB cached)".
Why does this happen? Nothing else is running on the system, so why is so little memory free?
I believe the GPU is set up correctly:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01    Driver Version: 440.33.01    CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   38C    P0    22W /  75W |      0MiB /  7611MiB |      2%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
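For completeness, here is a minimal sketch of how to check what PyTorch itself sees on the device (I'm assuming PyTorch here, since the "RuntimeError: CUDA out of memory" message is formatted the way PyTorch reports it; `gpu_memory_report` is just a helper name I made up). Running this in the same notebook kernel right before training shows how much memory PyTorch has already allocated and reserved:

```python
import torch

def gpu_memory_report(device: int = 0) -> str:
    """Summarize GPU memory as seen by PyTorch, or note that CUDA is absent."""
    if not torch.cuda.is_available():
        return "CUDA not available"
    # total_memory: full device capacity; memory_allocated: tensors currently
    # alive; memory_reserved: allocated plus PyTorch's cached blocks.
    total = torch.cuda.get_device_properties(device).total_memory / 1024**2
    allocated = torch.cuda.memory_allocated(device) / 1024**2
    reserved = torch.cuda.memory_reserved(device) / 1024**2
    return (f"total: {total:.0f} MiB, allocated: {allocated:.1f} MiB, "
            f"reserved (cached): {reserved:.1f} MiB")

print(gpu_memory_report())
```

Note that nvidia-smi shows per-process usage, while PyTorch's caching allocator holds freed blocks for reuse, so the two numbers can disagree while the kernel is alive.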