I am running a fairly memory-intensive GAN model (estimated at around 6 GB) with tf.keras, which my GPU apparently cannot handle: prediction fails and returns only NaNs. Is there a way to back my 4 GB of GPU memory with system memory? Or a way to split the computational work between the GPU and the CPU?
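To illustrate the second part of the question, this is roughly the kind of split I have in mind, using on-demand memory growth plus explicit device placement (a minimal sketch; the two tiny Sequential models are just stand-ins for my actual generator and discriminator):

```python
import tensorflow as tf

# Let TF allocate GPU memory on demand instead of reserving it all upfront
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)

# Stand-ins for the two halves of my GAN (the real ones are much larger)
generator = tf.keras.Sequential([tf.keras.layers.Dense(64, activation='relu')])
discriminator = tf.keras.Sequential([tf.keras.layers.Dense(1)])

noise = tf.random.normal([8, 100])

# Run the memory-heavy part on the CPU, i.e. in system RAM ...
with tf.device('/CPU:0'):
    fake = generator(noise)

# ... and the rest on the GPU
with tf.device('/GPU:0'):
    scores = discriminator(fake)
```

Is something like this viable for a model that doesn't fit in 4 GB, or is there a better mechanism for it?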
My specs:
- OS: Windows 10, 64-bit
- GPU: GeForce GTX 960 (4 GB)
- CPU: Intel Xeon E3-1231 v3 (4 cores)
- Python IDE: Spyder 5
- Python: 3.8.5 / 3.8.10 in a conda environment with only the tensorflow and chess modules installed
- TensorFlow: 2.5
- CUDA: 11.2.2
- cuDNN: 8.1.1
For more background, see the much more detailed version of this question I asked a couple of days ago (it got no responses, hence this one): TF model doesn't predict anymore after switching to GPU