
I am trying to run Llama 2 on my server, which has the NVIDIA card mentioned above. It is a simple hello-world case that you can find here. However, I constantly run into memory issues:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 250.00 MiB (GPU 0; 7.92 GiB total capacity; 7.12 GiB already allocated; 241.62 MiB free; 7.18 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
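
For reference, the script is roughly the standard transformers quick start (just a sketch; the exact code is at the link, and the 7B model ID is an assumption on my part):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # assumed variant; the linked example may differ
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Full-precision 7B weights are well over 8 GiB, so moving them onto an 8 GiB GPU runs out of memory
model = AutoModelForCausalLM.from_pretrained(model_id).to("cuda")

inputs = tokenizer("Hello world", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))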

I tried

export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

but it had the same effect. Is there anything I can do?
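
In case it is useful, this is how I would check from inside Python that the allocator setting actually reaches the process (a sketch, not part of the linked example):

import os
import torch

# PYTORCH_CUDA_ALLOC_CONF is read when the caching allocator is initialized,
# so it has to be set before the first CUDA allocation in the process.
print(os.environ.get("PYTORCH_CUDA_ALLOC_CONF"))  # expect "max_split_size_mb:128"
print(torch.cuda.memory_summary(device=0, abbreviated=True))  # reserved vs. allocated breakdown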

wonglik

0 Answers