
This code fills some GPU memory and doesn't let it go:

import torch

def checkpoint_mem(model_name):
    # load the checkpoint, drop the only reference to it, then clear the cache
    checkpoint = torch.load(model_name)
    del checkpoint
    torch.cuda.empty_cache()

Printing memory with the following code:

print(torch.cuda.memory_reserved(0))
print(torch.cuda.memory_allocated(0))

shows BEFORE running checkpoint_mem:

0
0

and AFTER:

121634816
97332224

This is with torch.__version__ 1.11.0+cu113 on Google Colab.

Does torch.load leak memory? How can I get the GPU memory completely cleared?


2 Answers


It probably doesn't, though it depends on what you call a memory leak. After the program ends, all memory should be freed. Python has a garbage collector, so freeing might not happen immediately after your del (or after the object leaves scope) the way it does in C++ or similar languages with RAII.

del

  1. del is called by Python and only removes the reference (the same as when the object goes out of scope in your function).
  2. torch.nn.Module does not implement __del__, hence its reference is simply removed.
  3. All of the elements within torch.nn.Module have their references removed recursively (so for each CUDA torch.Tensor instance, its __del__ is called).
  4. __del__ on each tensor is what releases the memory back to the allocator (see the sketch after this list).
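To illustrate the sequence above (a rough sketch, not the exact code from the question; the tensor size is arbitrary):

import gc
import torch

# Allocating a CUDA tensor raises memory_allocated, and the caching
# allocator also reserves a (usually larger) block: memory_reserved.
x = torch.empty(1024, 1024, device="cuda")
print(torch.cuda.memory_allocated(0), torch.cuda.memory_reserved(0))

# del only drops the Python reference; once the reference count hits zero,
# the tensor's __del__ returns its block to the caching allocator, so
# memory_allocated drops but memory_reserved typically does not.
del x
gc.collect()  # force collection in case something still holds a reference
print(torch.cuda.memory_allocated(0), torch.cuda.memory_reserved(0))

# empty_cache() asks the caching allocator to hand unused cached blocks
# back to the driver, so memory_reserved shrinks as well.
torch.cuda.empty_cache()
print(torch.cuda.memory_allocated(0), torch.cuda.memory_reserved(0))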

More about __del__

Caching allocator

Another thing: the caching allocator holds on to part of the memory so that it doesn't have to compete with other applications for CUDA memory the next time you need it.

Also, I assume CUDA is initialized lazily by PyTorch, hence you get 0 bytes used at the very beginning, but AFAIK PyTorch itself reserves some part of CUDA memory during startup.

The short story is given here; the longer one is here, in case you didn't see it already.
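A small sketch of both effects (the counters only track the caching allocator, not the CUDA context itself, which is what nvidia-smi additionally shows; torch.cuda.init() is used here just to force context creation):

import torch

# Nothing has touched CUDA yet, so both counters read zero even though
# torch is already imported -- the CUDA context is created lazily.
print(torch.cuda.memory_allocated(0), torch.cuda.memory_reserved(0))  # 0 0

# Force context creation.  The context (driver state, kernels) lives
# outside the caching allocator, so it shows up in nvidia-smi but not in
# these counters, and empty_cache() cannot release it.
torch.cuda.init()
print(torch.cuda.memory_allocated(0), torch.cuda.memory_reserved(0))  # still 0 0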

Possible experiments

  • You may try to run time.sleep(5) after your function and measure again afterwards.
  • You can take a snapshot of the allocator's state via torch.cuda.memory_snapshot to get more info about the allocator's reserved memory and inner workings.
  • You might set the environment variable PYTORCH_NO_CUDA_MEMORY_CACHING=1 and see whether anything changes (a combined sketch of these experiments follows this list).
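A combined sketch of those experiments (the snapshot keys shown are assumptions based on recent PyTorch versions and may differ; the environment variable must be set before CUDA is first used, e.g. before starting Python, for it to take effect):

import os
import time
import torch

# Experiment 1: wait a bit after the function returns, then re-measure,
# to rule out delayed garbage collection.
time.sleep(5)
print(torch.cuda.memory_reserved(0), torch.cuda.memory_allocated(0))

# Experiment 2: dump the caching allocator's state.  Each entry describes
# a reserved segment and the blocks inside it.
for segment in torch.cuda.memory_snapshot():
    print(segment["device"], segment["total_size"], segment["allocated_size"])

# Experiment 3: disable the caching allocator entirely.  Setting it here
# only works if no CUDA call has happened yet in this process.
os.environ["PYTORCH_NO_CUDA_MEMORY_CACHING"] = "1"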

Disclaimer

I am not a CUDA expert by any means, so someone with more insight could probably expand on (and/or correct) my current understanding; I am sure far more happens under the hood.

Szymon Maszke

It is not possible; see here for the same question and the response from a PyTorch developer: https://github.com/pytorch/pytorch/issues/37664

Oren