
If cudaFree() is not used in the end, will the memory being used automatically get free, after the application/kernel function using it exits?

2 Answers


Yes.

When your application terminates (gracefully or not), all of its memory is reclaimed by the OS, regardless of whether it freed that memory explicitly. Similarly, memory allocated on the GPU is managed by the driver, which will release all the resources your application held, whether you called cudaFree() on them or not.

It is, however, good practice for every allocation to have a matching deallocation, so don't use this as an excuse to skip deallocating your memory properly :)
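A minimal sketch of that matched-pair pattern (illustrative only; kernel launch elided, error handling abbreviated):

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    const size_t n = 1 << 20;
    float *d_buf = NULL;

    // Every cudaMalloc() should have a matching cudaFree().
    cudaError_t err = cudaMalloc((void **)&d_buf, n * sizeof(float));
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    // ... launch kernels that use d_buf ...

    // Release explicitly instead of relying on driver cleanup at exit.
    cudaFree(d_buf);
    return 0;
}
```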

user703016
  • Thank you for the answer, Gregor :) But if the release of resources is done automatically by the GPU driver after the application ends, why increase the workload of the programmer (one more thing to remember and do) and the length of the code (which makes it harder to write and debug) by suggesting deallocation as a good practice? –  Oct 16 '15 at 16:23
    Hi @Buzz, that's a good question and the answer isn't trivial, so you may as well ask it as a separate question. Simply put, in general every allocation should have a matching deallocation. Skipping deallocation in a long-running process would result in a memory leak followed by a crash as the system runs out of memory. Skipping deallocation in a short-lived process or during application shutdown can be considered an optimization. – user703016 Oct 18 '15 at 10:52

As far as I understand, if you allocate memory repeatedly (for example, inside a loop) without freeing it, the allocations accumulate. Eventually the system will run out of memory.
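A sketch of the two patterns in a long-running loop (CUDA, illustrative only; real code should check the return values):

```cuda
// Leaky pattern: device memory accumulates on every iteration.
for (int i = 0; i < 100000; ++i) {
    float *d_tmp;
    cudaMalloc((void **)&d_tmp, 1 << 20);
    // ... use d_tmp ...
    // Missing cudaFree(d_tmp): eventually cudaMalloc() starts failing
    // with cudaErrorMemoryAllocation as device memory runs out.
}

// Fixed pattern: free inside the loop (or hoist the allocation out of it).
for (int i = 0; i < 100000; ++i) {
    float *d_tmp;
    cudaMalloc((void **)&d_tmp, 1 << 20);
    // ... use d_tmp ...
    cudaFree(d_tmp);
}
```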

Z-Jiang