
I have read this question and answer: dynamically allocated memory after program termination, and I want to know if it is okay to NOT delete dynamically allocated memory and instead let the OS free it after program termination. So, if I have allocated some memory for objects that I need throughout the program, is it OK to skip deleting them at the end of the program in order to make the code run faster?

pkdc
  • It is not good practice and may result in unexpected behavior when large chunks of memory go undeleted. If you don't want to manage it yourself, use boost::unique_ptr; it will be deleted automatically. – Satish Chalasani Jul 22 '15 at 15:48
  • Sounds like you would be happier with a garbage-collected language, but you are probably using dynamic memory allocation when you really just don't need to. – crashmstr Jul 22 '15 at 15:49
  • @crashmstr : Indeed. – gd1 Jul 22 '15 at 15:52
  • Does it really take that much more time to free your dynamic allocations? – Blake McConnell Jul 22 '15 at 15:57
  • I am just curious, thanks for answering! So it is bad practice not to free dynamically allocated memory. I will read about shared_ptr and unique_ptr. – pkdc Jul 22 '15 at 16:02
  • @SatishChalasani: Why would it be *undefined behaviour*? And why `boost::unique_ptr` and not `std::unique_ptr`? – Christian Hackl Jul 22 '15 at 18:18
  • @ChristianHackl if the dynamic allocation happens inside a loop, a large block is allocated on each iteration, and the loop runs many times, the system may start swapping or run out of memory. It is std::unique_ptr. Depending on the implementation, the system could start swapping or, if swapping is not allowed, simply terminate the process. – Satish Chalasani Jul 23 '15 at 01:23
  • If the operating system allowed allocated memory not to be returned to the system, you would run out of resources sooner (rather than later). Of course you can leave memory allocated at exit; the operating system will release it for you. – Luis Colorado Jul 23 '15 at 07:55

3 Answers


The short answer is yes, you can. The long answer is that you probably should not: if your code later needs to be refactored and turned into a library, you are handing a considerable amount of technical debt to whoever does that job, and that person could be you.

Furthermore, if you have a real, hard-to-find memory leak (as opposed to a leak caused by intentionally not freeing long-lived objects), it is going to be quite time-consuming to debug it with valgrind, due to the considerable noise and false positives your intentional leaks produce.

Have a look at std::shared_ptr and std::unique_ptr. The latter has essentially no overhead compared to a raw pointer.
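As an illustration (a minimal sketch, not from the answer; the Widget type is made up): a std::unique_ptr with the default deleter frees its object automatically, with no manual delete anywhere:

```cpp
#include <memory>

struct Widget { int value = 42; };

int read_value()
{
    // Owned by the unique_ptr; freed automatically when w goes out of scope.
    std::unique_ptr<Widget> w = std::make_unique<Widget>();
    return w->value;
}
```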

gd1

Most sane OSes release all memory and local resources owned by a process upon its termination (the OS may do it lazily, or merely decrement a reference counter, but that does not matter much here). So yes, it is safe to skip releasing those resources.

However, it is a very bad habit, and you gain almost nothing. If you find that releasing objects takes a long time (for example, walking a very long list of objects), you should refine your code and choose a better data structure or algorithm.

Also, although the OS will release all local resources, there are exceptions such as shared memory and system-wide semaphores, which you are required to release explicitly.
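For example (a sketch assuming a POSIX system; the name "/demo_shm" is made up): a named shared memory object persists after the process exits unless it is explicitly unlinked:

```cpp
#include <fcntl.h>     // O_CREAT, O_RDWR
#include <sys/mman.h>  // shm_open, shm_unlink
#include <sys/stat.h>
#include <unistd.h>    // close

// A named shared memory object is NOT cleaned up at process exit;
// it persists until shm_unlink is called (or the system reboots).
bool create_and_release_shm()
{
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    if (fd == -1)
        return false;
    close(fd);
    // Without this line, "/demo_shm" would outlive the process.
    return shm_unlink("/demo_shm") == 0;
}
```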

Non-maskable Interrupt

First of all, the number one rule in C++ is:

Avoid dynamic allocation unless you really need it!

Don't use new lightly; not even if it's safely wrapped in std::make_unique or std::make_shared. The standard way to create an instance of a type in C++ is:

T t;

In C++, you need dynamic allocation only if an object should outlive the scope in which it was originally created.

If and only if you need to dynamically allocate an object, consider using std::shared_ptr or std::unique_ptr. Those will deallocate automatically when the object is no longer needed.
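For example (a sketch; the factory function is made up): an object that must outlive the scope creating it is a legitimate case for dynamic allocation, and std::unique_ptr handles the cleanup:

```cpp
#include <memory>
#include <string>

// The string must outlive make_greeting's scope, so it is allocated
// dynamically; the caller's unique_ptr frees it automatically.
std::unique_ptr<std::string> make_greeting()
{
    return std::make_unique<std::string>("hello");
}
```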

Second,

is it ok to skip deleting them at the end of the programme, in order to make the code run faster?

Absolutely not, because of the "in order to make the code run faster" part. This would be premature optimisation.


Those are the basic points.

However, you still have to consider what constitutes a "real", or bad memory leak.

Here is a bad memory leak:

#include <iostream>

int main()
{
    int count;
    std::cin >> count;
    for (int i = 0; i < count; ++i)
    {
        int* memory = new int[100];
    }
}

This is not bad because the memory is "lost forever"; any remotely modern operating system will clean everything up for you once the process has gone (see Kerrek SB's answer in your linked question).

It is bad because memory consumption is not constant when it could be; it will unnecessarily grow with user input.
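A fixed version of that loop (a sketch; the function name is made up) uses automatic storage, so each iteration's memory is released before the next begins and peak consumption stays constant:

```cpp
#include <vector>

// Each vector is destroyed at the end of its iteration, so peak
// memory use no longer grows with count.
int sum_of_batches(int count)
{
    int total = 0;
    for (int i = 0; i < count; ++i)
    {
        std::vector<int> memory(100, 1);
        total += static_cast<int>(memory.size());
    }
    return total;
}
```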

Here is another bad memory leak:

void OnButtonClicked()
{
    std::string* s = new std::string("my"); // evil!
    label->SetText(*s + " label");
}

This piece of (imaginary and slightly contrived) code will make memory consumption grow with every button click. The longer the program runs, the more memory it will take.

Now compare this with:

int main()
{
    int* memory = new int[100];
}

In this case, memory consumption is constant; it does not depend on user input and will not become bigger the longer the program runs. While stupid for such a tiny test program, there are situations in C++ where deliberately not deallocating makes sense.

Singleton comes to mind. A very good way to implement a Singleton in C++ is to create the instance dynamically and never delete it; this avoids all order-of-destruction issues (e.g. a SettingsManager writing to a Log in its destructor after the Log has already been destroyed). When the operating system reclaims the memory at exit, no more of your code runs, so you are safe.
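A leak-on-purpose Singleton along those lines might look like this (a sketch; the class name is made up):

```cpp
#include <iostream>
#include <string>

class Log
{
public:
    static Log& Instance()
    {
        // Intentionally never deleted: the instance stays valid for
        // every caller, including destructors of other static objects.
        static Log* instance = new Log;
        return *instance;
    }

    void Write(const std::string& msg) { std::cout << msg << '\n'; }

private:
    Log() = default;
};
```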

Chances are that you will never run into a situation where it's a good idea to avoid deallocation. But be wary of "always" and "never" rules in software engineering, especially in C++. Good memory management is much harder than matching every new with a delete.

Christian Hackl
  • What!!! avoid using dynamic memory? what the hell are you recommending here? The only reason you can have to do that is not knowing how to use it. Java is completely based on dynamic memory. What are you talking about? Cannot do but downvote you, sorry. – Luis Colorado Jul 23 '15 at 07:58
  • @LuisColorado: Java is a different language. You are arguing against a well-established, universally accepted C++ guideline here. Do you have authoritative sources for your claims? – Christian Hackl Jul 23 '15 at 09:11
  • The 'unless you really need it' part is indeed quite common. Lifecycle and ownership aside, stack space is limited; if you have a large piece of data you eventually need to resort to dynamic allocation, or have someone else do it for you via some kind of collection. – Non-maskable Interrupt Jul 23 '15 at 09:32
  • @Calvin: Well, obviously when I argue against unnecessary dynamic allocation, I do not mean avoiding std::vector et al. I argue against the beginner's mistake of new-ing everything in C++ just because Java has garbage collection :) Smart pointers mitigate this problem somewhat, but do not solve it. – Christian Hackl Jul 23 '15 at 09:46
  • @Calvin: How often you need to allocate dynamically depends mostly on your application domain. In any case, it should be a conscious choice, not the default. – Christian Hackl Jul 23 '15 at 09:53
  • I see, that makes sense. – Non-maskable Interrupt Jul 23 '15 at 09:53
  • @ChristianHackl, you are arguing about not using dynamic memory in C++... It's like telling people not to use the `+` operator in programs. I have authoritative references on using dynamic memory in **K&R** and **Stroustrup**. Not using dynamic memory is like recommending not using arithmetic at all. – Luis Colorado Jul 23 '15 at 21:03
  • @LuisColorado: You missed the "unless you really need it" part, which pretty much invalidates all your comments because they are based on a false premise. – Christian Hackl Jul 24 '15 at 06:04
  • @LuisColorado: speaking of Stroustrup, you might want to read his C++ FAQ on "How do I deal with memory leaks". Then read again my answer and explain how what I recommend differs from his answer. http://www.stroustrup.com/bs_faq2.html#memory-leaks – Christian Hackl Jul 24 '15 at 06:11
  • @ChristianHackl, memory leaks are not what we are discussing. I don't say dynamic memory is not subject to this kind of problem... I'm talking about your assertion that one should (unconditionally) avoid using dynamic memory as much as possible. – Luis Colorado Jul 24 '15 at 13:05
  • @LuisColorado: How you equate "as much as possible" with "unconditionally" is beyond me. Yes, it is true that you should avoid dynamic allocation in C++ **as much as possible**. No, it is nonsense that you should **unconditionally** avoid it. Because "as much as possible" implies that there exists a "not possible". That "not possible" is the case when an object needs to outlive its declaration scope and cannot be copied. You then have to allocate it dynamically. Your comment is either a straw man or a language problem. – Christian Hackl Jul 24 '15 at 14:07