5

I would like to make sure no one is able to delete any objects from my class hierarchy other than by using a provided Destroy method.

The rationale is that any object from this hierarchy needs to take a special write mutex before it starts to destroy itself to make sure objects are not deleted while another thread is using them.

I know I could prevent this problem with reference counting, but it would be a much bigger change to the system, also in terms of potential performance impact and memory allocation.

Is there a way to somehow efficiently/smartly make all the destructors protected so that child classes can call their parents' destructors while outsiders have to use Destroy?

One solution that is safe (i.e. it will not rot) that I came up with is to make all the destructors private and declare each derived class as a friend of the base class, but I'd prefer something more elegant, less manual and easier to maintain (like not requiring base classes to be modified in order to derive from them).
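For illustration, the friend-based variant I have in mind looks roughly like this (the global mutex and the class names are placeholders, not real code from my system):

```cpp
#include <mutex>

// Hypothetical global write mutex guarding destruction (placeholder for the real one).
std::mutex g_writeMutex;

class Base {
public:
    // The only sanctioned way to get rid of an object from this hierarchy.
    void Destroy() {
        std::lock_guard<std::mutex> lock(g_writeMutex);
        delete this;  // fine: Destroy is a member, so it may use the private destructor
    }

private:
    virtual ~Base() {}
    friend class Derived;  // every direct child has to be listed here - the part I dislike
};

class Derived : public Base {
    // the destructor must stay non-public here too, and Derived must in turn
    // befriend its own derived classes - hence "manual" and easy to get wrong
    ~Derived() {}
};

// Usage:
//   Base* p = new Derived;
//   p->Destroy();   // OK
//   delete p;       // does not compile: ~Base() is private
```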

Is there anything like this available? Maybe some smart trick that makes things "work" as I'd like?

PS. The solution I chose for now is NOT to prevent calling delete in all cases (I just made the destructor protected in the base class) but to detect this situation and call abort in the base class destructor.

RnR
  • 5
    Wait a second, what's wrong with simply declaring the destructors as `protected`? – Andy Prowl Mar 06 '13 at 10:49
  • @AndyProwl I assume the problem is that in such a case, an evil derived class can make its destructor public. – Angew is no longer proud of SO Mar 06 '13 at 10:51
  • Exactly - it could even be an honest mistake made by someone maintaining the code in a couple years - I'd prefer to not leave such traps behind :) – RnR Mar 06 '13 at 10:56
  • 2
    Instead of abusing the language, you should consider solving this by unit testing, asserts and code reviews. – PlasmaHH Mar 06 '13 at 11:00
  • @PlasmaHH How do I prevent someone from making an honest mistake with unit tests? As you can see in the PS, this is the direction I have now taken (calling abort), but it's not what I'd consider optimal (i.e. I'd like to use the language feature of a protected/private destructor, but for a whole class hierarchy, without exceptions). – RnR Mar 06 '13 at 11:10
  • @RnR: If there were a solution, how are you going to prevent making an honest mistake in implementing it? There is no 100% protection against mistakes, and while deploying the proven mechanisms to prevent them is a good idea, spending enormous amounts of time on it usually isn't, since that time is lost for coding the things that the code should actually do. You already have a mechanism that will tell people when they made that mistake; is it really that important whether it happens at run or compile time? – PlasmaHH Mar 06 '13 at 11:14
  • @PlasmaHH - If you have a single class and make its destructor protected, there's no way of making an honest mistake by calling it directly, and this is what I'd like to achieve. Detecting it at runtime is not as good (and it might even leak into production in some strange way if someone doesn't even run what they write), so I asked to see if there's a better way - I think this is what this site is for? :) – RnR Mar 06 '13 at 11:20
  • @rnr, seeing as how many people are offering other solutions to your problem, I would conclude that what you're looking for is basically not possible. It's not that people don't understand you. – Shahbaz Mar 06 '13 at 14:35
  • @RnR I have written a script that can help in the process. Please have a look at my answer. – prapin Jul 01 '14 at 20:46

4 Answers

1

Don't try to reinvent the lifetime mechanisms provided by the language.

For an object of your class to be correctly initialised it needs to also be able to clean itself up.

In its constructor, pass either the mutex or a means of obtaining one, which it can use in its destructor.
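For example, a minimal sketch (the `Widget` name, the mutex type and who owns it are only assumptions):

```cpp
#include <mutex>

class Widget {
public:
    // The object is handed the mutex (or a way to obtain one) when it is created...
    explicit Widget(std::mutex& writeMutex) : writeMutex_(writeMutex) {}

    // ...and uses it to serialise its own teardown with any readers.
    ~Widget() {
        std::lock_guard<std::mutex> lock(writeMutex_);
        // release resources here while holding the write lock
    }

private:
    std::mutex& writeMutex_;  // not owned; must outlive the object
};
```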

Peter Wood
  • Sorry but the base class destructor is called "at the end" of the destruction chain - this means I would have to remember to add something to EACH derived class destructor and that's not something I want (because again - a derived class can forget to do so etc) – RnR Mar 06 '13 at 11:08
  • I'm sorry, I don't understand. Why don't you encapsulate the things which need to take care when they clean up? What do you imagine your `Destroy` function doing, which my solution can't do? – Peter Wood Mar 06 '13 at 11:23
  • The Destroy function will take the mutex first and then proceed to calling delete on the object. Your solution (unless implemented in each derived class) will first destroy the objects until reaching the base class destructor and then take the mutex (so the object will already be corrupted/half destroyed). – RnR Mar 06 '13 at 11:33
  • Are you saying an object's `Destroy` function calls its own destructor? Not another object's? – Peter Wood Mar 06 '13 at 11:49
  • Yes - the object will basically self destruct after making sure it has the mutex. – RnR Mar 06 '13 at 12:25
  • I'm pretty sure this should be implemented elsewhere. If you use smart pointers (with a custom deleter), abstract classes, and a factory, you can enforce it externally to your class hierarchy. Does that sound too much? – Peter Wood Mar 06 '13 at 12:55
  • 1
    @RnR: Perhaps you'll find the material in [C++ FAQ 11 Destructors](http://www.parashift.com/c++-faq/dtors.html) useful for helping you arrive at a solution for your use case. As Peter suggests, you may find you'll need to reconsider your approach to work within the framework provided by C++ for managing the lifetime of objects. (But given my late comment, perhaps you've already done so.) – DavidRR Mar 14 '13 at 17:40
  • @Peter - I did consider using smart pointers as they would solve part of the issue in themselves. In this case though it would be a bigger change to the system, it would not solve all issues, and it would make the objects bigger too. – RnR Mar 20 '13 at 09:42
1

Thanks for all your feedback and discussion. Yes - it turned out to be impossible to do what would have been my natural first choice :( (to have the "protection" of the destructor take effect in derived classes just as its "virtuality" does).

My solution in this particular case (solving all potential problems with honest mistakes being made by introducing new derived classes that violate previous agreements, AND keeping the solution in one place and maintainable (no code duplication in derived classes etc.)) is the following (a rough code sketch follows the list):

  1. Make all the existing class hierarchy destructors I can find protected
  2. Provide a Destroy method in the base class that can be used to initiate the destruction of these objects - when called, the method sets a flag on the object being destroyed to record that it was destroyed properly, and then calls delete on it
  3. In the base class destructor (once we get to it), check the flag - if it's not set, it means someone introduced a new class and called delete on it directly, or avoided the compiler's protection checks in some other way (abused some friendships etc.) - in this case I abort the application to make sure the issue cannot be ignored/missed
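A condensed sketch of the scheme (names are illustrative; in the real code this is spread across the hierarchy):

```cpp
#include <cstdlib>  // std::abort

class Base {
public:
    // Step 2: the only supported way to destroy objects from the hierarchy.
    void Destroy() {
        // take the special write mutex here before tearing the object down
        destroyedProperly_ = true;  // mark that destruction goes through Destroy
        delete this;
    }

protected:
    Base() : destroyedProperly_(false) {}

    // Step 3: runs last in the destruction chain and verifies the flag.
    virtual ~Base() {
        // if someone bypassed Destroy (new derived class with a public destructor,
        // an abused friendship, ...), fail loudly instead of racing silently
        if (!destroyedProperly_)
            std::abort();
    }

private:
    bool destroyedProperly_;
};

// Step 1: destructors throughout the hierarchy stay protected.
class Derived : public Base {
protected:
    ~Derived() {}
};
```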
RnR
1

I had the same need, but for a different reason. In our company framework, nearly all classes derive from a common BaseObject class. This object uses a reference count to determine its lifetime. BaseObject has in particular these three methods: retain(), release() and autorelease(), heavily inspired by the Objective-C language. The operator delete is only called inside release(), when the retain count reaches 0. Nobody is supposed to call delete directly, and it is also undesirable to have BaseObject instances on the stack.
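In shape it is roughly this (a simplified sketch, not our actual framework code; autorelease() is left out because it needs a pool):

```cpp
#include <atomic>

class BaseObject {
public:
    BaseObject() : retainCount_(1) {}

    void retain()  { ++retainCount_; }

    void release() {
        // delete is only ever issued here, once the last reference is gone
        if (--retainCount_ == 0)
            delete this;
    }

protected:
    // protected so that neither `delete obj` nor stack instances compile outside the hierarchy
    virtual ~BaseObject() {}

private:
    std::atomic<int> retainCount_;
};
```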

Therefore, all our destructors should be protected or private. To enforce this - since I know it cannot be enforced by the language itself - I wrote a Perl script that looks for all destructors within a source directory and produces a report. It is then relatively easy to check that the rule is respected.

I made the script public, available here: https://gist.github.com/prapin/308a7f333d6836780fd5

prapin
  • Thanks. This is an interesting approach and I agree it can help, especially in smaller projects where one can realistically run this and look for the classes they are interested in, knowing there are no other places where they would need to look. One interesting addition that might be helpful would be to print the whole inheritance tree rather than just the base class, as it would allow one to easily add checks and make sure all classes deriving from a given base class provide a given protection etc. – RnR Jul 03 '14 at 13:24
0

It can be done with the help of testing. For a class with a protected destructor you need 2 test cases:

  1. one function (in one file) which fails to compile by simply creating such an object
  2. one function (in a second file) which creates an object of a derived class, and which compiles.

If both test cases work, I think you can be sure your classes are protected as you like.
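In C++ the two test files could look roughly like this (a sketch; `Widget` stands for the class under test and is assumed to be default-constructible):

```cpp
// --- create_widget_directly.cpp : expected NOT to compile (negative test) ---
#include "widget.h"

void mustNotCompile() {
    Widget w;  // error: ~Widget() is protected, so a plain local object cannot be destroyed here
}

// --- create_derived_widget.cpp : expected to compile (positive test) ---
#include "widget.h"

struct TestWidget : Widget {};  // a derived class may reach the protected destructor

void mustCompile() {
    TestWidget w;  // fine: ~TestWidget() is public and is allowed to call ~Widget()
}
```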

I don't know whether you are able to implement it with your build system, but I have an example using bjam (from Boost) on GitHub. The code is simple and works for gcc and msvc. If you don't know bjam you should look into Jamroot.jam. I think it is clear without any further comment how this simple example works.

Jan Herrmann
  • The problem here is with derived classes not being required to have their destructors protected anymore. – RnR Mar 20 '13 at 09:31