22

I have an application in which a lot of memory leaks are present. For example, if I open a view and close it 10 times, my memory consumption rises because the views are not completely cleaned up. These are my memory leaks. From a test-driven perspective I would like to write a test proving the leak exists and (after I fix the leak) asserting that it stays fixed. That way my code won't be broken later on. So in short:

Is there a way to assert that my code is not leaking memory from a unit test?

e.g. can I do something like this:

objectsThatShouldNotBeThereCount = MemAssertion.GetObjects<MyView>().Count;
Assert.AreEqual(0, objectsThatShouldNotBeThereCount);

I am not interested in profiling. I use ANTS Profiler (which I like a lot), but I would also like to write tests to make sure the 'leaks' don't come back.

I am using C# / NUnit, but I am interested in anyone's philosophy on this...

Gluip

6 Answers

12

Often memory leaks are introduced when managed types use unmanaged resources without due care.

A classic example of this is System.Threading.Timer, which takes a callback method as a parameter. Because the timer ultimately uses an unmanaged resource, a new GC root is introduced that can only be released by calling the timer's Dispose method. In this case your type should also implement IDisposable; otherwise the object can never be garbage collected (a leak).
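
For illustration, a type that owns such a timer might look roughly like this (a minimal sketch; MyType and its members are invented for this example, not taken from the answer):

using System;
using System.Threading;

public sealed class MyType : IDisposable
{
    // The timer's callback keeps this instance reachable until the timer is disposed.
    private readonly Timer _timer;

    public MyType()
    {
        _timer = new Timer(OnTick, null, dueTime: 0, period: 1000);
    }

    private void OnTick(object state)
    {
        // ... periodic work ...
    }

    public void Dispose()
    {
        // Disposing the timer releases the GC root, so the instance can be collected.
        _timer.Dispose();
    }
}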

You can write a unit test for this scenario by doing something similar to this:

var instance = new MyType();

// ...
// Use your instance in all the ways that
// may trigger creation of new GC roots
// ...

var weakRef = new WeakReference(instance);

instance.Dispose();
instance = null;

// Force a full collection, run pending finalizers, then collect again so that
// anything freed by finalization is also reclaimed before the assertion.
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
Assert.IsFalse(weakRef.IsAlive);
Jack Ukleja
  • Note that this will work in DotNet framework, but not with Mono because of a quirk in their GC implementation. To make this work in mono, create the WeakReference in a separate method. See here: https://stackoverflow.com/questions/11417283/strange-weakreference-behavior-on-mono – tzachs Mar 12 '18 at 18:50
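
For reference, the separate-method pattern that comment describes might look roughly like this (a sketch reusing the hypothetical MyType above; NoInlining guards against the JIT keeping the local alive for the whole calling method):

using System.Runtime.CompilerServices;

[MethodImpl(MethodImplOptions.NoInlining)]
static WeakReference CreateUseAndRelease()
{
    var instance = new MyType();
    // ... exercise the instance ...
    instance.Dispose();
    return new WeakReference(instance);
}

// In the test body:
var weakRef = CreateUseAndRelease();
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
Assert.IsFalse(weakRef.IsAlive);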
5

That memory consumption increases is not necessarily an indication of a resource leak, since garbage collection is non-deterministic and may not have kicked in yet. Even though you "let go" of objects, the CLR is free to keep them around as long as it deems enough resources are available on the system.

If you know you do in fact have a resource leak, you may work with objects that have explicit Close/Dispose as part of their contract (meant for "using ..." constructs). In that case, if you have control over the types, you can flag disposal on the objects from their Dispose implementation, to verify that they have in fact been disposed, if you can live with lifecycle management leaking into the type's interface.

If you do the latter, it is possible to unit test that contractual disposal takes place. I've done that on some occasions, using an application-specific equivalent of IDisposable (extending that interface) that adds the option of querying whether the object has been disposed. If you implement that interface explicitly on your type, it won't pollute the type's public surface as much.
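
A minimal sketch of that approach (the IDisposableEx interface, MyView type, and test are invented names for illustration, not from the answer):

using System;
using NUnit.Framework;

// Application-specific extension of IDisposable that exposes disposal state.
public interface IDisposableEx : IDisposable
{
    bool IsDisposed { get; }
}

public class MyView : IDisposableEx
{
    private bool _disposed;

    // Explicit implementation keeps IsDisposed off the type's normal public surface.
    bool IDisposableEx.IsDisposed => _disposed;

    public void Dispose()
    {
        // ... release owned resources here ...
        _disposed = true;
    }
}

[TestFixture]
public class DisposalTests
{
    [Test]
    public void View_IsDisposed_AfterUse()
    {
        var view = new MyView();
        using (view)
        {
            // ... exercise the view ...
        }
        Assert.IsTrue(((IDisposableEx)view).IsDisposed);
    }
}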

If you have no control over the types in question, a memory profiler, as suggested elsewhere, is the tool you need (for instance dotTrace from JetBrains).

Cumbayah
  • You mean I could test that my Dispose is getting called for specific objects? That would be a good start, although that would be a very specific test. – Gluip Sep 06 '10 at 14:14
  • It's hard to put in tests of contractual dispose after the fact. Those tests belong in the unit tests for the application itself. Another method I've used to pinpoint violations of contractual dispose after the fact is to System.Diagnostics.Debug.Assert-fail from the type's destructor (caveats apply!) if the IsDisposed flag was not set. This tells you (at garbage collection time) that it happens, but not how. However, if combined with keeping a StackTrace snapshot from the object's instantiation time, you can find who instantiated it and backtrack to why it's not disposed. – Cumbayah Sep 06 '10 at 14:30
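
A rough sketch of the destructor-assert idea from that comment (MyView and its fields are invented; note that Debug.Assert only fires in debug builds and the finalizer runs on the finalizer thread):

using System;
using System.Diagnostics;

public class MyView : IDisposable
{
    // Captured at construction time so a violation can be traced back to its creator.
    private readonly string _allocationStackTrace = Environment.StackTrace;
    private bool _disposed;

    public void Dispose()
    {
        _disposed = true;
        GC.SuppressFinalize(this);
    }

    ~MyView()
    {
        // Fires at garbage-collection time if the disposal contract was violated.
        Debug.Assert(_disposed,
            "MyView was not disposed. Allocated at: " + _allocationStackTrace);
    }
}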
1

You don't need unit tests; you need a memory profiler. You can start with the CLR Profiler.

Ladislav Mrnka
  • I am already using a profiler, but I would like to 'pin' my results so a new leak in the same scenario is easily detected – Gluip Sep 06 '10 at 14:11
1

The dotMemory Unit framework can programmatically check the number of objects of a given type that are allocated, measure memory traffic, and take and compare memory snapshots.
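
Assuming the MyView type from the question, a test with dotMemory Unit might look roughly like this (a sketch based on the dotMemory Unit API; OpenAndCloseViewTenTimes is a hypothetical helper, and such tests need to run under the dotMemory Unit runner):

using JetBrains.dotMemoryUnit;
using NUnit.Framework;

[TestFixture]
public class ViewLeakTests
{
    [Test]
    public void View_IsCollected_AfterClose()
    {
        OpenAndCloseViewTenTimes(); // hypothetical helper that exercises the view

        dotMemory.Check(memory =>
            // Assert that no MyView instances remain on the managed heap.
            Assert.That(
                memory.GetObjects(where => where.Type.Is<MyView>()).ObjectsCount,
                Is.EqualTo(0)));
    }
}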

stop-cran
0

You might be able to hook into the profiling API, but it looks like you would have to start your unit tests with the profiler enabled.

How are the objects being created? Directly, or in some way that can be controlled? If creation can be controlled, return extended versions whose finalizers register that they have been finalized. Then:

GC.Collect();
GC.WaitForPendingFinalizers();
Assert.IsTrue(HasAllOfTypeXBeenFinalized());
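
A rough sketch of what such a tracking subclass and check might look like (the names are invented, and this assumes you can substitute TrackedView wherever the real type is created):

using System.Threading;

public class TrackedView : MyView // hypothetical subclass of the type under test
{
    private static int _created;
    private static int _finalized;

    public TrackedView()
    {
        Interlocked.Increment(ref _created);
    }

    // Runs when the GC collects an instance that is no longer rooted anywhere.
    ~TrackedView()
    {
        Interlocked.Increment(ref _finalized);
    }

    // Plays the role of HasAllOfTypeXBeenFinalized() in the snippet above.
    public static bool HasAllBeenFinalized()
    {
        return _finalized == _created;
    }
}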
Michael Lloyd Lee mlk
  • Good idea. Unfortunately I create my objects directly, so I can't wrap them with extra functionality just for testing. – Gluip Sep 06 '10 at 17:01
0

How about something like:

// GC.GetTotalMemory(true) forces a full collection before returning the heap size.
long originalByteCount = GC.GetTotalMemory(true);
SomeOperationThatMayLeakMemory();
long finalByteCount = GC.GetTotalMemory(true);
Assert.AreEqual(originalByteCount, finalByteCount);
Ergwun
  • I just tried it; this does not work. As far as I can tell, NUnit's own tracing adds noise, so an operation that should be memory-neutral suddenly is not. – Johannes Sep 08 '16 at 12:30