I'm starting to learn about GC and finalization, and I've come across a simple example where the behaviour of the application is unexpected to me.
(Note: I'm aware that finalizers should only be used with unmanaged resources and as part of the dispose pattern; I just want to understand what is going on here.)
This is a simple console app that generates a "saw-tooth" memory pattern. Memory usage rises to around 90 MB, a GC runs, usage drops, and it begins to rise again, never going beyond 90 MB.
using System;
using System.Threading;

class Program
{
    static void Main(string[] args)
    {
        for (int i = 0; i < 100000; i++)
        {
            MemoryWaster mw = new MemoryWaster(i);
            Thread.Sleep(250);
        }
    }
}

public class MemoryWaster
{
    long l = 0;
    long[] array = new long[1000000];   // ~8 MB per instance

    public MemoryWaster(long l)
    {
        this.l = l;
    }

    //~MemoryWaster()
    //{
    //    Console.WriteLine("Finalizer called.");
    //}
}
If I uncomment the finalizer, the behaviour is very different: the application does one or two GCs at the start, but then memory usage grows linearly until it exceeds 1 GB (at which point I terminate the application).
From what I have read, this is because instead of releasing the object, the GC moves it to the finalization (f-reachable) queue. A dedicated finalizer thread then executes the finalizer methods, and the object's memory is only reclaimed by a later GC. I understand this can be an issue when the finalizer methods are very long-running, but that isn't the case here.
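As I understand it, deterministically releasing a finalizable object would take something like this sequence (a sketch based on my reading, not part of my original program):

    GC.Collect();                   // first collection: the object is found unreachable and queued for finalization
    GC.WaitForPendingFinalizers();  // block until the finalizer thread has run ~MemoryWaster()
    GC.Collect();                   // second collection: the object's memory can now be reclaimed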
If I manually call GC.Collect() every few iterations, the app behaves as expected and I see the saw-tooth pattern of the memory getting released.
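Concretely, the modification looks roughly like this (collecting every 10 iterations is arbitrary, just to illustrate "every few iterations"):

    for (int i = 0; i < 100000; i++)
    {
        MemoryWaster mw = new MemoryWaster(i);
        Thread.Sleep(250);

        // Arbitrary interval: force a collection every 10 iterations
        if (i % 10 == 0)
        {
            GC.Collect();
        }
    }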
My question is - why does the large amount of memory being used by the application not trigger a GC automatically? In the example with finalizers included, would the GC ever run again after the first time?