
We are running a .NET application on AWS Fargate, deployed via Terraform, where we specify CPU and memory in the aws_ecs_task_definition resource.

The service runs just one task, e.g.:

 resource "aws_ecs_task_definition" "test" {
   ....
   cpu                      = 256
   memory                   = 512
   ....

According to the documentation, this is required for Fargate.

You can also specify cpu and memory in the container_definitions, but the documentation states that these fields are optional, and as we were already setting values at the task level we did not set them there.
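For reference, setting them per container would look something like this (a minimal sketch; the container name and image are placeholders, not our real values):

    container_definitions = jsonencode([
      {
        name      = "app"                 # placeholder name
        image     = "example/app:latest"  # placeholder image
        essential = true

        # optional per-container limits (memory is a hard limit in MiB)
        cpu    = 256
        memory = 512
      }
    ])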

We observed that memory kept growing after the tasks started; depending on the application, sometimes quite quickly and sometimes over a longer period.

So we started to think we had a memory leak and set out to profile the application using the dotnet-monitor tool as a sidecar.

As part of introducing the sidecar, we set cpu and memory values for our .NET application at the container_definitions level.
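Roughly, the container_definitions now look like this (again a sketch; the names, images, and the cpu/memory split are illustrative, not our exact values):

    container_definitions = jsonencode([
      {
        name      = "app"                 # placeholder
        image     = "example/app:latest"  # placeholder
        essential = true
        cpu       = 192
        memory    = 384   # per-container hard limit in MiB
      },
      {
        name      = "dotnet-monitor"
        image     = "mcr.microsoft.com/dotnet/monitor"  # sidecar image; tag omitted here
        essential = false
        cpu       = 64
        memory    = 128
      }
    ])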

After we did this, we observed that memory in our applications behaved much better.

From dotnet-monitor traces we are seeing that when we set memory at the container_definitions level:

  1. Working Set is much smaller
  2. Gen 0/1/2 GC counts are above 1 (GC is occurring early)
  3. Gen 0/1/2 sizes are smaller
  4. GC Committed Bytes is smaller

To summarize: when we do not set memory at the container_definitions level, memory keeps growing and no GC occurs until we are almost out of memory.

When we do set memory at the container_definitions level, GC occurs regularly and memory does not spike.

So we have a solution, but we do not understand why this is the case. We would like to know why.

  • I wonder if the .NET runtime is picking up those container-level memory settings, and doing something different with memory allocation, or with the garbage collector, due to those values. That's the only explanation I can think of. – Mark B Jan 09 '23 at 13:36
