12

I have a Kubernetes cluster with 16 GB of RAM on each node

and a typical .NET Core Web API application.

I tried to configure limits like this:

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container

But my app believes it can use 16 GB,

because `cat /proc/meminfo | head -n 1` returns `MemTotal: 16635172 kB` (or maybe it reads something from cgroups, I'm not sure).

So... maybe the limit does not work?

No! K8s successfully kills my pod when it reaches the memory limit.

.NET Core has an interesting GC mode, more details here. It is a good mode, but it doesn't look like a working solution for k8s, because the application gets the wrong information about available memory. Unlimited pods could take all of the host's memory, but pods with limits will be killed.

Now I see two ways:

  1. Use Workstation GC
  2. Use limits and a k8s readiness probe: the handler checks current memory usage on each call and invokes GC.Collect() if the memory in use is near 80% of the limit (I'll pass the limit in via an env variable); a rough sketch follows below.
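
A minimal sketch of option 2, assuming the limit is passed in via a hypothetical MEMORY_LIMIT_BYTES environment variable and the method is wired into whatever endpoint the probe hits; the 80% threshold and all names are illustrative, not a recommendation:

using System;
using System.Diagnostics;

// Sketch: force a GC when the process working set gets close to the configured limit.
public static class MemoryProbe
{
    public static bool Check()
    {
        // MEMORY_LIMIT_BYTES is a made-up variable name; set it in the pod spec
        // to the same value as the k8s memory limit (falling back to 512Mi here).
        long limit = long.Parse(
            Environment.GetEnvironmentVariable("MEMORY_LIMIT_BYTES") ?? "536870912");

        long used = Process.GetCurrentProcess().WorkingSet64;

        if (used > 0.8 * limit)
        {
            // Full, blocking, compacting collection before k8s gets close to OOM-killing the pod.
            GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced, blocking: true, compacting: true);
        }

        return true; // always report "ready"; the probe is only used here to trigger the GC
    }
}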

How do I limit the memory size of a .NET Core application in a Kubernetes pod?

How do I correctly set memory limits for pods in Kubernetes?

  • Do you have proof that the .NET application sees the wrong info? As you mentioned, *cgroups* are used which are applied to specific processes, and might not have been applied to your bash/cat command you used to get the memory info (via `/proc/meminfo`). In order to prove your theory, I'd print out the value of [GC.GetGCMemoryInfo](https://learn.microsoft.com/en-us/dotnet/api/system.gc.getgcmemoryinfo?view=net-7.0) from your .NET app. – Ohad Schneider Jan 16 '23 at 19:27
  • As you can see here .NET is well aware of cgroups for this reason exactly: https://github.com/dotnet/designs/blob/main/accepted/2019/support-for-memory-limits.md. Related article explaining how k8s integrates with CRI and cgroups: https://medium.com/@kkwriting/kubernetes-resource-limits-and-kernel-cgroups-337625bab87d – Ohad Schneider Jan 16 '23 at 20:05
  • If I remember correctly, I managed the issue with the env var COMPlus_GCHeapHardLimit; see the documentation https://learn.microsoft.com/en-us/dotnet/api/system.gcmemoryinfo.totalavailablememorybytes?view=net-7.0&viewFallbackFrom=netcore-2.1 – SanŚ́́́́Ý́́́́Ś́́́́ May 16 '23 at 10:53
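
For reference, a minimal console sketch along the lines suggested in the first comment above, to see what the runtime itself reports rather than /proc/meminfo. It assumes .NET Core 3.0+ (where GC.GetGCMemoryInfo is available) and a cgroup v1 filesystem layout; under cgroup v2 the limit file would be /sys/fs/cgroup/memory.max instead:

using System;
using System.IO;

class WhatDoesTheRuntimeSee
{
    static void Main()
    {
        // What the GC believes it may use (honours cgroup limits on .NET Core 3.0+).
        Console.WriteLine($"GC total available: {GC.GetGCMemoryInfo().TotalAvailableMemoryBytes} bytes");

        // What the cgroup actually allows (cgroup v1 path; an assumption, not from the question).
        const string cgroupLimitFile = "/sys/fs/cgroup/memory/memory.limit_in_bytes";
        if (File.Exists(cgroupLimitFile))
            Console.WriteLine($"cgroup limit: {File.ReadAllText(cgroupLimitFile).Trim()} bytes");
    }
}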

3 Answers

7

You should switch to Workstation GC to optimize for lower memory usage. The readiness probe is not meant for checking memory.

In order to properly configure the resource limits, you should test your application on a single pod under heavy load and monitor the usage (e.g. with Prometheus & Grafana). For more in-depth details see this blog post. If you haven't deployed a monitoring stack, you can at least use `kubectl top pods`.

Once you have found the breaking point of a single pod, you can add the limits to that specific pod as in the example below (see the Kubernetes documentation for more examples and details):

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: net-core-app
    image: net-code-image
    resources:
      requests:
        memory: 64Mi
        cpu: 250m
      limits:
        memory: 128Mi
        cpu: 500m

The readiness probe is actually meant to tell when a Pod is ready in the first place. I guess you were thinking of the liveness probe, but that wouldn't be an adequate use either, because Kubernetes will kill the Pod when it exceeds its resource limit and reschedule it.

tomaaron
  • 135
  • 6
  • But I can use the readiness probe to call GC.Collect() if memory usage is more than some custom limit ) – SanŚ́́́́Ý́́́́Ś́́́́ Apr 08 '19 at 04:43
  • Yes, you _could_ but reading this [stackoverflow question](https://softwareengineering.stackexchange.com/questions/276585/when-is-it-a-good-idea-to-force-garbage-collection) you really shouldn't do it. I would recommend you to get more insights about your memory usage and possible leaks in order to set the limits properly. – tomaaron Apr 10 '19 at 09:57
  • Environment variable `COMPlus_GCHeapHardLimit` looks like a solution, but only for .NET Core 3 https://learn.microsoft.com/en-us/dotnet/api/system.gcmemoryinfo.totalavailablememorybytes?view=netcore-3.0 – SanŚ́́́́Ý́́́́Ś́́́́ Sep 15 '19 at 07:56
  • Workstation GC doesn't seem to fix it in our case. Memory consumption does grow more slowly, but eventually we'll get an OOM unless we force a GC. Also, it affects performance. – Martin Ferrari Feb 11 '21 at 07:10
  • 1
    @OhadSchneider We've implemented the accepted answer of using the COMPlus_GCHeapHardLimit environmental variable. This is a known problem of .net core 3.1 with Kubernetes. Basically .net processes see all of the node's memory as available, instead of just the pod's configured memory. Theoretically, it's fixed in a later version, I don't remember if .net 5 or 6, so later versions of .net should work fine, but I haven't confirmed it since we left the environmental variable for all pods. – Martin Ferrari Jan 17 '23 at 20:00
3

Use the environment variable COMPlus_GCHeapHardLimit

Documentation https://learn.microsoft.com/en-us/dotnet/api/system.gcmemoryinfo.totalavailablememorybytes?view=net-5.0

Note that you have to use hexadecimal values.

That means the value 10000000 is 0x10000000 bytes = 256 MB!
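
A quick sanity check of that conversion in a throwaway console app:

using System;

class HeapHardLimitValue
{
    static void Main()
    {
        // COMPlus_GCHeapHardLimit=10000000 is parsed as a hexadecimal byte count.
        long bytes = Convert.ToInt64("10000000", 16); // 268,435,456
        Console.WriteLine($"{bytes} bytes = {bytes / (1024 * 1024)} MB"); // prints: 268435456 bytes = 256 MB
    }
}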

  • This is almost certainly not the right way to do it. If you search the web / GitHub you'll see it is *very* rarely used. If the behavior was indeed as you theorized above, I would have expected this to be *extremely* common (as it would have basically meant that GC is broken in all k8s applications). – Ohad Schneider Jan 16 '23 at 19:36
  • https://stackoverflow.com/questions/55549129/how-to-limit-memory-size-for-net-core-application-in-pod-of-kubernetes/68828374?noredirect=1#comment132617475_55554193 – SanŚ́́́́Ý́́́́Ś́́́́ Feb 23 '23 at 13:46
1

I used docker run command arguments, which can be passed via the deployment YAML, to specify the memory size of the container:

args:
  - "--memory=124m --memory-swap=124m"

This way the .NET GC 'sees' that only 124 MB is available.

The `args` specifier is on the same level as `ports` and `name` under the `containers` specifier:

  containers:
    - name: xxx
      ports:
      ....
      args:
        - "--memory=124m --memory-swap=124m"

A description of the arguments `--memory` and `--memory-swap` can be found here: https://docs.docker.com/config/containers/resource_constraints/

-m or --memory= The maximum amount of memory the container can use. If you set this option, the minimum allowed value is 6m (6 megabytes).

--memory-swap* The amount of memory this container is allowed to swap to disk. See --memory-swap details.

More details about passing arguments to the run command can be found here: How to pass docker run flags via kubernetes pod

Patrick Koorevaar
  • 1,219
  • 15
  • 24
  • From what I can tell, these are the args that will be passed to your application, NOT to the actual container runtime. In other words, I think this would translate into something like `docker run myapp --memory=124m --memory-swap=124m` where you would have wanted `docker run --memory=124m --memory-swap=124m myapp`. It wouldn't make much sense for k8s to use docker-specific arguments anyway, as it needs to support all OCI runtimes (for example `containerd`). – Ohad Schneider Jan 16 '23 at 19:33