I have a Kubernetes cluster with 16 GB of RAM on each node and a typical .NET Core Web API application. I tried to configure limits like this:
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
```
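As far as I understand, this LimitRange only applies defaults, so it should be equivalent to setting explicit resources on each container, roughly like this (pod and image names here are just placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapi            # placeholder name
spec:
  containers:
  - name: webapi
    image: myregistry/webapi   # placeholder image
    resources:
      requests:
        memory: 256Mi
      limits:
        memory: 512Mi
```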
But my app believes it can use the full 16 GB, because `cat /proc/meminfo | head -n 1` returns `MemTotal: 16635172 kB` (or maybe it reads something from cgroups, I'm not sure).
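For reference, the actual container limit can be read from cgroups directly instead of /proc/meminfo; a minimal sketch, assuming cgroup v1 (on cgroup v2 the file is /sys/fs/cgroup/memory.max instead):

```csharp
using System;
using System.IO;

class CgroupCheck
{
    static void Main()
    {
        // cgroup v1 path; on cgroup v2 the file is /sys/fs/cgroup/memory.max
        const string path = "/sys/fs/cgroup/memory/memory.limit_in_bytes";
        if (File.Exists(path) &&
            long.TryParse(File.ReadAllText(path).Trim(), out var limitBytes))
        {
            // An "unlimited" cgroup reports a very large number here
            Console.WriteLine($"cgroup memory limit: {limitBytes} bytes");
        }
        else
        {
            Console.WriteLine("no cgroup v1 memory limit found");
        }
    }
}
```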
So... maybe the limit doesn't work?
No! Kubernetes successfully kills my pod when it reaches the memory limit.
.NET Core has an interesting GC mode (server GC), more details here. It is a good mode, but it doesn't look like a working solution for k8s, because the application gets wrong information about available memory. Unlimited pods could grab all the host memory, and pods with limits will just die.
Now I see two ways:

- Use workstation GC instead of server GC
- Keep the limits and add a k8s readiness probe: on each call the handler checks current memory usage and calls GC.Collect() if it is near 80% of the limit (I'll pass the limit in via an environment variable)

Sketches for both options are below.
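For the first option, as far as I know workstation GC can be forced either with `<ServerGarbageCollection>false</ServerGarbageCollection>` in the .csproj, or with an environment variable in the pod spec, something like this (container and image names are placeholders):

```yaml
spec:
  containers:
  - name: webapi                # placeholder name
    image: myregistry/webapi    # placeholder image
    env:
    - name: COMPlus_gcServer    # 0 = workstation GC, 1 = server GC
      value: "0"
```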
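And for the second option, a rough sketch of the handler I have in mind (the variable name MEMORY_LIMIT_BYTES and the 80% threshold are my own choices):

```csharp
using System;
using System.Diagnostics;

public static class MemoryProbe
{
    // Limit passed in from the pod spec; MEMORY_LIMIT_BYTES is my own placeholder name.
    private static readonly long LimitBytes =
        long.TryParse(Environment.GetEnvironmentVariable("MEMORY_LIMIT_BYTES"), out var v)
            ? v
            : long.MaxValue;

    // Called from the readiness probe handler on each iteration.
    public static void CheckAndMaybeCollect()
    {
        // WorkingSet64 is closer to what the OOM killer sees than the managed heap size
        long used = Process.GetCurrentProcess().WorkingSet64;
        if (used > LimitBytes * 0.8)
        {
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();
        }
    }
}
```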
How can I limit the memory size of a .NET Core application in a Kubernetes pod?
How do I correctly set memory limits for pods in Kubernetes?