I am running several Java applications with the Docker image jboss/wildfly:20.0.1.Final on Kubernetes 1.19.3. The WildFly server runs on OpenJDK 11, so the JVM is container-aware and derives its memory limits from cgroups.
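
One way to confirm this container awareness (a sketch, assuming java is on the image's PATH) is to print the VM settings inside a memory-limited container and check the estimated max heap:

$ docker run --rm -m=300M jboss/wildfly:20.0.1.Final \
    java -XX:MaxRAMPercentage=75.0 -XshowSettings:vm -version
  # "Max. Heap Size (Estimated)" should come out at roughly 225M,
  # i.e. about 75% of the 300 MiB cgroup limit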

If I set a memory limit, it is completely ignored by the container when running in Kubernetes, but respected on the same machine when I run it in plain Docker:

1. Run WildFly in Docker with a memory limit of 300M:

$ docker run -it --rm --name java-wildfly-test -p 8080:8080 -e JAVA_OPTS='-XX:MaxRAMPercentage=75.0' -m=300M jboss/wildfly:20.0.1.Final

Verify memory usage:

$ docker stats
CONTAINER ID        NAME                 CPU %        MEM USAGE / LIMIT     MEM %       NET I/O       BLOCK I/O     PIDS
515e549bc01f        java-wildfly-test    0.14%        219MiB / 300MiB       73.00%      906B / 0B     0B / 0B       43

As expected, the container does NOT exceed the memory limit of 300M.
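
The limit can also be cross-checked directly against the container's cgroup files (a sketch, assuming the host uses cgroup v1):

$ docker exec java-wildfly-test cat /sys/fs/cgroup/memory/memory.limit_in_bytes
  # expect 314572800 (300 MiB) if the limit was applied
$ docker exec java-wildfly-test cat /sys/fs/cgroup/memory/memory.usage_in_bytes
  # current usage in bytes; divide by 1024*1024 to compare with docker stats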

2. Run WildFly in Kubernetes with a memory limit of 300M:

Now I start the same container within Kubernetes.

$ kubectl run java-wildfly-test --image=jboss/wildfly:20.0.1.Final --limits='memory=300M' --env="JAVA_OPTS='-XX:MaxRAMPercentage=75.0'" 
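
For reference, the same thing can be written as a plain Pod manifest, which makes the 300M limit explicit under resources.limits (a sketch; the names are taken from the kubectl run command above):

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: java-wildfly-test
spec:
  containers:
  - name: java-wildfly-test
    image: jboss/wildfly:20.0.1.Final
    env:
    - name: JAVA_OPTS
      value: "-XX:MaxRAMPercentage=75.0"
    resources:
      limits:
        memory: "300M"
EOF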

Verify memory usage:

$ kubectl top pod java-wildfly-test
NAME                CPU(cores)   MEMORY(bytes)   
java-wildfly-test   1089m        441Mi 

The memory limit of 300M is totally ignored and exceeded immediately.

Why does this happen? Both tests can be performed on the same machine.

Answer

The reason for the high values was incorrect metric data coming from the kube-prometheus project. After uninstalling kube-prometheus and installing metrics-server instead, all data is displayed correctly by kubectl top and now matches the values from docker stats. I do not know why kube-prometheus computed wrong data; in fact it reported roughly double the real values for all memory metrics.
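
A quick way to see which metric source is right (a sketch, assuming cgroup v1 on the node) is to compare kubectl top with the memory counter exposed inside the pod's own cgroup:

$ kubectl top pod java-wildfly-test
$ kubectl exec java-wildfly-test -- cat /sys/fs/cgroup/memory/memory.usage_in_bytes
  # divide by 1024*1024 to get MiB; note this counter includes page cache,
  # so it will sit somewhat above the working-set value kubectl top reports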

Ralph
  • I suspect that Docker and the kubelet run with different cgroup drivers. How can I check this (see the check sketched after these comments)? – Ralph Oct 28 '20 at 22:54
  • What distribution of kubernetes are you running? – Matt Oct 29 '20 at 02:06
  • I installed kubernetes on self hosted nodes running on Debian Buster. – Ralph Oct 29 '20 at 08:12
  • which docker version are you using? – acid_fuji Oct 29 '20 at 13:02
  • I am using docker version 19.03.12. But now I assume my problem is that docker and kubelet are using different cgroupDriver. Can this be the root of the strange behavior? – Ralph Oct 29 '20 at 14:42
  • I added my own answer. The reason was wrong output from kube-prometheus; using the Kubernetes metrics-server solved the problem. – Ralph Oct 29 '20 at 22:28
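
To compare the cgroup drivers Docker and the kubelet use (a sketch; the kubelet config path assumes a kubeadm-style install):

$ docker info --format '{{.CgroupDriver}}'
  # typically "cgroupfs" or "systemd"
$ sudo grep cgroupDriver /var/lib/kubelet/config.yaml
  # kubeadm-managed kubelets read the driver from this file; otherwise check
  # the --cgroup-driver flag in the kubelet service unit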

1 Answer

I'm placing this answer as a community wiki since it might be helpful for the community. kubectl top was displaying incorrect data. The OP solved the problem by uninstalling the kube-prometheus stack and installing metrics-server instead.
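
For anyone hitting the same discrepancy, metrics-server can be installed from its release manifest (a sketch; how kube-prometheus is removed depends on how it was installed, e.g. via its Helm chart or its manifests):

$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
$ kubectl top pod java-wildfly-test
  # should now report values in line with docker stats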

acid_fuji