I'm watching the RAM usage of my process inside a Docker container, and it looks like there is a memory leak.
These are the steps I followed:
- Create a container without running anything in it and execute
docker stats [CONTAINER_ID]
which correctly reports:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
0ac5cdb9d61b unruffled_margulis 0.00% 852KiB / 12.69GiB 0.01% 736B / 0B 0B / 0B 1
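For reference, I started the idle container with roughly this command (the image name is just a placeholder for my own):

# start an idle shell in the container, then watch its stats
docker run -it my-app-image bash
docker stats 0ac5cdb9d61b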
- Then I launched a process that waits for input on a queue (I won't send any input, so I can observe its memory usage while it is just listening). The process allocates a lot of memory because it loads some models:
root@d6d1d82fe4c7:/app# listen.py
and got these stats:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
0ac5cdb9d61b unruffled_margulis 0.00% 4.628GiB / 12.69GiB 36.49% 8.2kB / 2.61kB 0B / 0B 11
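To see how much of that 4.628GiB is actual process memory versus kernel page cache, the cgroup counters can be read from inside the container (paths assume cgroup v1; on cgroup v2 the files are different):

# inside the container: anonymous memory (rss) vs page cache charged to the cgroup
grep -E '^(cache|rss) ' /sys/fs/cgroup/memory/memory.stat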
- Then I stopped the process and re-launched it in the same way:
root@d6d1d82fe4c7:/app# ^C
root@d6d1d82fe4c7:/app# listen.py
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
0ac5cdb9d61b unruffled_margulis 0.00% 8.451GiB / 12.69GiB 66.62% 15.8kB / 5.54kB 0B / 0B 11
Incredibly, the RAM usage is now double what it was before! The process was killed, but it is as if the models loaded by the previous run were still held in memory by Docker.
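A quick sanity check is to compare docker stats against the resident memory of the processes actually running inside the container; any gap should be kernel memory (mostly page cache) charged to the container's cgroup:

# inside the container: total RSS of all visible processes, in MiB
ps -eo rss= | awk '{sum += $1} END {printf "%.0f MiB\n", sum/1024}'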
- After killing the process again, without re-launching it:
root@d6d1d82fe4c7:/app# ^C
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
0ac5cdb9d61b unruffled_margulis 0.00% 3.825GiB / 12.69GiB 30.15% 16.3kB / 5.86kB 0B / 0B 1
Some memory stays allocated even with no process running. With htop I see a different figure, about 800MiB, which is still too much for a container doing nothing, and it doesn't match docker stats.
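Part of the discrepancy may come from what each tool counts: htop shows per-process resident memory, while docker stats reads the container's cgroup, which also includes kernel page cache. The raw cgroup charge can be inspected directly (cgroup v1 path):

# inside the container: raw cgroup memory charge in bytes, page cache included
cat /sys/fs/cgroup/memory/memory.usage_in_bytes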
I repeated the test, and it seems that after two launches the RAM settles at around 8GiB (it never goes beyond that in further attempts). Is this behavior normal? How can I free this RAM in Docker?
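From what I've read, if the leftover usage is page cache it can be released manually. This must be done as root on the Docker host (or in a privileged container), since /proc/sys is not writable from a normal container:

# on the host: flush pagecache, dentries, and inodes
sync && echo 3 > /proc/sys/vm/drop_caches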
EDIT
After some experimenting, I limited the maximum Docker memory to 7GB, expecting the container to be killed after the "first increment of RAM". But with this configuration, the RAM stayed stable at 4.628GiB.
Setting the limit back to 13GB, the RAM went back up to 8.451GiB on the second run. The curious thing is that after this increase it does not grow any further in the following runs. However, if I load fewer models, so that less memory is allocated, memory usage seems to grow every time I launch the script.
So my intuition is that Docker caches some resources, but when it approaches the memory limit it frees the cache to make room for new allocations.
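I changed the limit in the Docker settings; the equivalent per-container experiment can be done with the --memory flag (image name is again a placeholder):

# cap the container at 7GB; under pressure the kernel reclaims cache before OOM-killing
docker run -it --memory=7g my-app-image bash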
Running free -m, I saw this at the beginning:
root@29d5547ba8ec:/app# free -m
total used free shared buff/cache available
Mem: 12989 412 11638 400 938 11876
Swap: 1023 0 1023
and after the first launch:
root@29d5547ba8ec:/app# free -m
total used free shared buff/cache available
Mem: 12989 454 7477 400 5057 11841
Swap: 1023 0 1023
Notice the buff/cache field: it grew from 938 to 5057. I don't know if this is correct.