
I have a PHP daemon script that downloads remote images and stores them locally as temporary files before uploading them to object storage.

PHP's internal memory usage remains stable, but the memory usage reported by Docker/Kubernetes keeps increasing.

I'm not sure whether this is related to PHP, to Docker, or is expected Linux behavior.

Example to reproduce the issue:

Docker image: php:7.2.2-apache

<?php
// Create, close, and immediately delete 100,000 temporary files.
for ($i = 0; $i < 100000; $i++) {
    $fp = fopen('/tmp/' . $i, 'w+');
    fclose($fp);

    unlink('/tmp/' . $i);

    unset($fp);
}

Calling free -m inside the container before executing the above script:

          total        used        free      shared  buff/cache   available
Mem:           3929        2276         139          38        1513        1311
Swap:          1023         167         856

And after executing the script:

          total        used        free      shared  buff/cache   available
Mem:           3929        2277         155          38        1496        1310
Swap:          1023         167         856

Apparently the memory is released, but calling docker stats php-apache from the host indicates otherwise:

CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
ccc19719078f        php-apache          0.00%               222.1MiB / 3.837GiB   5.65%               1.21kB / 0B         1.02MB / 4.1kB      7

The initial memory usage reported by docker stats php-apache was 16.04MiB.

What is the explanation? How do I free the memory?

Running this container in a Kubernetes cluster with resource limits causes the pod to fail and restart repeatedly.

mpskovvang
  • Memory gets used by processes and the kernel. Find the process using all your memory if it's not PHP. – tadman Mar 07 '18 at 18:38

2 Answers


Yes, a similar issue has been reported here.

Here's the answer of coolljt0725, one of the contributors, explaining why the RES column in top output shows something different from docker stats (I'm just gonna quote him as is):

If I understand correctly, the memory usage in docker stats is exactly read from containers's memory cgroup, you can see the value is the same with 490270720 which you read from cat /sys/fs/cgroup/memory/docker/665e99f8b760c0300f10d3d9b35b1a5e5fdcf1b7e4a0e27c1b6ff100981d9a69/memory.usage_in_bytes, and the limit is also the memory cgroup limit which is set by -m when you create container. The statistics of RES and memory cgroup are different, the RES does not take caches into account, but the memory cgroup does, that's why MEM USAGE in docker stats is much more than RES in top

What a user suggested here might actually help you to see the real memory consumption:

Try set the param of docker run --memory,then check your /sys/fs/cgroup/memory/docker/<container_id>/memory.usage_in_bytes It should be right.

--memory or -m is described here:

-m, --memory="" - Memory limit (format: <number>[<unit>]). Number is a positive integer. Unit can be one of b, k, m, or g. Minimum is 4M.
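
For illustration, a minimal sketch of that suggestion, run from the host (the container name and memory limit are made up here, and the cgroup path assumes cgroup v1):

docker run -d -m 256m --name php-apache php:7.2.2-apache
cat /sys/fs/cgroup/memory/docker/$(docker inspect -f '{{.Id}}' php-apache)/memory.usage_in_bytes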

And now, how to avoid the unnecessary memory consumption. Just as you posted, unlinking a file in PHP does not necessarily drop the memory cache immediately. Instead, running the Docker container in privileged mode (with the --privileged flag), it is possible to call echo 3 > /proc/sys/vm/drop_caches or sync && sysctl -w vm.drop_caches=3 periodically to clear the page cache.
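
A sketch of what that could look like from the host (the container name is assumed; note that dropping caches affects the whole host, not just the container):

docker run -d --privileged --name php-apache php:7.2.2-apache
docker exec php-apache sh -c 'sync && echo 3 > /proc/sys/vm/drop_caches'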

And as a bonus, using fopen('php://temp', 'w+') and storing the file temporarily in memory avoids the entire issue.
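
A minimal sketch of that approach (the URL is a placeholder and uploadToObjectStorage() is a hypothetical stand-in for your actual upload code):

<?php
// php://temp keeps the data in memory and only spills to a real
// temporary file once it exceeds a predefined limit (2 MB by default).
$tmp = fopen('php://temp', 'w+');

// Stream the remote image into the buffer instead of /tmp.
$remote = fopen('https://example.com/image.jpg', 'r');
stream_copy_to_stream($remote, $tmp);
fclose($remote);

// Rewind and hand the stream to the uploader.
rewind($tmp);
uploadToObjectStorage($tmp); // hypothetical upload helper
fclose($tmp);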

Kevin Kopf
  • Thanks for the answer and the references! So that explains the differences in memory usage. Unfortunately, cgroup is the most precise in this case. The file operations from the PHP script increase the buff/cache and the container isn't able to release it. Only the host can call `echo 3 > /proc/sys/vm/drop_caches`. Do you know how I can avoid OOM when accessing the filesystem from the container? – mpskovvang Mar 07 '18 at 21:33
  • So, why does `fopen('php://temp', 'w+')` work like a charm, but `fopen('/tmp/' . $i, 'w+')` does not free memory? What's the reason? @mpskovvang @Alex Karshin – Q-bart Oct 30 '19 at 17:07
  • @Q-bart you can read it in the docs: `php://temp will use a temporary file once the amount of data stored hits a predefined limit (the default is 2 MB)`. – Kevin Kopf Oct 30 '19 at 18:35
  • Got it, thanks. Actually I have a slightly different problem: a Docker container does not free memory from a process that writes huge amounts of data into a file (a logger), so I'm interested in why that's possible and how to fix it in my case – Q-bart Oct 30 '19 at 21:15
  • @Q-bart you should open a separate question then – Kevin Kopf Oct 31 '19 at 12:14

The issue Alex referred to explains the memory usage difference between free -m inside the container and docker stats from the host: buff/cache is included in the latter.

Unlinking a file in PHP does not necessarily drop the memory cache immediately.

Instead, by running the Docker container in privileged mode, I was able to call echo 3 > /proc/sys/vm/drop_caches periodically to clear the page cache.
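
Since the daemon is PHP anyway, the same call can be made from inside the script, for example every N iterations (a sketch; it assumes the container is privileged and the process runs as root):

<?php
// Drops the kernel page cache. Only works in a privileged container
// and affects the entire host, so call it sparingly.
function dropPageCache(): void
{
    file_put_contents('/proc/sys/vm/drop_caches', '3');
}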

mpskovvang
  • Should I merge your answer into mine to make a combined and complete one? – Kevin Kopf Mar 08 '18 at 09:04
  • Yes please, that would be great with a detailed and complete answer. I also found a prettier command: `sync && sysctl -w vm.drop_caches=3`. As bonus info, I found that using `fopen('php://temp', 'w+')` and storing the file temporarily in memory avoids the entire issue. No unreleased memory hanging around. – mpskovvang Mar 08 '18 at 09:49
  • I edited my answer. Feel free to edit it if I missed something – Kevin Kopf Mar 08 '18 at 10:04