
I am currently running a Docker image on Linux, in which I compose videos together with moviepy. Because I work with lots of videos, the process quickly becomes quite heavy. I reached a point where it stopped working: the container exits with code 137, but the OOMKilled flag is false when running `docker inspect <CONTAINER>`.
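
For reference, this is how I read the flag; the `--format` filter is just a shortcut, plain `docker inspect <CONTAINER>` shows the same fields:

```sh
# Exit code and OOM flag of the stopped container (replace <CONTAINER> with its name or ID):
docker inspect --format 'ExitCode={{.State.ExitCode}} OOMKilled={{.State.OOMKilled}}' <CONTAINER>
```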

However, I ran my image again with `docker stats` running in the background, and I saw memory usage climb progressively towards the memory limit (15 GB), at which point the container crashed. This threshold matches the Total Memory parameter reported by `docker info`. So I wonder two things:

  • Does the memory allocated to the container come from a share of my computer's RAM, or from what I call "heap memory" (the computer's disk)? I surely have enough hardware on this computer to provide it, but as it has around 16 GB of RAM, I suppose the limit is RAM. I just wonder why it cannot use heap memory to store things... Because of the VM Docker creates?
  • The only workaround I found was to split my videos into batches, which is more of a quick hack than a fix. Is it possible to tell a Dockerfile to start and run another container once one of them has maxed out its memory? (This would only be possible if the limit is heap memory and not RAM.) A host-side sketch of what I mean follows this list.
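
For illustration, the split-up workaround looks roughly like this as a host-side script. The image name `my-moviepy-image`, the scripts `compose.py`/`concat.py`, and the `--half` flag are placeholders I made up; `--memory=7g` just caps each container explicitly:

```sh
#!/bin/sh
set -e
# Placeholders for illustration: my-moviepy-image, compose.py, concat.py, --half.
# Each half runs in its own container, one after the other, each capped at 7 GB of RAM.
docker run --rm --memory=7g -v "$PWD/out:/out" my-moviepy-image python compose.py --half 1
docker run --rm --memory=7g -v "$PWD/out:/out" my-moviepy-image python compose.py --half 2
# A final pass assembles the two intermediate videos produced above.
docker run --rm --memory=7g -v "$PWD/out:/out" my-moviepy-image python concat.py
```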

It is possible that I am completely missing a point in how Docker runs things; do not hesitate to point out any mistake I'm making. Thanks for reading.

Lucien David
    "Heap memory" is RAM too, not disk, and at the OS level heap and stack aren't counted separately. Are you thinking of swap space, and do you have that configured on your system? Is the kernel OOM-killer killing your process (`dmesg` would show this)? On native Linux there isn't a VM and the container can use all available system memory. – David Maze Nov 11 '20 at 18:19
  • Well, I thought containers could be run on swap space; my bad. When I divided my execution into two separate runs (splitting my resources in half and building two separate videos to assemble afterwards), it worked. So it seems the required memory was simply skyrocketing because of moviepy. I do not think there's a workaround for that; the process I wanted to export to other machines was perhaps just too heavy. – Lucien David Dec 04 '20 at 08:58
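
For anyone hitting the same issue, the host-side checks suggested in the comments look roughly like this:

```sh
# Did the kernel OOM-killer terminate a process? (run on the host, not in the container)
dmesg | grep -iE 'oom|killed process'
# Is any swap configured, and how much memory is free?
swapon --show
free -h
```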

0 Answers