I am currently running a Docker image on Linux, where I compose videos together with moviepy. Because I work with lots of videos, the process quickly becomes quite heavy. I reached a point where it stopped working with exit code 137 (which I understand means the process was killed with SIGKILL), yet the OOM flag (`OOMKilled`) is `false` when running `docker inspect <CONTAINER>`.
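For reference, here is how I check both fields at once (the container name is a placeholder):

```sh
# Print the exit code and the OOM flag of a stopped container
docker inspect --format 'ExitCode={{.State.ExitCode}} OOMKilled={{.State.OOMKilled}}' <CONTAINER>
```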
However, I ran my image again with `docker stats` running in the background, and I watched the used memory increase progressively towards the memory limit (15 GB), at which point the container crashed. That threshold matches the `Total Memory` value reported by `docker info`.
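These are roughly the commands I used to watch it happen (the `--format` templates are only there to trim the output):

```sh
# Follow memory usage of running containers in a second terminal
docker stats --format 'table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}'

# Print the total memory Docker sees, in bytes
docker info --format '{{.MemTotal}}'
```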
So I wonder two things:
- Is the memory allocated to a container a share of my computer's RAM, or can it be backed by disk (what I was thinking of as "heap memory")? I surely have enough hardware on this computer to provide, but as it has around 16 GB of RAM, I suppose the limit is RAM. I just wonder why it cannot spill over to disk to store things... Because of the VM Docker creates? (See the `--memory-swap` sketch after this list.)
- The only workaround I found was to split my videos into batches, which is more of a quick hack than a fix. Is it possible to tell a Dockerfile to start and run another container once the first one has maxed out its memory? (That would only be possible if the limit is disk-backed and not RAM; see the batch loop after this list.)
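Regarding the first question, this is how I imagine a RAM limit plus swap could be configured, assuming I am reading the `--memory`/`--memory-swap` flags right (untested on my setup, and `my-image` is a placeholder):

```sh
# Cap the container at 8 GB of RAM and allow up to 4 GB of swap on top:
# --memory-swap is the TOTAL of RAM + swap the container may use.
docker run --rm --memory=8g --memory-swap=12g my-image
```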
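And for the second question, my current hack amounts to something like the loop below; `moviepy-image` and `compose.py` are placeholder names for my own image and script:

```sh
# Run one short-lived container per batch of videos, so memory is
# released between batches instead of accumulating in one process.
for batch in batch_0 batch_1 batch_2; do
    docker run --rm -v "$PWD/videos:/videos" \
        moviepy-image python compose.py "/videos/$batch"
done
```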
It is possible that I am completely missing something about the way Docker runs things, so do not hesitate to point out any mistake I'm making. Thanks for reading.