We're running several containers on a single Docker host, mainly R and Python apps for data analysis. When I load a big table in one of these containers, the container's memory footprint on the Docker host increases.
However, when I close the Jupyter notebook or end the R session, the container's memory footprint on the host appears to remain unchanged. It seems that a Docker container's memory consumption can only go up, never down.
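For concreteness, this is the kind of check I mean by "memory footprint on the host" (one way to see it, at least; just the standard docker CLI):

```
# one-off snapshot of per-container memory usage as the host accounts it
docker stats --no-stream
```

The MEM USAGE reported for the container stays at the post-load level even after the notebook or R session has been closed.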
I know that Linux in general holds on to memory that is not currently needed by applications and uses it for caching. But how is this handled in the case of Docker containers? From each individual container's perspective there is plenty of memory (we don't want to limit the memory available to the containers), so even if memory is no longer needed inside a particular container, it would apparently stay "occupied" by that container and therefore be inaccessible to the other containers. And the host can't tell whether that memory is really needed or merely used for caching.
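If it is relevant, this is the kind of thing I have been poking at to try to tell cache apart from "real" usage, assuming cgroup v1 with the default cgroupfs layout (the path will differ with the systemd cgroup driver or cgroup v2, and <container-id> is just a placeholder for the full container ID):

```
# cache vs. rss for one container, straight from the memory cgroup
grep -E '^(cache|rss) ' /sys/fs/cgroup/memory/docker/<container-id>/memory.stat
```

Even with that breakdown, though, it is not clear to me what the host actually does with this information when another container needs the memory.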
So how is this dealt with? I can imagine a situation where several people have started containers, loaded or generated big data sets in them only temporarily, and now the host's memory is completely occupied because none of it is ever freed.
I'm pretty sure that this is not how it works, so can someone explain this to me, please?
Many thanks,
Enno