
I have a host with 8 cores / 16 GB RAM. We use cgroups to allocate CPU and memory to our custom application, and we are trying to create a static resource partition between our custom application and Docker. For example, we want to allocate:

4 CPU cores / 8 GB RAM --> docker
3 CPU cores / 6 GB RAM --> custom_app_1

The remaining resources are left for the OS.

We have managed to perform the segregation for custom_app_1. The question is: how do I set a default memory and CPU limit for our containers without using the --memory or --cpus flags on each individual container? I don't need to limit each container, but I need to make sure that all containers running on the host combined cannot exceed 8 GB RAM and 4 CPU cores; otherwise they will fight for resources with custom_app_1.

When I run docker stats, each container sees 16 GB RAM. How do I configure things so that they only see 8 GB RAM and 4 CPU cores instead?
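For context, the segregation we did for custom_app_1 can be sketched as a systemd slice (the slice name and the systemd-run invocation are illustrative assumptions, not necessarily our exact setup):

```ini
# /etc/systemd/system/custom-app.slice  (hypothetical name)
[Unit]
Description=Slice limiting custom_app_1 to 3 CPU cores / 6 GB RAM

[Slice]
CPUAccounting=true
CPUQuota=300%
MemoryAccounting=true
MemoryLimit=6G
```

Starting the application inside this slice, e.g. `systemd-run --slice=custom-app.slice /opt/custom_app_1`, would have the kernel enforce the 3-core / 6 GB cap on it.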

jlim
  • Why don't you control the docker daemon's resources? https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html. I am assuming that it would limit all containers launched overall, but I am not 100% sure. – Tarun Lalwani Sep 25 '17 at 18:15
  • I have added `MemoryMax=1G` to the `/usr/lib/systemd/system/docker.service` file, but I still see containers using the entire host memory (checked via `docker stats`). I would appreciate it if someone could guide me on setting up `cgroupfs`, or on limiting `dockerd` so its containers cannot use all the host memory. – jlim Oct 04 '17 at 04:45

1 Answer


What you need to do is create a systemd slice with a memory limit.

# /etc/systemd/system/limit-docker-memory.slice
[Unit]
Description=Slice with MemoryLimit=8G for docker
Before=slices.target

[Slice]
MemoryAccounting=true
MemoryLimit=8G

Then configure that slice as the default cgroup parent in /etc/docker/daemon.json:

{
    "cgroup-parent": "limit-docker-memory.slice"
}

Reload systemd's unit files and restart Docker:

systemctl daemon-reload
systemctl restart docker
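To confirm the slice limit actually took effect, you can inspect the cgroup hierarchy directly (a sketch assuming the cgroup v1 layout on a typical systemd host; the inspection commands are shown as comments since they need a live daemon):

```shell
# After the restart, the slice should appear in the memory hierarchy:
#   systemd-cgls | grep -A2 limit-docker-memory.slice
# and the kernel-enforced limit can be read directly:
#   cat /sys/fs/cgroup/memory/limit-docker-memory.slice/memory.limit_in_bytes
# That file should print 8G expressed in bytes:
echo $((8 * 1024 * 1024 * 1024))   # 8589934592
```

Containers started after the restart should show up under the slice in `systemd-cgls` output, inside their own `docker-<id>.scope` children.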

See the relevant section of the documentation:

DEFAULT CGROUP PARENT

The --cgroup-parent option allows you to set the default cgroup parent to use for containers. If this option is not set, it defaults to /docker for fs cgroup driver and system.slice for systemd cgroup driver.

If the cgroup has a leading forward slash (/), the cgroup is created under the root cgroup, otherwise the cgroup is created under the daemon cgroup.

Assuming the daemon is running in cgroup daemoncgroup, --cgroup-parent=/foobar creates a cgroup in /sys/fs/cgroup/memory/foobar, whereas using --cgroup-parent=foobar creates the cgroup in /sys/fs/cgroup/memory/daemoncgroup/foobar

The systemd cgroup driver has different rules for --cgroup-parent. Systemd represents hierarchy by slice and the name of the slice encodes the location in the tree. So --cgroup-parent for systemd cgroups should be a slice name. A name can consist of a dash-separated series of names, which describes the path to the slice from the root slice. For example, --cgroup-parent=user-a-b.slice means the memory cgroup for the container is created in /sys/fs/cgroup/memory/user.slice/user-a.slice/user-a-b.slice/docker-.scope.

This setting can also be set per container, using the --cgroup-parent option on docker create and docker run, and takes precedence over the --cgroup-parent option on the daemon.

Tarun Lalwani
  • Thank you, that seems to work fine! The daemon/dockerd correctly places container scopes inside the given slice. One question though: when I run `docker stats`, I still see it reporting the entire host memory, but when I issue `systemd-cgls` I do see that the container I spin up falls under the parent slice I created. Unfortunately, `docker stats` does not show the slice limit. Unless I'm missing something: although the parent cgroup has a memory limit, how do I confirm it is actually enforced and usage cannot go beyond it? – jlim Oct 07 '17 at 02:58
  • You can try and test it using this: https://unix.stackexchange.com/questions/99334/how-to-fill-90-of-the-free-memory – Tarun Lalwani Oct 07 '17 at 05:53
  • Thanks, the `stress` tool helps – jlim Oct 08 '17 at 17:49
  • Sorry to be bothering you, but the `/etc/docker/daemon.json` is broken as is; there shouldn't be a comma in it – Elouan Keryell-Even Oct 05 '18 at 16:00
  • Is there a way to do this in the case where `systemctl: command not found`? I think Ubuntu 14.04 uses Upstart instead of Systemd. – dnk8n Nov 07 '18 at 09:16
  • @DeanKayton, you can replace upstart with systemd; that is possible for sure. I'm not sure whether upstart supports memory limiting, you will have to check that out – Tarun Lalwani Nov 07 '18 at 09:24
  • You cannot replace upstart with systemd on 14.04... upstart supports memory limits but it does not do cgroups. I am not sure why your original answer involved systemd at all. It should be possible to configure the memory cgroup limits with docker itself. – CameronNemo Dec 03 '18 at 00:37
  • @CameronNemo, yep you are right, I think you can only do it from Ubuntu 15.04+. For the cgroup setting this is to limit the docker itself within a cgroup instead of just the containers – Tarun Lalwani Dec 04 '18 at 05:55
  • Thanks a bunch for this! I was able to throttle my Fedora workstation's CPU usage with [CPUQuota](https://gist.github.com/jpcaparas/95d6f81e70e7490713e60b2a484c32a4) (inspired by your answer). Was able to see throttling with `htop` and running the `while true; do true; done` command inside the container shell. I can see that it doesn't hover over 60%. – jpcaparas May 12 '19 at 10:45
  • More details on the Slice schema at https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html – deepelement Oct 12 '19 at 00:05
  • Today I learned that in Ubuntu 18.04, the name `limit-docker-memory.slice` implies a Slice hierarchy. In order to correctly set the `cgroup-parent`, the value should be `/limit.slice/limit-docker.slice/limit-docker-memory.slice`. – Genzer May 06 '20 at 11:29
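The answer above caps memory only; the CPU half of the question (holding all containers to 4 cores) can be handled by the same slice, as the CPUQuota comment above hints. A sketch, assuming a systemd version that supports `CPUQuota` (v213+):

```ini
# /etc/systemd/system/limit-docker-memory.slice
[Unit]
Description=Slice with MemoryLimit=8G and a 4-core CPUQuota for docker
Before=slices.target

[Slice]
MemoryAccounting=true
MemoryLimit=8G
CPUAccounting=true
CPUQuota=400%
```

`CPUQuota=400%` corresponds to 4 full cores. Note that `docker stats` will still report the host's total memory as the limit; the cap is enforced by the kernel at the cgroup level, which is why the stress test suggested in the comments is the way to confirm it.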