
I need to use the cgroup cpu controller (mounted at ./cgroup/cpu) to limit the CPU usage of a particular process. At present I have achieved this, but every time I have to start the process first, then get its PID, and then write the PID to the tasks file.
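
For example, what I currently do looks roughly like this (the file name is just a placeholder):

~$ gzip /data/somefile &
~$ echo $! > ./cgroup/cpu/mytestnode/tasks

Note that the process runs unrestricted until its PID lands in the tasks file, which is part of the problem.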

Consider this scenario: I need to run the gzip tool periodically to compress some data. Every time gzip starts it gets a new PID, and the compression usually finishes quickly. In this case it is very tedious to write the gzip PID into ./cgroup/cpu/mytestnode/tasks every time. Is there a better way? For example, writing the process name into tasks instead of a PID.

ZH.sd
  • This should help https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/resource_management_guide/starting_a_process – Tarun Lalwani Mar 16 '21 at 03:55
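
The linked guide describes the cgexec utility from the libcgroup (cgroup-tools) package, which launches a command directly inside an existing cgroup, so no PID handling is needed. A minimal sketch, assuming the mytestnode cpu cgroup from the question already exists:

~$ cgexec -g cpu:mytestnode gzip somefile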

1 Answer

You can use Docker to manage the resources for you at a higher level. Docker allows you to limit a container to a CPU set. Take this example, which limits all of the processes in the container to CPUs 2, 3, and 5, and verifies that the cgroup is indeed limited:

~$ docker run -ti --rm --cpuset-cpus=2,3,5 ubuntu:20.04 more /sys/fs/cgroup/cpuset/cpuset.cpus
2-3,5

So to run gzip and compress somefile in your current folder, running only on CPU 1, you can start Docker as follows:

~$ docker run -ti --rm --cpuset-cpus=1 -v $(pwd):/work ubuntu:20.04 gzip /work/somefile

In addition to CPU pinning, Docker lets you manage the CPU quota. The following is the same example, but limited to only 1% of CPU 1:

~$ docker run -ti --rm --cpuset-cpus=1 --cpus=0.01 -v $(pwd):/work ubuntu:20.04 gzip /work/somefile

You can see the other options for controlling resources in the Docker documentation.
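
For instance, memory can be capped in the same run with the standard --memory flag (the 256m value here is only illustrative):

~$ docker run -ti --rm --cpuset-cpus=1 --cpus=0.01 --memory=256m -v $(pwd):/work ubuntu:20.04 gzip /work/somefile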

jordanvrtanoski