
We have a CentOS 8 web server hosting hundreds of websites, running three services: Apache, MariaDB and PHP-FPM. Because Apache and MariaDB do not support per-account resource limits, all websites share the same access to system resources (storage, database), and a single website can overload the machine (often through database queries) and bring down all the websites hosted there.

We would like to use Linux cgroups, and possibly other container features, to limit the resources a single website can use: mostly the number of PHP processes, I/O operations, and database queries per second.

One way to achieve this (not necessarily the most performant) is to run a dedicated group of services (Apache + MariaDB + PHP-FPM) for each website or group of websites, each with its own resource-limit configuration; that is, to have hundreds of simultaneously running service groups.
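The dedicated-group idea maps naturally onto systemd template units and slices, without any container images. A minimal sketch, assuming hypothetical unit and site names (paths and pool layout depend on your packaging; the IO* directives require booting with the unified cgroup v2 hierarchy):

```ini
# /etc/systemd/system/php-fpm@.service (hypothetical template unit)
# One PHP-FPM pool per site instance, e.g. php-fpm@example-site.service
[Unit]
Description=PHP-FPM pool for site %i
After=network.target

[Service]
# Assumes a per-site pool config at /etc/php-fpm.d/sites/%i.conf
ExecStart=/usr/sbin/php-fpm --nodaemonize --fpm-config /etc/php-fpm.d/sites/%i.conf
# Put each instance in its own slice so limits apply per site
Slice=site-%i.slice
```

```ini
# /etc/systemd/system/site-example-site.slice (hypothetical slice)
[Slice]
CPUQuota=50%
MemoryMax=512M
TasksMax=20
IOReadIOPSMax=/dev/sda 200
```

Enabling `php-fpm@example-site.service` then starts a pool whose process count, memory, CPU and I/O are capped by its slice, while the slice files remain ordinary editable files in the root filesystem.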

What is the most straightforward way to set this up, starting from the existing setup of three working services? Performance is secondary, and we do not want to get into creating and running hundreds of container images. Instead, we would like everything defined in an easily runtime-modifiable form in the root filesystem, with all services controlled through systemd. Thus, no Docker containers.
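Since systemd already places every service in its own cgroup, many limits can also be adjusted at runtime without restarting anything. A sketch with hypothetical unit names:

```shell
# Limit the number of tasks (processes/threads) one php-fpm instance may spawn;
# --runtime keeps the change non-persistent, drop it to write a drop-in file
systemctl set-property --runtime php-fpm@example-site.service TasksMax=20

# Cap memory and CPU for a whole site slice
systemctl set-property site-example-site.slice MemoryMax=512M CPUQuota=50%

# Inspect the resulting setting
systemctl show php-fpm@example-site.service -p TasksMax
```

`set-property` writes the values into the live cgroup immediately, which fits the "runtime-modifiable form in the root filesystem" requirement.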

Ján Lalinský
  • I don't think cgroups cover such an abstract metric as database queries per second. cgroups are a kernel-level feature, and they only know about things the kernel is aware of: processes, memory/CPU usage per process, etc. – Tero Kilkanen Feb 07 '22 at 07:33
  • @TeroKilkanen Of course; some limits will be realized through cgroups, while others will have to be realized by running dedicated daemons with specific configuration. – Ján Lalinský Feb 07 '22 at 23:29
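As the comment notes, cgroups cannot count database queries, but MariaDB itself supports per-account resource limits, which can approximate a queries-per-second cap without dedicated daemons. A sketch with a hypothetical account name (note the limits are per hour, not per second):

```sql
-- Cap one site's database account at ~10 queries/second on average
-- and at most 10 simultaneous connections
ALTER USER 'site_example'@'localhost'
  WITH MAX_QUERIES_PER_HOUR 36000
       MAX_USER_CONNECTIONS 10;
```

Combined with one database account per site, this covers the database side of the isolation that cgroups alone cannot.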
