
Hello everyone, I am wondering if there is a good administrative tool out there to limit user resource consumption in a Linux environment.

To give more detail: I am currently trying to solve a problem where users consume entire CPUs across systems. Whether they are running a simulation or testing bad concurrent code, a single user can max out a server or a set of computers.

I am aware of /etc/security/limits.conf, but I don't see a way to limit a user or group based on CPU usage, which makes it of little use when a single process can take up multiple cores.

If I missed something in the limits.conf manual page, feel free to point me in that direction; if you have another suggestion, I would love to hear it!

Thanks!

3 Answers


I would look at cgroups: they let you 'slice' the system by assigning shares to a group of processes.

You can set limits on memory, CPU, disk I/O, etc.

For example, to limit a group foo to roughly 10 MB of memory (cgroup v1, with the memory controller mounted at /sys/fs/cgroup/memory):

 echo 10000000 > /sys/fs/cgroup/memory/foo/memory.limit_in_bytes
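Since the question is specifically about CPU, here is a minimal sketch of capping CPU time with the cgroup v1 cpu controller. The group name foo is illustrative and the commands assume cgroup v1 mounted at /sys/fs/cgroup; they must be run as root:

```shell
# Create a cgroup 'foo' under the cpu controller
mkdir -p /sys/fs/cgroup/cpu/foo

# Allow 'foo' at most 200ms of CPU time per 100ms period,
# i.e. roughly 2 cores' worth across all processes in the group
echo 100000 > /sys/fs/cgroup/cpu/foo/cpu.cfs_period_us
echo 200000 > /sys/fs/cgroup/cpu/foo/cpu.cfs_quota_us

# Move an existing process into the group (replace $PID with a real PID)
echo $PID > /sys/fs/cgroup/cpu/foo/tasks
```

Unlike nice values, the quota is a hard ceiling: the group's processes are throttled once they exhaust it, no matter how idle the rest of the machine is.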
silviud
  • Oh no!!! I was hoping to avoid cgroups! I had been looking into them and was intimidated, but from what you said and what I've already read about them, it looks like I will have to! –  Oct 01 '16 at 16:45
  • You can also use cgroups to dole out CPU resources using the realtime scheduler - cgroups are also likely the most sane way to do that. – Spooler Oct 01 '16 at 18:38

There are a few options to consider here.

One, you could set a per-user or per-group nice value in /etc/security/limits.conf by adding lines like these:

@users      -       priority        10
username    -       priority        19

This does not stop users from creating a huge number of processes, though. You could further restrict users by limiting their maximum open file count, which reduces the number of open handles they can hold to a manageable level.
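As a sketch, hard caps along these lines in /etc/security/limits.conf would enforce that (the group name and values are illustrative; the nproc line also blunts accidental fork bombs):

 # max open file descriptors per user in group 'users'
 @users    hard    nofile    4096
 # max simultaneous processes per user (mitigates accidental fork bombs)
 @users    hard    nproc     200

These take effect at login via pam_limits, so existing sessions keep their old limits until users log in again.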

You also might consider using virtual machines for this task. That kind of isolation would be very helpful in a situation where users are consistently saturating resources on a shared system. Using a good orchestration and deployment system such as OpenStack or CloudStack (or even just plain libvirt and KVM at smaller scales) can make things much easier, especially given the number of "ease of use" deployment tools for those stacks, such as Mirantis Fuel.

Spooler
  • Hmmm, I had also seen stuff about nice values, but I don't know if that will accomplish what I need. The issue we have is users ssh'ing into our servers/computers and running simulations that saturate resources, or executing concurrent code that fork bombs our stuff (most of the time by accident). I don't think VMs will work because we could have anywhere from 1 to 100 users on a server, and I am not sure how to set up resource allocation. In a perfect world I would say that any user in a given group cannot exceed 3 cores @ 100% for more than 1 hour. –  Oct 01 '16 at 16:12
  • While it's possible to design something that does that kind of very specific scheduling, you would be subverting a very intelligent CPU scheduler that is already in use on this system. Nice values can be very effective in limiting process time, and will allow a system to colocate users more dynamically than strict resource allocation (if that's out of the question via VMs). One other thing I can think of is using the realtime scheduler for processes and disallowing other scheduling for those processes. You could guarantee a process 10% or so of your CPU time. No more, no less. It's quite strict. – Spooler Oct 01 '16 at 16:16
  • Thanks so much for your input, SmallLoanOf1M, I really do appreciate it! I would like to follow your VM suggestion, but that would be a long-term project for my current infrastructure; it is definitely something I will keep in mind. Thanks again! –  Oct 01 '16 at 16:43

You can take a look at CloudLinux OS and its "CageFS" and "LVE" features. These let you create a user and define per-user limits on RAM, CPU, disk space, IOPS, and MySQL queries.

x86fantini