
I'm building a shared Linux server for a few Unix/Linux classes. Anywhere from 100 to 500 students will be using this system, but I expect no more than 50 concurrent users. I'm trying to set ulimits (is that the best way?) to ensure that no one user can crash the system. This doesn't need to be ironclad security, just enough to prevent a random fork bomb or an intentional overload.

The system itself is moderately powerful, with two sockets and 16 GB of RAM. The student work won't be anything high-performance: mostly learning shell scripting, web application development, database interactions, and so on.

This is what I have so far. I'm really just shooting from the hip here:

# Test settings for lab
@student        hard    nproc           20
@student        hard    memlock         50000
@student        hard    locks           20
@student        hard    cpu             10
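
Something like this should confirm whether the limits actually take effect after a re-login (student1 is just a throwaway account in the student group, and depending on the distro you may need pam_limits enabled in /etc/pam.d/su for su sessions to pick these up):

# start a fresh login shell as a test student and print the relevant limits
su - student1 -c 'ulimit -u -l -x -t'
# -u = nproc, -l = memlock (KiB), -x = locks,
# -t = cpu time (reported in seconds; the limits.conf cpu item is in minutes)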

Too low? Too high? I know there are many other options, but I don't want to overthink this. I'm open to other suggestions, and to any obvious settings I'm missing.


1 Answer


nproc is probably too low. I'd have expected to see an nproc figure like that on a Sun server for a student cluster 15 years ago. A modern Linux desktop environment will eat through that with a window manager and a few background tasks alone (I'm presuming you expect them to log in via XDM or something similar).
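
Something like the following should give you a ballpark for a sane cap; RLIMIT_NPROC counts threads as well as processes, so count tasks with ps -L (student1 is a placeholder for any logged-in student):

# count all tasks (processes + threads) belonging to one logged-in student
ps -L -u student1 --no-headers | wc -l
# a graphical desktop session will usually blow well past 20 on its own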

memlock restricts how much memory a user can lock into RAM with mlock() or mmap(MAP_LOCKED). The limits.conf value is in KiB, so 50000 is about 50 MB, which is probably fine for this kind of work.
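
One unit trap to watch: the limits.conf memlock value is in KiB, while the kernel reports the limit in bytes. A quick check of what a student shell actually ends up with (run from the student's login shell):

# memlock as seen by the shell (bash reports it in KiB)
ulimit -l
# the same limit straight from the kernel, in bytes
grep 'Max locked memory' /proc/$$/limits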