
When CPU-intensive processes are running on the server, chances are we cannot log in to the machine over SSH; the connection fails with an 'operation timeout' error.

According to this post answered by peterph, there should be a way to guarantee sshd service under any circumstances. But I don't know how to create a cgroup for sshd, give it a non-negligible share of CPU time, and give these "remote access" processes a much higher CPU share than the rest.

Could anyone tell me how to configure this in `/etc/cgconfig.conf` and `/etc/cgrules.conf`? Many thanks.
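Something like the following is what I have in mind (just a sketch based on my reading of the libcgroup documentation; the group name and the share value are guesses on my part, not tested settings):

```
# /etc/cgconfig.conf -- define a cgroup for sshd with an elevated
# CPU weight; the default cpu.shares is 1024, so 2048 should give
# this group roughly twice the weight of an ordinary group when
# the CPU is contended.
group sshd {
    cpu {
        cpu.shares = 2048;
    }
}
```

```
# /etc/cgrules.conf -- route sshd processes (for any user) into
# the sshd group for the cpu controller.
# <user>:<process>    <controllers>    <destination>
*:sshd                cpu              sshd/
```

As I understand it, `cpu.shares` is a relative weight that only takes effect when the CPU is saturated, so sshd should still get its slice even when batch jobs peg every core.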

KAs

1 Answer


I would try to fix the root cause of this resource contention rather than resort to cgroup trickery. Even on my systems with intensive workloads, SSH access has never been an issue.

  • What type of server is this?
  • What are the specifications of the hardware/OS/distribution?
  • And what the heck is happening on the system to prevent sshd from responding consistently?

Also, is there any chance that you're facing entropy depletion?
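If entropy starvation is a suspect, it's easy to check (this proc file is standard on any Linux box):

```
# Available bits in the kernel entropy pool; values that stay in
# the low hundreds (or lower) point to entropy starvation, which
# can stall sshd during key exchange.
cat /proc/sys/kernel/random/entropy_avail
```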

ewwhite
  • Cannot upvote this answer enough. Restricting resources is OK, but fixing the root cause is the best way to solve the problem. – Craig Watson Jun 12 '15 at 12:14
  • My environment is CentOS 6.4 with 60 GB RAM and 24 CPUs. If the root cause is that iowait is very high with 2000 processes executing `hadoop get` simultaneously, would it help to set the maximum number of processes a user can fork? Thanks. – KAs Jun 12 '15 at 12:26
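For what it's worth, a minimal sketch of the per-user process cap raised in the last comment, assuming the standard pam_limits mechanism on CentOS 6 (the `hadoop` username and the numbers are placeholders, not recommendations):

```
# /etc/security/limits.conf -- cap how many processes the
# (hypothetical) 'hadoop' user may run; enforced by pam_limits
# at login time, so already-running sessions are unaffected.
hadoop  soft  nproc  1024
hadoop  hard  nproc  2048
```

Note this only limits the number of processes; it does nothing about the iowait generated by processes that are already running.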