nice -n 19 find . -type f \( -iname "*.php" -o -iname "*.js" -o -iname "*.inc" \) \
  -exec zip live.zip '{}' \;

The above command runs on our live CentOS server as though the nice command were absent. After 60 seconds or so I can see with the top command that zip is at the top of the list. The server starts to fall over and I have to kill the command.

David Schmitt
zzapper
  • '*the Server starts to fall over*' - What do you mean by this? Secondly, `nice` is just to set a process' priority. If it is the *only* process consuming a lot of CPU time and resources, then it *will* eat up CPU time, until a process with higher priority comes along. – ArjunShankar May 15 '12 at 11:28
  • I see my mysql processes building up rapidly, then I get connection failures. Is it possible that mysqld has too low a priority? – zzapper May 15 '12 at 14:15
  • I do not think increasing MySQL priority will help if it is already higher priority than your batch job. – ArjunShankar May 15 '12 at 14:27

2 Answers


nice only sets a process' scheduling priority. It does not limit how much CPU time it consumes.

So if a low-priority process wants to consume a lot of CPU time, it will get it, until a process with higher priority comes along and needs the CPU.

The point being: If the CPU doesn't have anything else to do, then why not provide all CPU time to the process that wants it, even if it doesn't have high priority?
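A quick way to see that nice only sets the niceness value, and says nothing about a CPU quota, is that nice with no arguments prints the current niceness:

```shell
# With no arguments, `nice` prints the niceness of the current shell.
nice             # usually prints 0
# A child started under `nice -n 19` inherits niceness 19; its scheduling
# priority is lower, but it can still use 100% CPU on an otherwise idle box.
nice -n 19 nice  # prints 19
```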

If you want to cap CPU usage at a percentage of the maximum, consider using something like cpulimit.
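A sketch of how that might look, assuming the cpulimit tool is installed (it is not part of a base CentOS install); the exact invocation and whether child processes are included varies between cpulimit versions, so treat this as illustrative:

```shell
# Wrap the whole job and cap it at roughly 25% of one core.
# (Hypothetical invocation; check `cpulimit --help` on your system.)
cpulimit -l 25 sh -c \
  'find . -type f \( -iname "*.php" -o -iname "*.js" -o -iname "*.inc" \) \
     -exec zip live.zip {} +'

# Or attach to an already running process by PID:
# cpulimit -p 12345 -l 25
```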


EDIT:

Other reasons why zip might be slowing things down horribly are:

  1. Disk I/O: Which you can control with ionice on some distros (not sure if CentOS has it by default) - David Schmitt pointed this out in a comment below.

  2. zip might allocate a lot of memory and swap out other processes. Then, when those processes wake up (say mysqld gets a query), they are sluggish. You might be able to do something about this by reducing swappiness. But that is a system-level parameter and you probably want to leave it untouched.
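Hedged sketches of both mitigations. ionice ships with util-linux and needs an I/O scheduler that honours priorities, and swappiness is a system-wide knob, so both are assumptions about your particular setup:

```shell
# 1. Run the job with idle I/O priority as well as lowest CPU priority
#    (ionice may be unavailable on older CentOS):
ionice -c3 nice -n 19 zip -r live.zip . -i '*.php' '*.js' '*.inc'

# 2. Inspect how aggressively the kernel swaps (the default is usually 60):
cat /proc/sys/vm/swappiness
# Lowering it needs root and affects the whole system, hence commented out:
# sysctl vm.swappiness=10
```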

ArjunShankar
  • Arjun, I understand; it would appear that once zip grabs the CPU it doesn't let go when a higher-priority task, e.g. mysqld, comes along. – zzapper May 15 '12 at 11:42
  • 'letting go' of CPU is not up to the process. The kernel interrupts lower priority processes when one of higher priority comes along. You can read more about this here: http://oreilly.com/catalog/linuxkernel/chapter/ch10.html – ArjunShankar May 15 '12 at 11:46
  • Anyway, you should consider 'cpulimit' which I mentioned in my answer. It was written with exactly the same kind of issue in mind as yours, i.e. making sure batch jobs don't hog the CPU. – ArjunShankar May 15 '12 at 11:48
  • Also the process could create enough IO to hinder the rest of the services. There is an `ionice` too in recent distributions. – David Schmitt May 15 '12 at 12:18
  • @DavidSchmitt - Yes. I have a feeling that another possible problem could be: `zip` swaps out a lot of other process' memory, and then on getting preempted, things get really slow until they are swapped back. – ArjunShankar May 15 '12 at 12:20
  • Would tar be any better? (and no ionice on my Centos) – zzapper May 15 '12 at 14:44
  • @zzapper - `tar` by itself does not do any compression. It merely archives multiple files into one. So it should be faster/cheaper. `tar` when used with `gzip` seems to be a little 'faster', at the cost of producing a slightly bigger tarball, according to [this guy's simple test](http://birdhouse.org/blog/2010/03/08/zip-vs-tar-gzip/). – ArjunShankar May 15 '12 at 15:37
  • @ArjunShankar you are right in fact I don't need to do or rather shouldn't do the compression on the live server. – zzapper May 16 '12 at 09:15
  • @zzapper - Yes `tar` will be a lot cheaper on resources if all you care about is archiving into a tarball, and not really compression. AFAIK, if you *want* to use compression, then `bzip2` has an option for doing it with low memory. Check out the 'MEMORY MANAGEMENT' section in `man bzip2` manual page. – ArjunShankar May 16 '12 at 09:22

From the comments I infer that your server is low on memory and that this is not actually a CPU, I/O, or prioritization problem at all.

Try replacing the zipping with a streaming solution like tar. This should reduce the required memory considerably:

find . -type f \( -iname "*.php" -o -iname "*.js" -o -iname "*.inc" \) -print0 \
    | tar cvzf live.tar.gz --null -T -

Using nice on this command is still an option to reduce the impact further. It has to be balanced against possibly longer runtimes and the other resources (especially memory) that are held in the meantime.
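If, as discussed in the comments, the compression itself isn't needed on the live box, a plain tar (no z) keeps the CPU and memory cost lower still, and the archive can be compressed later on another machine. A sketch, assuming GNU tar:

```shell
# Archive only; cheap on CPU and memory:
find . -type f \( -iname "*.php" -o -iname "*.js" -o -iname "*.inc" \) -print0 \
    | tar cf live.tar --null -T -

# Later, on a machine that is not serving traffic:
gzip live.tar            # produces live.tar.gz
```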

David Schmitt
  • BTW you are missing a -o in the above expression (didn't affect the speed much). I prefixed it with a 'nice' but don't know if that was necessary. – zzapper May 15 '12 at 15:21
  • the `-o` was already missing in the question, so I added it there too. – David Schmitt May 16 '12 at 11:56
  • @DavidSchmitt "top" is saying that our mysqld is consistently running at 60% memory so I guess your suspicion that this is really a memory problem is likely correct! – zzapper May 18 '12 at 16:21