
Very basic doubt: I have an application running on a VM with 2 vCPU. The average load is a bit less than 50% on both CPUs.

Does that mean that 1 CPU would be enough for my application? Or do more CPUs benefit the application by letting threads run in parallel?

Edit: Here is a real example from my system, which has 8 CPUs, over a month. The data are normalised so that 100% = 8 CPUs. I wonder whether this information is enough to answer my question, basically whether the system is oversized.

[CPU load graph]

[CPU usage graph]

Glasnhost

2 Answers


It's not quite that simple. One thread or process that burns as much CPU as possible will max out at 50% CPU on a dual-CPU system. (Some systems will instead show it as 100% of one CPU, because their maximum is 200% then.)

If you have two threads each using 50% of one core (so they don't max it out), they may well run about as fast on a single core, but then you will see 100%. (This doesn't take into account that the machine has other things to do, and that context switching causes overhead.)

For instance, if you have two threads that sleep 50% of the time and calculate stuff the other 50% of the time, one CPU can alternate the sleeping and calculating of those two threads so that 100% of the CPU is used.
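Not part of the original answer, but that interleaving idea can be sketched with a toy timeline model (the slot count and duty cycles below are made-up illustration values):

```python
# Toy timeline: each thread alternates "work" and "sleep" slots.
# Two 50%-duty-cycle threads can interleave on a single core.

SLOTS = 10

# Thread A works in even slots, sleeps in odd slots (50% duty cycle).
a = [slot % 2 == 0 for slot in range(SLOTS)]
# Thread B works in odd slots, sleeps in even slots (50% duty cycle).
b = [slot % 2 == 1 for slot in range(SLOTS)]

# One core can run whichever thread is ready in each slot.
core_busy = [wa or wb for wa, wb in zip(a, b)]
utilization = 100 * sum(core_busy) / SLOTS
print(f"single-core utilization: {utilization:.0f}%")  # 100%

# By contrast, one always-busy thread on a 2-core box shows
# 50% aggregate utilization (1 core maxed out of 2).
aggregate = 100 * 1.0 / 2
print(f"2-core aggregate with one busy thread: {aggregate:.0f}%")
```

Of course, real threads don't alternate so neatly; if both happen to be runnable at once, one of them waits, which is exactly the delay the answer mentions.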

Edit:

I thought I'd show some example graphs that illustrate it.

This server runs 16 high-CPU processes, and a few hundred low-CPU ones. In March, I decided to upgrade it to 8 CPUs/cores (hence the jump to 800%), mainly because I had to run extra software on it for a while, which you can see.

[CPU usage graph, 8 CPUs]

For a large part of May through July, 4 cores would have been enough. However, I do know that some of my processes (doing batch processing) would have been delayed.

This is the accompanying load graph:

[Load graph]

It correlates, but as you can see it's not just percentage/100.
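As a rough sketch of that relationship (my addition, not from the answer): load average counts runnable tasks, so dividing it by the core count gives a crude per-core figure you can set beside the utilization graphs. Assuming a Unix-like system where `os.getloadavg()` is available:

```python
import os

# Load average counts runnable tasks (on Linux, also tasks in
# uninterruptible IO wait), while CPU% measures busy time.
# The two correlate, but load is not simply CPU-percentage / 100.
load1, load5, load15 = os.getloadavg()
cores = os.cpu_count()

# Crude per-core figure: above 1.0 means tasks are queuing for CPU
# (or stuck in IO wait); below 1.0 means the cores are keeping up.
per_core = load1 / cores
print(f"1-min load {load1:.2f} over {cores} cores "
      f"-> {per_core:.2f} per core")
```

A sustained per-core value well below 1.0, together with low CPU%, is the kind of evidence that would support downsizing.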

You want such graphs to be able to make informed decisions.

Edit 2, about your graphs:

Interesting. What were these graphs made with? Both of them represent the data the opposite way I'm used to: it seems the CPU utilization goes to 100% for all cores in the system, instead of 100% per core, like my graphs. You can test it by running dd if=/dev/urandom of=/dev/null for a few hours; that will cause one core to max out, and you'll see the effect.

The load graph shows load per core. I've never seen that before. uptime, htop, munin: they all show just the raw load.

Halfgaar
  • Well, so for example: if I have 4 CPUs and the overall load average of the whole system is 0.2%, it looks to me like 0.2*4 = 0.8, i.e. almost one CPU. Could I safely assume that my system is oversized and maybe reduce the CPUs to 2? – Glasnhost Oct 27 '18 at 08:10
  • The concept of 'load' is something else. That is the number of runnable processes. Depending on the OS, this may or may not include IOWAIT. This distinction is important, because processes waiting for external IO aren't burning CPU. In any case, what you should do is collect CPU and load graphs with something like Munin, post them in your original question, together with the task description (because I still don't know how many threads you run), and then we can say if your server is oversized. Spoiler: the answer may be yes, but I'm showing you the road towards the answer. – Halfgaar Oct 27 '18 at 08:17
  • I added some pictures. thank you very much for your clear explanations – Glasnhost Oct 29 '18 at 19:41

You can actually never understand the requirement just by looking at CPU usage. Modern CPUs generate threads in a suitable fashion as per the requirements.

The more cores there are, the more advantage an application can have in running threads.

jision
  • To (likely) clarify the down votes: CPUs don't generate threads. It's really the software's job to do that. Even if you mean hyperthreading, the software still needs to have separate threads running. – Halfgaar Oct 30 '18 at 20:05