
We have a new server with a Xeon 1270 processor. Looking at CPU-Z I see the Core Speed as 1596MHz, with a Multiplier of x16, and a Bus Speed of 99.8MHz.

I've been watching it for a while and sometimes the core speed jumps up to about 3GHz, approaching the advertised clock speed of 3.4GHz on Intel's website.

Is the core speed reported by CPU-Z the same thing as the clock speed on Intel's website? Should I expect the core speed to be at 3.4GHz, rather than 1.5GHz?

The server doesn't have a lot of constant traffic; rather, it gets small bursts that don't seem to be enough to kick the CPU into a higher frequency. Am I correct in thinking that in my scenario, given that the CPU stays mostly constant at 1595MHz, we are getting much less than the benchmarked performance for this CPU?

Would you disable Intel Turbo Boost? We have configured the power options for high performance, but it doesn't seem to make a difference -- I read somewhere that the BIOS probably doesn't let Windows make the necessary changes.

Thank you,
Peter

pbz
  • Are you monitoring performance because it seems like good practice or because of a throughput issue? Does performance monitoring show a bottleneck with the CPU? As Coding Gorilla says, the modern processors will adjust their clock speed to deal with the workload facing them efficiently. – Rob Moir Sep 20 '11 at 20:20
  • I'm "monitoring" it with the goal of having fast page loads. If it really doesn't matter if the frequency is 1.5GHz or 3.4GHz, as counter intuitive as it is, I'd be willing to let it go. Are there any tests/benchmarks about this scenario? – pbz Sep 20 '11 at 20:30

1 Answer


If you check Wikipedia's description of Intel Turbo Boost (http://en.wikipedia.org/wiki/Intel_Turbo_Boost) you'll probably understand this a little better. Essentially, the CPU stays at a lower clock speed to conserve power (strictly speaking, the downclocking is Intel SpeedStep; Turbo Boost is what pushes it above the base clock) until the system decides it needs the extra horsepower, at which point (as you noted) the CPU frequency jumps up for a few seconds to finish that workload, then drops back down.

This doesn't realistically mean you're getting less performance; if the workload demanded more, the clock speed would stay at the higher rate for longer. Since it drops right back down, you simply don't need the extra headroom. I personally would leave it alone and save on your electric bill; your server is obviously not overloaded, so the performance is fine.
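Since benchmarks came up in the comments, here is a rough sanity check you can run yourself. This is a sketch with made-up helper names (`burst`, `time_burst`); whether any cold-vs-warm gap actually shows up depends entirely on your OS power plan and how quickly the frequency governor ramps up, so treat the numbers as indicative only:

```python
import time

def burst(n=2_000_000):
    """CPU-bound busy work, roughly standing in for one short request burst."""
    s = 0
    for i in range(n):
        s += i * i
    return s

def time_burst():
    t0 = time.perf_counter()
    burst()
    return time.perf_counter() - t0

# The first burst may start while the CPU is still at its idle clock;
# back-to-back bursts run after the governor has had time to ramp up.
cold = time_burst()
warm = min(time_burst() for _ in range(5))
print(f"cold burst: {cold * 1000:.1f} ms, best warm burst: {warm * 1000:.1f} ms")
```

If the cold burst is consistently no slower than the warm ones, the ramp-up latency is too small to matter for your page loads, which would support leaving Turbo Boost and SpeedStep alone.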

Coding Gorilla
  • The question is how you define workload. If rendering a page takes, let's say, 800ms, but at full speed it would take 400ms, then, provided it takes 1s for the CPU to bump up the frequency, all my pages would render in 800ms rather than 400ms. That 1s is a guess... – pbz Sep 20 '11 at 20:19
  • It doesn't work that way in practice; the usual way to increase a page's rendering speed (or any other process's completion time, generally) is multi-threading and things like that. In reality, those kinds of things are almost always more affected by factors external to the CPU, such as disk I/O, memory I/O, network I/O, etc. Unless you're doing some extremely intense computations, you're not losing anything. – Coding Gorilla Sep 20 '11 at 20:27
  • What do you mean the CPU clock doesn't affect page rendering? Why are people overclocking then? – pbz Sep 20 '11 at 20:38
  • Typically you will see people overclocking for things like better 3D rendering and physics (games); those require a lot of intense calculations, which perform [marginally] better at higher CPU clocks. I've never heard of someone overclocking a server (not that you couldn't), and it's not 100% true to say it has _no_ impact, but the impact is very minimal. I would go out on a limb and say that if you turned off Turbo Boost and forced the higher clock rate, you would not be able to see the difference in render speeds of a typical web page. – Coding Gorilla Sep 20 '11 at 20:44
  • The way I understand it, this is like automatic "underclocking" to save power, and when it thinks it needs more power it clocks back up to the "base" (which, if I understand correctly, is 3.4GHz; anything higher is Turbo Boost, up to a maximum of 3.8GHz), but normally it runs underclocked at about 1.5GHz. If there is very little difference between 1.5GHz and 3.4GHz then wonderful, but I would need some kind of test/benchmark. Unfortunately I don't have one in front of me to try things out. – pbz Sep 20 '11 at 20:59
  • That's essentially correct (however Wikipedia describes it as the other way around), and it's not that there's no difference in the speed. It's just not as simple as "faster cpu clock means everything gets done faster". Again, you're not going to hurt anything by disabling it, so if that's your preference then do it. – Coding Gorilla Sep 20 '11 at 21:02
  • Just to add onto what Coding Gorilla says: if disabling it does produce some massive leap in performance, that's a bug (whether in the motherboard firmware, OS, or app), not a case of "oh, that's the right way to configure that option". – Rob Moir Sep 21 '11 at 17:38