
I'm building a server for our dev test environment. It only has to support a handful of users, but it still has to have enough power to manage a 100-gig SQL Server database. (Yes, I'll add a bunch of RAM.)

How do I compare all the different Xeons for speed? For that matter, how do I compare a Xeon with a Core i7?

Intel's website has a great tool that compares everything except performance.

Jesse

3 Answers

2

I think you're asking the wrong question. The first question is what your users actually do. You want enough RAM to keep the working set cached, and enough CPU and IO to handle everything that is needed.

Here we go:

  • A handful of users on a dev environment can be very taxing. OTOH, they may not be. I recently used an AMD Phenom II as a build server / TFS server / dev database server, and while you may smile at the performance, let me tell you that during the day, 90% of the time, 5 of the 6 cores are parked.
  • What I do see are problems on the IO side. I have a 4-disc RAID 10 for the OS and virtualization, and during builds the build server thrashes those discs. Badly.

  • Now, the database: this will very likely not tax your CPU, but it will be totally IO-bound. You have no chance of keeping 100 GB in memory (on a decent budget, without making the server SQL-only), and even then it may simply not matter, because transactions go to disc. We cannot say what you need here without knowing WHAT THE DATABASE DOES. 100 GB sounds like a lot, but it may be dead data (texts, images, version control) or active data (financial tick time series that get aggregated and scrubbed). I have 800 GB of the latter, and I run it on about 10 fast discs AND IT USES THEM UP - all while using half a core of my server. Ouch.

In general, I would go for a server with enough RAM to really handle all your stuff, which likely means (unless it is inactive) 16-24 GB for the SQL Server instance ONLY. How many VMs? Depending on the workload and the database access patterns, anywhere from 8 to 32 discs may be appropriate.

I would really look at the CPU last here, unless one of the workloads is something like ray tracing or video encoding on the CPU.

Depending on the active set, you could go with an end-user-grade system (AMD at least goes up to 16 GB ECC on a micro-ATX board - I just bought one, plugged in an Adaptec RAID controller and a SAS cage for 8 x 2.5" discs), or you may need a professional system capable of handling 64+ GB RAM. It really depends. TFS, a build server, lab management, plus "throw-away VMs for developers" may really tax the system.

But, again, your CPU won't be the problem. RAM will be, and discs. Discs really will. Spend a lot on discs.
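The RAM and spindle advice above can be turned into rough arithmetic. Here is a minimal back-of-envelope sizing sketch in Python; the hot-set fraction, per-disc IOPS figure, and RAID 10 write penalty are illustrative assumptions, not measurements of any particular workload:

```python
# Back-of-envelope sizing sketch (illustrative numbers, not vendor specs).
# Assumptions: a 15k rpm disc serves roughly 175 random IOPS, and
# RAID 10 costs 2 back-end writes per logical write.

DB_SIZE_GB = 100
ACTIVE_FRACTION = 0.2          # guess: 20% of the data is "hot"
PER_DISK_IOPS = 175            # typical 15k rpm figure
RAID10_WRITE_PENALTY = 2

def buffer_pool_target_gb(db_size_gb, active_fraction):
    """RAM needed to keep the hot set cached in SQL Server's buffer pool."""
    return db_size_gb * active_fraction

def spindles_needed(read_iops, write_iops,
                    per_disk_iops=PER_DISK_IOPS,
                    write_penalty=RAID10_WRITE_PENALTY):
    """Discs required to serve a random-IO workload on RAID 10."""
    backend_iops = read_iops + write_iops * write_penalty
    # Round up: you can't buy a fraction of a disc.
    return -(-backend_iops // per_disk_iops)

print(buffer_pool_target_gb(DB_SIZE_GB, ACTIVE_FRACTION))  # ~20 GB of RAM
print(spindles_needed(read_iops=1500, write_iops=500))     # 15 discs
```

With these guesses, a 100 GB database with a 20% hot set wants about 20 GB of buffer pool (which lands inside the 16-24 GB range above), and a modest 2,000-logical-IOPS workload already needs 15 spindles on RAID 10 - which is why the discs, not the CPU, dominate the budget.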

TomTom
  • Thanks for the thoughts. RAM is definitely key for a DB box, but we find that on our production DB box, our CPUs are heavily taxed. So for this application, CPU speed matters more than you might think. (Of course, if I could afford it, going RAID 0 over a half-dozen SSDs wouldn't hurt.) – Jesse Mar 11 '11 at 13:55
  • You may err here. Don't get me wrong - I don't doubt that the CPUs on your production box are heavily taxed. BUT: if your DB server is starving for IO, it may simply not get the data needed to use the CPU. If you check your production boxes, you will probably find they have a LOT of hard discs that are very busy. So you may end up with a fast CPU doing nothing BECAUSE YOUR DISCS STARVE IT. – TomTom Mar 11 '11 at 14:19
  • I'm not suggesting buying a fast CPU _instead of_ fast disks. But since I have to buy a CPU, understanding which ones are faster than others will help inform my purchase. – Jesse Mar 11 '11 at 22:16
1

Passmark benchmarks almost all CPUs out there; it is very good for comparison.

PassMark CPU Ratings

0

manage a 100-gig SQL Server database. (Yes, I'll add a bunch of RAM.)

The best performance tests are application-specific, and it sounds like you already know your application. But you need to be able to assess performance before you purchase the unit (you're only designing the one server). Ask your vendor(s) whether they have a configuration with a database server that they can provide some sort of benchmarks for.

Assuming your database won't fit in memory (though it probably could), the next most likely performance bottleneck is not the CPU but rather the I/O to the storage. How many HBAs/RAID cards/etc. will interface with the storage? How many SAS/FC/etc. cables to the enclosures/expanders, and with how many disks each?
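To make the cabling question concrete, here is a rough Python sketch of when the back-end links, rather than the discs, become the bottleneck for sequential scans. The SAS 2.0 line rate, encoding overhead, and per-disc throughput figures are illustrative assumptions:

```python
# Rough check of whether the back-end cabling, not the discs, is the
# sequential-throughput bottleneck. All figures are illustrative.

SAS2_LANE_GBPS = 6.0           # raw line rate per 6 Gb/s SAS lane
ENCODING_EFFICIENCY = 0.8      # 8b/10b encoding overhead
LANES_PER_WIDE_PORT = 4
DISK_SEQ_MBPS = 150            # sequential MB/s per fast spinning disc

def wide_port_mbps(lanes=LANES_PER_WIDE_PORT):
    """Usable MB/s of one x4 SAS wide port after encoding overhead."""
    return lanes * SAS2_LANE_GBPS * 1000 / 8 * ENCODING_EFFICIENCY

def disks_to_saturate_port(disk_mbps=DISK_SEQ_MBPS):
    """How many streaming discs fill one wide port."""
    return wide_port_mbps() / disk_mbps

print(wide_port_mbps())            # ~2400 MB/s per x4 port
print(disks_to_saturate_port())    # ~16 discs
```

Under these assumptions, around 16 streaming discs saturate a single x4 wide port - so past that point you need more cables and HBAs, not just more spindles.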

If your database does fit entirely/mostly in memory, then the memory throughput, uncore performance and cache sizes will have an enormous impact. Intel does not document the uncore clocks (by design?). Optimizing memory throughput often means taking advantage of NUMA, so consider a multi-socket server.

IMO, your database might see enormous bang for the buck from one of the FusionIO/OCZ/LSI/etc. PCIe flash storage devices. You'd probably see a drastic improvement in worst-case performance and in concurrent query/update performance.

Brian Cain