
I'm a newcomer to the world of assembling servers from the ground up. First, I can choose a server board based on my needs. Then, getting the following components is pretty much straightforward (based on the board specifications):

- CPU(s) and their heat sinks.
- RAM 
- Hard Disk(s) 
- ...and the cables required to attach these components to the server board.

Now I need to put the server board (with the attached components) into a server chassis. Besides knowing whether the board physically fits inside the chassis, I have to know whether the chassis is otherwise suitable for the board.

So the question is: how do I choose a server chassis, based on its specifications (such as its electrical input and output requirements), for a specific server board?

Michael Hampton
o-0
  • Why are you assembling your own servers? This is almost always a bad idea. – MDMarra Aug 25 '14 at 12:35
  • Don't build your own server. Buy a complete system and spare yourself the hassle. – Sven Aug 25 '14 at 12:35
  • The norm in the industry is NOT to assemble servers by hand. It's a bit inefficient and fraught with peril :) There's likely not much of a cost savings over buying something with a *warranty*. – ewwhite Aug 25 '14 at 12:36
  • @MDMarra What about learning how the machines I interact with every day are built? – o-0 Aug 26 '14 at 09:52
  • @ewwhite I can get a warranty for the individual components I buy from an online shop. Also, having a warranty for the whole server means I have to wait (i.e., lose time) for someone to fix my machines every time something happens. – o-0 Aug 26 '14 at 09:57
  • @DaveRose That approach does not scale. The answer below by ChrisS covers the major points, but a fully integrated system from a manufacturer, with a support plan and standard components, is the way to go *today*. There was a period when homebuilt and whitebox servers made sense, but things have evolved. A tremendous amount of engineering goes into the design of HP, IBM, and Dell servers, for instance. Anything you build by hand would be inferior from a functionality and support perspective; plus it may not even make sense cost-wise. – ewwhite Aug 26 '14 at 10:03
  • @Davr Why do you have to plug a few boards into each other to do that? Building your own servers is a support nightmare. – MDMarra Aug 26 '14 at 11:07

1 Answer


Direct Answer: Ask the manufacturer what they recommend. I only know of a few manufacturers who make genuinely server-grade equipment, and they all make chassis for their boards as well - where they can ensure a proper fit, access to all components, warranty coverage, and compatible accessories.

Indirect "More Correct" Answer: Professionals don't build white-box servers. They buy HP, Dell, or even IBM servers. There's just no way to buy the components, assemble, test, and guarantee functionality cheaper than the big names can (unless you're doing it with millions of boxes; at that scale you design your own motherboards, like Google does, but that's a very different situation). Most people look at white-box servers as a way to save money - there are three fallacies in that thinking:

  1. They look at the initial price-tag only (called CapEx in accounting jargon). Businesses should always consider the Total Cost of Ownership (TCO), which is the initial cost (CapEx) plus the ongoing costs of using, maintaining, and ultimately replacing that system (OpEx). OpEx is almost always several multiples of CapEx, so buying decisions based only on CapEx typically end up costing more (a toy comparison follows this list).
  2. White-box equipment commonly has incompatibilities, poor documentation, and other "unknowns". A lot of 'server-grade' equipment has connections, extensions, and so on that haven't been standardized by mass production for retail consumers. A perfect example: many Intel motherboards have SATA connectors for their onboard SAS chips. Finding a SATA-to-SAS cable is very difficult because the standard specifically says the cable shouldn't exist - yet Intel decided it was reasonable because most people use SATA drives with their motherboards.
  3. Only fools pay the retail price for HP, Dell, and IBM gear. Don't contact those companies directly (unless you're a huge buyer); contact a reseller. Get quotes from different suppliers. When getting a quote from HP, tell them you're getting a quote from Dell, and vice versa. Just mentioning the competition should get you a ~10% discount from retail prices, if not better.
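To make fallacy #1 concrete, here's a minimal sketch of the CapEx-vs-TCO arithmetic. Every dollar figure and the five-year lifespan are illustrative assumptions, not real quotes:

```python
# Toy TCO comparison: a cheaper white-box vs. a pricier Tier 1 server.
# All figures are made-up assumptions chosen only to illustrate the arithmetic.

def tco(capex, yearly_opex, years):
    """Total Cost of Ownership = initial cost + ongoing cost over the service life."""
    return capex + yearly_opex * years

YEARS = 5  # assumed service life

# White-box: lower sticker price, but more admin time, self-stocked spares, downtime risk.
whitebox = tco(capex=3_000, yearly_opex=2_500, years=YEARS)

# Tier 1 with a support contract: higher sticker price, lower ongoing cost.
tier1 = tco(capex=5_000, yearly_opex=1_200, years=YEARS)

print(f"White-box TCO over {YEARS} years: ${whitebox:,}")  # $15,500
print(f"Tier 1 TCO over {YEARS} years:    ${tier1:,}")     # $11,000
```

The specific numbers don't matter; the point is that a comparison which favors the white-box on CapEx alone can flip once OpEx is counted.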

Experiences:

Warranties aren't just for replacing stuff when it fails. Businesses don't mind the cost of replacing most parts that fail (hard drives cost a few hundred dollars, maybe a thousand for top-of-the-line drives; power supplies are the other common component to fail, again a few hundred at worst). But sending the failed component in and waiting for the replacement can mean long turnarounds, perhaps several days. Long turnarounds can mean downtime, and downtime is hundreds of times more costly than the part that failed (a rough back-of-the-envelope follows). A Tier 1 server should come with a 4-hour or NBD (next business day) warranty, so common components can literally be replaced the same day more often than not (assuming you're in a decent-sized city and call the manufacturer as soon as the component fails). Of course, you can stock replacement parts yourself, but that inflates the CapEx.
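A rough back-of-the-envelope for that claim; the dollar figures and hours here are assumptions for illustration only:

```python
# Back-of-the-envelope: cost of a failed part vs. cost of the downtime waiting for it.
# All figures are illustrative assumptions.

PART_COST = 300               # USD: e.g., a failed power supply
DOWNTIME_COST_PER_HOUR = 500  # USD/hour: assumed cost of the service being down

RMA_HOURS = 72                # self-sourced replacement: ship it back, wait ~3 days
WARRANTY_HOURS = 4            # 4-hour on-site warranty response

rma_total = PART_COST + DOWNTIME_COST_PER_HOUR * RMA_HOURS  # part + 3 days of downtime
warranty_total = DOWNTIME_COST_PER_HOUR * WARRANTY_HOURS    # part itself covered by warranty

print(f"Self-sourced RMA: ${rma_total:,}")      # $36,300
print(f"4-hour warranty:  ${warranty_total:,}") # $2,000
```

Even with these modest assumptions, the downtime dwarfs the part: the $300 power supply costs over a hundred times its price in waiting.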

The place I work at currently is an HP shop. The last time Dell quoted me laptops, they included a 40% discount. We're still all HP, mainly because OpEx > CapEx: we already have the procedures, documentation, infrastructure, images, software, etc. in place for HP laptops. Switching would mean redoing documentation and procedures, building new Windows images for the different hardware, and learning new support, warranty, and driver systems.

Years ago I built my home servers from normal workstation components. I had performance and compatibility issues, but it was just a home server and only I used it, so I could live with it; plus, I took it as a learning experience. Then I got into SuperMicro gear. They're a 2nd-tier manufacturer, but they sell motherboards, add-on cards, and all of that separately. I mixed and matched SuperMicro with other components and ran into some issues, but far fewer than with no-name workstation components. Today my server is a DL385 G6 with an MSA60 for disk space; my switch is a modular ProCurve GL series; and I just picked up a new toy, an MSL6060 tape library for backups (all of this cost me <$1000 USD and some patience).

Disclaimer: Nothing against Dell; they make great equipment. I've used HP gear at work for the last decade or more, so I'm very familiar with it. I've had less-than-stellar experiences with IBM, but everyone's got a favorite manufacturer. I have no financial interest in any of them. Caveat: There are Tier 2 manufacturers who make good equipment, and there are edge cases where using their equipment is the better option. This does not apply to the vast majority of people. Don't fall into the Dunning-Kruger pit.

Chris S
  • Looking for a scientific answer. This should be possible by relating the output power of the chassis's power supply to the input requirements of the server board. – o-0 Aug 26 '14 at 11:49
  • Nice - and wrong. We self-build. Small shop. Main components are always and only SuperMicro (and Adaptec for the RAID controller). Cases, boards. Getting them prefabbed is hard. But I see no compatibility issues when doing that. – TomTom Aug 26 '14 at 11:49
  • @TomTom Would you please answer the question I asked in the first place? I would like to know, scientifically, how to choose a server chassis for a specific board, based on its DC intake. – o-0 Aug 26 '14 at 11:52
  • I can't answer an off-topic question. I generally judge the watt consumption against the planned load - the board is MOSTLY not the main consumer (the discs matter more - one reason I go this way is to have 26 to 72 discs in a rack case (2U, 4U), which no Tier 1 manufacturer supports). Use enough power for the planned load. Simple. (A power-budget sketch follows these comments.) – TomTom Aug 26 '14 at 12:08
  • @TomTom Granted - you're on the fringe of my answer, using almost exclusively SuperMicro gear and a storage card that's widely regarded for its interoperability. Also, you know what you're doing much, much more than most. For someone who thinks matching "DC intake" is all that a case requires, building a white-box isn't going to end well. – Chris S Aug 26 '14 at 13:41
  • True... In @TomTom's case, it may make sense to build. The same goes for HPC and other specialized applications. But there's a ton of testing and research that goes into self-supporting. – ewwhite Aug 26 '14 at 14:14
  • @ChrisS The DC intake is for the server board (with its components), and since the chassis's power supply transforms AC to DC, I don't think that's a bad question to ask. I know cutting out the middlemen will upset some people (mainly middlemen); but, as a software developer, I like to learn about and assemble my own servers. – o-0 Aug 26 '14 at 15:15
  • @DaveRose There are so many other factors in selecting a proper case that the capacity of the power supply is somewhere near the bottom of the checklist (most cases support power supplies of different capacities, and the power the board draws is probably less than the smallest power supply available in a chassis that fits your board). If you want to go into hardware deployment and maintenance, that's fine; but don't assume the knowledge necessary to do a good job is so shallow that a quick answer on SF is all one needs. – Chris S Aug 27 '14 at 13:09
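For what it's worth, TomTom's "enough power for the planned load" rule of thumb can be sketched as a simple power budget. The wattages below are illustrative assumptions, not specs for any particular board or drive:

```python
# Minimal power-budget sketch for sizing a chassis power supply:
# sum the worst-case draw of each component, then add headroom.
# All wattages are illustrative assumptions.

components_watts = {
    "motherboard": 60,   # board, chipset, fans
    "cpu": 95,           # one CPU at its TDP
    "ram": 4 * 5,        # four DIMMs at ~5 W each
    "disks": 24 * 10,    # 24 spinning disks at ~10 W each (spin-up draws more)
    "addon_cards": 25,   # HBA/RAID controller, NIC
}

total_draw = sum(components_watts.values())

# ~30% margin covers spin-up surges and component aging, and keeps the PSU
# near its efficiency sweet spot rather than pinned at full load.
HEADROOM = 1.3

print(f"Estimated worst-case draw: {total_draw} W")             # 440 W
print(f"Minimum PSU rating: {total_draw * HEADROOM:.0f} W")     # 572 W
```

As Chris S points out, though, PSU capacity is only one line on the chassis checklist; form factor, drive bays, backplane, and airflow usually constrain the choice long before wattage does.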