
Our current server layout (not designed by me) groups identical machines in the same rack: all the database servers on rack A, all the fileservers on rack B, and so on.

This seems dangerous to me. If there's a power surge to rack A, it could knock out our database entirely. And if all the fileservers are working flat out, rack B is more likely to overheat.

I want to ask:

  1. Is my intuition correct? Is it more reliable to keep servers spread out around the data center, or are there, for example, performance benefits to keeping master and slave database servers stacked above one another?

  2. Does this matter enough to bother fixing it?

Nate

6 Answers


Ideally, you will have more than one phase available for power in each rack, so you can evenly distribute the power load over multiple legs in the same rack. You should also plan for heat removal under maximum load. While there is certainly no benefit to keeping database servers physically close to each other (assuming all other things are equal), your data center design should not limit your ability to do so. If it does, you have larger problems that could manifest elsewhere down the line.
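To make the load-balancing point concrete, here is a minimal Python sketch of the per-leg arithmetic; the server names, leg assignments, and wattages are made-up example values, not figures from this answer.

    # Rough sketch: how evenly is a rack's load spread across its power legs?
    # All wattages and leg assignments below are assumed example values.
    servers = {
        "db01": {"leg": "A", "watts": 450},
        "db02": {"leg": "B", "watts": 450},
        "db03": {"leg": "A", "watts": 500},
        "db04": {"leg": "B", "watts": 650},
    }

    per_leg = {}
    for name, info in servers.items():
        per_leg[info["leg"]] = per_leg.get(info["leg"], 0) + info["watts"]

    for leg, watts in sorted(per_leg.items()):
        print(f"leg {leg}: {watts} W")

    # A large gap between legs is a hint the rack should be re-balanced.
    print(f"imbalance: {max(per_leg.values()) - min(per_leg.values())} W")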

MDMarra

A. While it's technically true that a higher workload will consume more power and generate more heat, in a properly powered and cooled data center this shouldn't be a concern.

B. Not unless the data center is lacking in power distribution/delivery, power protection, or cooling.

joeqwerty

The physical layout of the servers has more to do with power and cooling requirements than anything else. In a properly designed infrastructure, a whole rack shouldn't be able to get a power surge or overheat. The only benefit to separating servers by workload is on the network side: I have seen instances where the top-of-rack switches were overwhelmed by traffic and the uplinks couldn't keep up, causing delays.
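To put rough numbers on that uplink bottleneck, here is a small back-of-the-envelope sketch in Python; the port counts and speeds are assumptions for illustration, not figures from this answer.

    # Oversubscription at a top-of-rack switch, with assumed example numbers.
    server_ports = 48        # server-facing ports on the switch
    server_port_gbps = 10    # speed of each server-facing port
    uplinks = 2              # uplinks to the aggregation/core layer
    uplink_gbps = 40         # speed of each uplink

    downstream = server_ports * server_port_gbps   # 480 Gbps of possible demand
    upstream = uplinks * uplink_gbps               # 80 Gbps of uplink capacity
    print(f"oversubscription: {downstream / upstream:.0f}:1")   # 6:1

Packing one bandwidth-heavy workload into a single rack concentrates all of that demand behind the same pair of uplinks.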

Jim B

Your intuition to spread out your risk is good. I would suggest addressing this by bringing redundant power to each of your racks rather than physically moving servers from one rack to another.

You need redundancy at several layers. For example, redundant power supplies in your database servers.
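As an illustration of why redundancy at each layer helps, the sketch below multiplies out failure probabilities, assuming the two feeds or supplies fail independently; the availability figures are invented for the example.

    # Availability of redundant power paths, assuming independent failures.
    feed_availability = 0.999     # assumed availability of one power feed
    psu_availability = 0.9995     # assumed availability of one power supply

    def redundant(a, n=2):
        """Availability when any one of n identical components is enough."""
        return 1 - (1 - a) ** n

    print(f"single feed: {feed_availability:.8f}")
    print(f"dual feeds:  {redundant(feed_availability):.8f}")
    print(f"dual PSUs:   {redundant(psu_availability):.8f}")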

dmourati

Much has been said about redundant power. Just my additional two pence: I tend to set up one of each server's power lines to run through a local UPS as well.

As for the second part of your first question: yes, it might make sense to colocate master and slave database servers if you use a direct crossover network cable for the replication traffic.

Nils
  1. There are benefits to distributing your infrastructure (e.g. if you have two of everything, have two racks, each with one of everything). If, heaven forbid, someone should spill a 3-liter bottle of Sticky Cola all over one rack and destroy everything, at least your second rack is still up and running (see the sketch at the end of this answer).
    Heat and power concerns should not be the primary reasons you break your infrastructure up into different racks. As others have pointed out, if you have a decently designed room, heat removal and power supply shouldn't limit how much you can cram into a rack, as long as you're being reasonable about it.

  2. This one is a personal call. Does it bother YOU enough to want to go through the effort of re-racking all your gear, setting up appropriate cross-connects, and performing failure/fail-over testing? If so, go for it.

Note that there ARE benefits to distributing your environment the way you described from a power perspective, but not for the reasons you seem to be intuiting: in the event your datacenter was poorly planned and racks need to be re-powered to even out the load across phases, it's often helpful to have your gear separated into self-sufficient racks, so you can fail over to one half of your nice, redundant environment while the other half is being re-powered by the local electrician.
Incidentally, this same logic holds up for distributing across multiple rooms or colocation facilities...
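To attach rough numbers to the "one of everything per rack" idea from point 1 above, here is a short sketch; the per-rack failure probability is a made-up assumption, and rack failures are treated as independent.

    # Chance of losing BOTH copies of a service, under an assumed,
    # independent per-rack failure probability.
    p_rack_failure = 0.01

    same_rack = p_rack_failure         # both copies die with their shared rack
    split_racks = p_rack_failure ** 2  # both racks must fail at the same time

    print(f"both copies in one rack:   {same_rack:.4f}")
    print(f"copies split across racks: {split_racks:.6f}")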

voretaq7