We're an all-ProLiant shop with around 50 servers, mostly DL360s and DL380s, from G5 through G7. We just got our first two G8s in and went to rack them. We were stunned to find that the new cable management arms protrude almost an inch deeper into the rack than previous iterations of the ProLiant line.

Unfortunately, that causes them to occupy the same space as the PDUs in our APC racks. In a non-densely populated section of rack that's no biggie, but in a densely populated section it's impossible to get the cable arm into place without dislodging another machine's power. Has anyone else run into this? Obviously racking machines without cable management arms is not an option. I suppose we could reconfigure our racks, but that's a nightmare.

Systemspoet
  • Why is racking the servers without the cable management arms not an option and why would that be obvious to us? – joeqwerty Sep 18 '12 at 17:44
  • What's the question here? Just "has anyone else run into this"? – ceejayoz Sep 18 '12 at 17:46
  • The OP is concerned about how things fit. On an HP rack, I'm pretty sure there's ample clearance. But it brings up the *good* point about cable management arms and their diminished significance in the datacenter. – ewwhite Sep 18 '12 at 19:49
  • +1 to this, we've found that newer servers are indeed slightly longer. We've got some DL180s which also have this problem. Our rack posts can be moved further back, and all the rails just extend slightly more. Lots of work though! We managed to take a section of the rack kit out on the DL180s, which allowed them to fit - did it cautiously, but we couldn't see what that particular bit of metal was for anyway.... – Snellgrove Apr 10 '13 at 16:27
  • @Snellgrove I've never expanded the post-to-post distance on a running rack. I'm quite certain I wouldn't want to attempt such a thing, and I'm equally certain I wouldn't want to shut down a whole rack just to safely move the posts. I'm a bit of a wuss though :-) – voretaq7 Mar 12 '14 at 20:51
  • @Voretaq7 Agree with you completely - hence we modified the server rack kit (maybe I wasn't that clear in my comment above): there was a bit of metal on the racking kit which we removed, and that solved our problem. Total luck though; we did think it was looking like what you mention above, shutting down a rack and moving the posts! :-O I'd never increase the post-to-post distance on running kit, too risky. I am curious to know if the OP ever solved their problem! – Snellgrove Mar 13 '14 at 14:29

1 Answer

So there are a few things in play here. Model/make of rack? Photos?

  • Perhaps your rack isn't deep enough to accommodate everything... I'm assuming you're not using HP racks. APC, maybe? Either way, see if there's some flexibility in where you can place your vertical rails on the enclosure. If there's some fore-aft slack, that could be a solution (see the rough depth-budget sketch after this list).
  • Pro-tip... don't use cable management arms. If you're on vertical PDUs, try appropriately-sized power cables for the A+B feeds (think 1-foot/2-foot cables). Cable management arms are complicated and restrict airflow.
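
To sanity-check the fit before re-racking anything, here's a rough depth-budget sketch in Python. Every dimension in it is an assumption for illustration (I obviously don't have your rack, chassis, CMA, or PDU measurements), so substitute your own numbers from a tape measure:

```python
# Rough rack depth budget. ALL dimensions below are assumptions for
# illustration -- measure your own enclosure, chassis, CMA, and PDU.

rack_depth_in   = 42.0   # assumed usable depth, front posts to rear door
server_depth_in = 27.5   # assumed chassis depth of a Gen8-class 2U server
cma_depth_in    = 5.0    # assumed rear protrusion of the cable management arm
pdu_depth_in    = 3.5    # assumed intrusion of a zero-U vertical PDU

rear_space = rack_depth_in - server_depth_in   # what's left behind the chassis
needed     = cma_depth_in + pdu_depth_in       # what the CMA and PDU both want

if rear_space >= needed:
    print(f"OK: {rear_space - needed:.1f} in to spare behind the CMA")
else:
    print(f"Conflict: short by {needed - rear_space:.1f} in -- "
          f"shift the rear posts forward or skip the CMA")
```

With numbers anywhere near these, the extra inch the OP measured on the Gen8 arms is enough to flip the result from "fits" to "fights the PDU", which is why even a small amount of fore-aft rail adjustment can save the day.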

There are two cases I can think of for using cable management arms + telescoping rails *today*.

  • Servers that have hot-swap RAM, PCI, etc. inside the chassis. This is rare today, though.
  • Storage servers/enclosures like the Sun x4540, where you need access to running disks inside the chassis.

I think you can get a feeling for the intended use of a particular piece of server equipment based on the mounting hardware/rails and robustness of the cable management. When my 1U DL360 systems started shipping with two pieces of velcro as the cable management, I realized that less emphasis was being placed on the ability to pull the server out of the rack while running.

The examples below have substantial rail kits and cable management arms.

Hot-swap RAID RAM and PCI in a running server: the HP ProLiant DL740 G1. [image]

Hot-swap disks in a storage server that remained running through multiple disk replacements: the Sun x4540. [image]

From the rear... [image]

But really, standard 1U and 2U systems benefit most from short power cables, vertical PDUs and velcro.

[image]

ewwhite
  • But without a cable arm, you have to completely disconnect everything to slide a server out. – MDMarra Sep 18 '12 at 17:56
  • Yep, you do... but think about what you're doing on a routine basis. Disks are on the front. Easy. Power supplies are on the rear. Also easy. Anything else that requires replacement (RAM, CPU, PCIe cards) will need the server's power to be off anyway. – ewwhite Sep 18 '12 at 18:01
  • @ewwhite Not true, I've seen plenty of Dell motherboards that support hot RAM and CPU swapping. HP too, for that matter. That's one of the perks of some editions of Windows as well. – Brent Pabst Sep 18 '12 at 18:43
  • @BrentPabst Back in the day, yes. But the *current* generations of Intel/AMD servers aren't about that. Definitely not the model the OP is talking about. – ewwhite Sep 18 '12 at 19:34
  • When was the last time you hot-swapped a processor or RAM? Or anything else *inside* a server? With the reliability of these components going up and virtualization being the new norm, I find myself not installing cable management, either. – longneck Sep 18 '12 at 19:39
  • The management arm can also have a negative effect on the airflow. – 3molo Sep 18 '12 at 19:52
  • As soon as I saw the top of this answer, I thought "That's gotta be ewwhite. He has the best server porn". And I was right. – Mark Henderson Feb 28 '13 at 22:42
  • Server-on-server action, baby! – Vincent Vancalbergh May 07 '14 at 09:59