
So my general question is: does a multi-port NIC supply the same redundancy that two complete NICs do? My specific question is about the 4-port NIC that comes in Dell R610s.

I know we can all guess that the separate NICs are better and don't have shared components, but does anyone actually know?

I know that in the end you want multiple servers etc., but I still think it is nice to have redundant NICs set up in a server as well. I'd rather not debate this topic; my interest is in the two different NIC setups.

sysadmin1138
Kyle Brandt

4 Answers


What's the famous saying about eliminating single points of failure... "Oh, that way madness lies"?

The 4-port NICs on the R710 servers, of which I have several out in the field, are a single Broadcom PCIe device with 4 individual PHYs. A single failure of a PHY probably won't take out the entire device, but a driver going flaky very well could. If you're concerned about driver failure, you might want to put another, non-Broadcom, NIC in one of the PCIe slots.
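If you want to see how the ports actually map onto PCI devices on a Linux host, something like the sketch below will group them by PCI bus:device address (untested on an R710 specifically; the /sys paths are the standard Linux ones, and a multi-port card may still show up as more than one device):

```python
#!/usr/bin/env python3
"""Group network interfaces by the PCI device behind them.

Rough sketch: on Linux, /sys/class/net/<iface>/device is a symlink to
the backing PCI device. Interfaces whose addresses differ only in the
function number (the digit after the dot) are functions of one chip.
"""
import os
from collections import defaultdict

SYS_NET = "/sys/class/net"

def pci_address(iface):
    """Return the PCI address (e.g. '0000:01:00.0') behind an interface, or None."""
    dev_link = os.path.join(SYS_NET, iface, "device")
    if not os.path.islink(dev_link):
        return None  # lo, bonds, bridges, and other virtual interfaces
    return os.path.basename(os.readlink(dev_link))

def group_by_device():
    """Map PCI bus:device -> list of (interface, full PCI address)."""
    groups = defaultdict(list)
    for iface in sorted(os.listdir(SYS_NET)):
        addr = pci_address(iface)
        if addr and "." in addr:
            bus_dev = addr.rsplit(".", 1)[0]  # drop the .function suffix
            groups[bus_dev].append((iface, addr))
    return groups

if __name__ == "__main__":
    for bus_dev, ports in group_by_device().items():
        names = ", ".join(f"{i} ({a})" for i, a in ports)
        print(f"{bus_dev}: {len(ports)} port(s) -> {names}")
```

Ports that land under the same bus:device share a controller, so a controller or driver failure takes all of them with it.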

I'm running my R710s on VMware ESXi, using 3 of the NICs for connection to the LAN and 1 for the service console. When I get an iSCSI SAN at one of the customer sites I'll add a dual-port PCIe NIC to service the SAN. I've been happy with the configuration, though I don't have NIC-level redundancy for the service console.

Evan Anderson
  • To illustrate this point, we recently encountered an issue where ESXi 4.1 added an active hardware Broadcom iSCSI initiator onto the same NIC as the already-configured software initiator (it's a 'feature' in 4.1!). This completely stuffed up some SAN LUNs under high load, spamming them with lock/reservation requests. Fortunately we had quad-port Intel NICs in there too and re-plumbed the iSCSI onto a non-Broadcom interface. Much quicker to solve/recover than it would have been without the extra NIC. – Chris Thorpe Sep 26 '10 at 07:39

"Yes" and "no". :)

In the scenario where I have used multi-port cards, I had one model of card in service, one in each machine that needed one (I hesitate to call them servers; they were Linux machines mostly shifting packets and occasionally filtering them with iptables, hooked into tier-one ISPs and peering points on one side and the internal core network on the other), with a few identical cards spare on the shelf.

At the time, the build quality of the multi-port cards was such that we only expected 6-9 months of trouble-free operation (after that, a card would start reliably dropping packets, even when pulled and tried in another chassis), but that was quite a few years ago, so I would expect quality to be better now.

If the build quality of multi-port cards today is on par with that of single-port cards and you're using bonded uplinks, I don't think you'll see much difference in the availability figures.
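If you do go the bonded-uplink route, it is worth watching the per-slave link state so a dead port on a multi-port card doesn't sit unnoticed while the bond as a whole stays up. A rough sketch, assuming the standard Linux bonding driver and its /proc/net/bonding/<bond> status file:

```python
#!/usr/bin/env python3
"""Report the link status of each slave in a Linux bonding interface.

Sketch only: parses the "Slave Interface:" entries and the "MII Status:"
line that follows each one in /proc/net/bonding/<bond>, so a dead port
is easy to spot even while the bond itself is still up.
"""
import sys

def slave_status(bond="bond0"):
    """Return a dict of slave interface -> MII status ('up' or 'down')."""
    status, current = {}, None
    with open(f"/proc/net/bonding/{bond}") as f:
        for line in f:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key == "Slave Interface":
                current = value
            elif key == "MII Status" and current is not None:
                status[current] = value
                current = None  # only record the slave's own MII line
    return status

if __name__ == "__main__":
    bond = sys.argv[1] if len(sys.argv) > 1 else "bond0"
    for iface, state in slave_status(bond).items():
        print(f"{iface}: {state}")
```

Hook something like that into your monitoring and a single failed PHY shows up long before a second failure takes the whole bond down.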

Vatine

Do I know? No, I don't work for a hardware manufacturer. Logic agrees with your other statement, though, that separate NICs are better due to the lack of a single point of failure: a surge, a bad slot, or a bad connection could take out the entire card, which would kill all the ports on that card, whereas there's much less chance of that happening if you have multiple cards (and multiple servers... and multiple data centers with synced data between geographic locations...) :-)

Bart Silverstrim

I'd say that a single multi-port NIC helps protect you from failures outside the server (cable, switch, routing). Multiple NICs in a server do that and ADD protection against failure of the NIC itself, the slot on the motherboard, etc. Of course the ultimate goal is redundant data centers in alternate universes/timelines, so that if one universe has a catastrophic event, you're covered.

BillN