
I have two 24-port switches that in turn connect to six 24-port switches. At each of the leaf ports there is a device trying to FTP into a PC (6 x 24 devices), all at once.

On the PC end, I am trying to make sure that the bandwidth is adequate for the job. So I grabbed a quad-port Intel 1000GT card, teaming the ports for performance.
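
Rough numbers, just to frame the question -- assuming every device pushes data at full rate and the inter-switch links are not the bottleneck (which may not hold for the actual topology):

    6 switches x 24 ports           = 144 devices
    4 x 1 Gbit/s team (best case)   = ~4 Gbit/s aggregate into the PC
    4 Gbit/s / 144 devices          = ~28 Mbit/s per device if all transfer at once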

Long story short, the kernel time intermittently spikes to 25% on a quad-CPU system, locking up anything network-related. What would you recommend?

GregC

4 Answers

2

Are the switches also configured to support the teaming? Most of the time the switch needs to be configured to team the ports together along with the NIC.

(When it isn't, weird stuff starts happening, like dropped connections and TCP issues...)
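
For what it's worth, here is a rough sketch of what an 802.3ad (LACP) LAG looks like on a PowerConnect-class CLI. The port range g1-g4 and the group number are just placeholders, and the exact syntax on the 5424 may differ, so check its CLI guide:

    console# configure
    console(config)# interface range ethernet g(1-4)
    console(config-if)# channel-group 1 mode auto
    console(config-if)# exit
    console(config)# exit
    console# show interfaces port-channel 1

Here "mode auto" negotiates LACP ("mode on" would make a static LAG), and the show command should list all four member ports once the NIC side is set to a matching 802.3ad / dynamic link aggregation team type in Intel PROSet.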

Brandon
  • What sort of settings should I look for? I am using Dell PowerConnect 5424. – GregC Jul 02 '09 at 03:42
  • Documentation on the Dells is a bit sketchy. Look in the switch manual for Teaming, Link Aggregation / LAG, etc. I'm honestly not sure if this model PowerConnect 5424 supports it, but it's a good enough switch that it should... – Brandon Jul 02 '09 at 04:01
  • I guess I should ask that as a specific question. – GregC Jul 02 '09 at 04:01
  • It has LAG settings. I'll look tomorrow. – GregC Jul 02 '09 at 04:02
  • I'm fairly certain a 5424 will do an 802.3ad link aggregation group. I believe that the Intel NIC will support that as well. – Evan Anderson Jul 02 '09 at 04:08
1

I'd report the issue to Intel, for starters (assuming you have the most current NIC driver and whatever Intel calls the "Advanced Network Services" drivers today). No configuration you can set should be able to cause that kind of misbehaviour!

I suppose you could assign four (4) IP addresses to the NICs (breaking them out of the team, of course) and try to load-balance that way, but that's awfully hack-ish. The team would really be the cleanest thing.
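
If the devices could be pointed at a hostname instead of a fixed IP, the round-robin DNS route that comes up in the comments below would look roughly like this in a BIND-style zone file (the name and addresses are placeholders for the four per-NIC addresses):

    ; Four A records for the same name; most DNS servers rotate or randomize
    ; the order, so clients spread roughly evenly across the four NICs.
    ftp-host    IN  A   192.168.1.11
    ftp-host    IN  A   192.168.1.12
    ftp-host    IN  A   192.168.1.13
    ftp-host    IN  A   192.168.1.14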

Evan Anderson
  • I am not sure how to tell these devices to load-balance between 4 IP addresses. I'll take it up with Intel, for sure. You're always quick on the draw :) – GregC Jul 02 '09 at 03:17
  • How are the devices getting to the server right now? Are they resolving its name? You could do (*gulp*) round-robin DNS load balancing. – Evan Anderson Jul 02 '09 at 03:21
  • Right now they all go to the same IP address. – GregC Jul 02 '09 at 03:43
  • Well, round-robin DNS would be an option, if worse came to worst. I like the idea of configuring an 802.3ad LAG like routeNpingme suggested. I was wrong to assume that you'd already configured a LAG on your switch. routeNpingme is absolutely right: the switch really should know about it for the aggregated link to perform best. – Evan Anderson Jul 02 '09 at 04:10
1

We tried teaming in our datacenter at one point to provide redundancy in the event of switch failure. Long story short, switches don't often fail, but Intel NIC drivers do.

I would highly recommend finding another solution.

duffbeer703
  • I have an option to use FC interconnect, as well as copper. What are tried and true brands you can recommend? – GregC Jul 02 '09 at 13:02
  • We use Qlogic and Emulex FC HBAs on our servers, and I cannot recall a problem with them in the last couple of years. Now we're starting to look at FCoE with 10Gb Ethernet, but we're probably 18 months away from a production deployment of the technology. – duffbeer703 Jul 03 '09 at 21:15
  • I figured out a good way to massage Intel drivers to a working state. See my answer to this question. – GregC Jul 09 '09 at 10:55
1

Looking closely at the Intel NIC docs, it appears that using the Windows built-in operations, such as disabling/enabling a network connection, or any of the device-driver operations, is not supported once you have created a team.

Rather, manage the team as a whole via the Device Manager plug-in, by double-clicking the team device.

And, of course, having trunking / LAG configured on the switch helps tremendously.

GregC