
I'm using the e1000e driver for multiple Intel network cards (Intel EXPI9402PT, based on the 82571EB chip). The problem is that when I try to utilize the maximum speed (1 Gb/s) on more than one interface, the speed on each interface starts to drop.

For one interface I get: 120435948 bytes/sec.

For two interfaces I get: 61080233 bytes/sec and 60515294 bytes/sec.

For three interfaces I get: 28564020 bytes/sec, 27111184 bytes/sec, 27118907 bytes/sec.

What can be the cause?

EDIT: /proc/interrupts content:

           CPU0       CPU1       CPU2       CPU3       CPU4       CPU5       CPU6       CPU7
106:      17138          0          0          0          0          0          0          0         PCI-MSI  eth0
114:         51          0          0          0     102193          0         20   23745467         PCI-MSI  eth2
122:         51        290         15        271          0       9253        100          0         PCI-MSI  eth3
130:         43        367          0        290        105         39         15          0         PCI-MSI  eth4
138:         43        361        105        210          0        140          0          0         PCI-MSI  eth5
146:         56      67625        100          0          0   17855245          0          0         PCI-MSI  eth6
ctinnist

4 Answers


It won't be the driver.

It's most likely to be a physically shared component, such as interrupts or the PCI bus.

Dan Carley

Are they sharing the same interrupt (IRQ)? This is probably your bottleneck.

pauska
  • dstat is quite a nice iostat-like utility which will allow you to view the interrupt count, CPU load and other stats. Try running it at the same time as an iperf test. – Dan Carley Jun 18 '09 at 11:37
  • I checked /proc/interrupts, and it looks like every interface has its own interrupt (i.e. each interface is in distinct line). – ctinnist Jun 18 '09 at 11:53
  • Hmm. Is this a SMP server? Debian or Ubuntu? – pauska Jun 18 '09 at 12:03
  • Yes, it's SMP, 8 cores, CentOS – ctinnist Jun 18 '09 at 12:24
  • Ok. Try to monitor /proc/interrupts while doing I/O on all network ports simultaneously. The interrupts should even out across the CPUs. – pauska Jun 18 '09 at 15:09
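Pauska's suggestion can be sketched like this. The IRQ numbers come from the /proc/interrupts table above; the CPU assignments are only examples, and irqbalance (if running) may overwrite manual settings:

```shell
#!/bin/sh
# Sketch: pin each NIC's IRQ to its own core so the load spreads
# across CPUs instead of piling onto one. Requires root to write
# the smp_affinity files.

# Convert a CPU index into the hex bitmask /proc/irq/N/smp_affinity expects.
cpu_mask() {
    printf '%x' $((1 << $1))
}

# Example (as root): pin eth2's IRQ (114) to CPU1, eth3's (122) to CPU2.
# echo "$(cpu_mask 1)" > /proc/irq/114/smp_affinity
# echo "$(cpu_mask 2)" > /proc/irq/122/smp_affinity

# Watch the per-CPU counters move while the benchmark runs:
# watch -n1 'grep eth /proc/interrupts'

cpu_mask 3   # prints 8
```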

What is the endpoint of your iperf test? If you are routing through network hardware, or aggregating all the traffic onto a single GbE NIC on another machine, your bottleneck may be remote.

Andy

I've posted some sysctl magic here; you can try it and see if it helps.
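The linked settings are not reproduced in the thread. For illustration only, these are the kind of commonly cited buffer-size sysctls that gigabit tuning guides suggest; the values below are assumptions, not the ones from the link:

```shell
# /etc/sysctl.conf fragment -- illustrative values, apply with: sysctl -p
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```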

P.S. How are you benchmarking the speed?

SaveTheRbtz
  • I'm benchmarking it by a script that grabs data from /proc/net/dev (twice) and calculates the speed. – ctinnist Jun 18 '09 at 14:28
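The method ctinnist describes can be sketched as follows; the interface name and sampling interval are assumptions, and this is not the actual script:

```shell
#!/bin/sh
# Sketch: sample the received-bytes counter from /proc/net/dev twice
# and compute bytes/sec.

rx_bytes() {
    # Print the RX byte column for interface $1. The sub() strips
    # leading spaces, and the field separator handles both "eth0: N"
    # and the older glued "eth0:N" formats.
    awk -F'[: ]+' -v ifc="$1" '{sub(/^ +/, "")} $1 == ifc {print $2}' \
        /proc/net/dev
}

rate() {
    # bytes/sec from two counter samples $1 and $2 taken $3 seconds apart
    echo $((($2 - $1) / $3))
}

# Usage:
# b1=$(rx_bytes eth0); sleep 5; b2=$(rx_bytes eth0)
# rate "$b1" "$b2" 5
```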