
Connecting the controller to any of the three PCIe x16 slots yields choppy read performance of around 750 MB/sec.

The lowly PCIe x4 slot yields a steady 1.2 GB/sec read.

Given the same files, the same Windows Server 2008 R2 OS, the same RAID6 24-disk Seagate ES.2 3TB array on an LSI 9286-8e, the same Dell R7610 Precision Workstation with A03 BIOS, the same W5000 graphics card (no other cards), the same settings, etc., I see very low CPU utilization in both cases.

SiSoft Sandra reports x8 at 5 GT/sec in the x16 slot and x4 at 5 GT/sec in the x4 slot, as expected.
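For what it's worth, those link widths make the x16-slot result look even stranger: PCIe 2.0 at 5 GT/sec uses 8b/10b encoding, so each lane carries at most 500 MB/sec of payload before packet overhead. A back-of-the-envelope sketch (my own arithmetic, not a vendor figure):

```python
# Rough PCIe 2.0 payload ceiling per link width (ignores TLP/packet overhead).
GT_PER_SEC = 5e9      # PCIe 2.0 signaling rate per lane (transfers/sec)
ENCODING = 8 / 10     # 8b/10b line encoding: 8 payload bits per 10 line bits
BITS_PER_BYTE = 8

def pcie2_link_bw_mb(lanes):
    """Raw payload ceiling in MB/sec (1 MB = 1e6 bytes)."""
    return GT_PER_SEC * ENCODING / BITS_PER_BYTE * lanes / 1e6

print(pcie2_link_bw_mb(4))   # x4 link: 2000.0 MB/sec
print(pcie2_link_bw_mb(8))   # x8 link: 4000.0 MB/sec
```

So even the x4 slot has headroom above the observed 1.2 GB/sec, while the x8 link in the x16 slot should top out near 4 GB/sec, yet delivers only 750 MB/sec.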

I'd like to be able to rely on the sheer speed of x16 slots.

What gives? What can I try? Any ideas? Please assist.

Cross-posted from http://en.community.dell.com/support-forums/desktop/f/3514/t/19526990.aspx

Follow-up information

We did some more performance testing, reading from 8 SSDs connected directly (without an expander chip). This means that both SAS cables were utilized. We saw nearly double the performance, but it varied from run to run: 2.0, 1.8, 1.6, and 1.4 GB/sec were observed, then performance jumped back up to 2.0 GB/sec.

The SSD RAID0 tests were conducted in an x16 PCIe slot, with all other variables kept the same. It seems to me that we were getting double the performance of the HDD-based RAID6 array.

Just for reference: the maximum possible read burst speed over a single channel of SAS 6 Gb/sec is 570 MB/sec due to 8b/10b encoding and protocol limitations (a SAS cable provides four such channels).
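The 570 MB/sec figure above can be reproduced with a quick calculation; the ~5% protocol-overhead factor below is my own assumption chosen to match that number, not a spec value:

```python
# SAS 6Gb/sec per-channel throughput ceiling.
LINE_RATE = 6e9       # bits/sec on the wire
ENCODING = 8 / 10     # 8b/10b line encoding
OVERHEAD = 0.95       # assumed ~5% SAS protocol overhead (my estimate)

per_channel_raw = LINE_RATE * ENCODING / 8 / 1e6   # payload after encoding
usable = per_channel_raw * OVERHEAD                 # after protocol overhead
per_cable = usable * 4                              # four channels per SAS cable

print(per_channel_raw, usable, per_cable)  # 600.0 570.0 2280.0
```

Under that estimate one SAS cable tops out around 2.28 GB/sec, and two cables around 4.5 GB/sec, so the 2.0 GB/sec SSD result was not cable-limited.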

GregC

2 Answers


This is a high-end workstation in a server form factor. I think the x16 slots are intended for GPU use. I would use the x4 slot for now, as the benefits of x16 are lost on the disk/controller combination you have.

The next steps would be to update firmware/OS/drivers and see if you can reproduce the problem. You may also want to modify power settings in the BIOS (C-states, etc.). If that does not work, contact Dell support for a resolution.

ewwhite
  • Boot time is a major reason why this machine was selected. The Xeons and the C60x chipset are the same on this workstation and on two-way servers. According to http://www.intel.com/content/www/us/en/processors/xeon/xeon-e5-brief.html, 40 lanes of PCIe 3.0 logic live inside the processor. I will try the C-states. – GregC Oct 07 '13 at 13:48
  • There's nowhere to go with a BIOS update: A03 cannot be downgraded. Drivers exhibited the same behavior one version back as they do at the latest version. – GregC Oct 07 '13 at 13:52
  • @GregC Then you'll have to take it to Dell. I suspect that this is an edge-case, though. – ewwhite Oct 07 '13 at 13:53
  • We tried turning off C-states and other power management stuff in BIOS: this made no difference. – GregC Oct 09 '13 at 15:59

Our resolution has been to go with an HP Gen8 server, though I suspect a Dell T620 might work as well. Both of these machines have all PCIe lanes running in a planar fashion, without riser cards. Testing shows good, reliable performance on the HP Gen8.

GregC
  • This was eventually partially resolved by a BIOS update. Happy with HP g8 ML350p servers, even though they are bulkier. – GregC Feb 07 '19 at 16:39