
I have an Intel four-node server with an S2600JF motherboard per node (older hardware: DDR3 RAM, E5-2600 v1/v2 CPUs). I want to install an NVMe PCIe card (with two drives) that requires bifurcation.

According to the motherboard documentation, there is a BIOS menu item for this (Advanced - PCI Configuration - Processor PCIe Link Speed), but I cannot find it. I updated the BIOS, but the menu item is still missing.
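As a sanity check, the BIOS version actually running on a node can be confirmed from Linux before hunting for the menu item; a minimal sketch (assumes root access and the dmidecode package):

```
# Print the firmware vendor, version and release date reported by SMBIOS
sudo dmidecode -t bios | grep -E 'Vendor|Version|Release Date'
```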

BIOS pdf

See page 91 of the PDF. Can the problem be solved? Does it perhaps depend on the motherboard revision, or on the CPU version? (Currently two E5-2620 v1 CPUs are installed.)

Thank you in advance for your help, Laszlo

Laszlo Malina
  • Bifurcation should work automatically. – Simon Richter Dec 04 '21 at 23:36
  • Unfortunately, it does not work automatically. As far as I can tell, the capability itself is missing, even though the motherboard documentation says otherwise. – Laszlo Malina Dec 05 '21 at 22:59
  • Bifurcation is just between the CPU (and the integrated PCIe root complex) and the card, with some support from the BIOS during device enumeration. – Simon Richter Dec 06 '21 at 12:58
  • I plugged a Supermicro AOC-SLG3-2M2 NVMe PCIe card with two drives into the motherboard's x16 connector (so not into a riser card). The server cannot split it into, e.g., 2× x8 (a quick way to check this from the OS is sketched after these comments). Could it work automatically if there were a riser card in the motherboard's PCIe x16 slot with two x16 connectors? Although I don't even know whether such a riser card exists for a 1U server. Unfortunately, I don't understand this topic well yet; I have only read about it superficially (and I use such cards on larger servers that have more x16 connectors). – Laszlo Malina Dec 07 '21 at 12:27
  • It depends mostly on what the CPU supports; configurations need to have been anticipated in silicon. The E5-2600 has three PCIe ports (x8, x16, x16) plus one x4 port that is usually used for mainboard components. If I remember correctly, Intel CPUs usually support bifurcation on one x16 port and a non-transparent bridge (NTB) on the other, but that may be model-dependent. The [datasheet](https://www.mouser.com/datasheet/2/612/xeon-e5-1600-2600-vol-1-datasheet-263646.pdf#page=144) suggests that the PCIe ports are organised in groups of four lanes, but it is not clear which can be merged. – Simon Richter Dec 07 '21 at 13:45
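As referenced in the comments above, whether the slot was actually bifurcated can be checked from a Linux OS, since a successful split makes each M.2 drive enumerate as its own PCIe endpoint. A minimal sketch (the bus address is a placeholder; substitute the one printed on your system):

```
# With working bifurcation, the two drives on the AOC-SLG3-2M2 should
# appear as two separate NVMe controllers:
lspci | grep -i 'non-volatile'

# Inspect the negotiated link width/speed of one of them
# (replace 81:00.0 with an address printed by the command above):
sudo lspci -vv -s 81:00.0 | grep -E 'LnkCap:|LnkSta:'
```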

1 Answer


We have a cluster of these older systems, all with S2600JF motherboards and Sandy Bridge (v1) E5-2670 CPUs. For one node we bought a pair of Ivy Bridge Xeons from eBay (E5-2650 v2), and after swapping those in, the extra BIOS settings for PCIe link speed, bifurcation, etc. appeared (under Advanced - PCI Configuration, as mentioned in the PDF).

The reason for the experiment was that we had a U.2 drive (with a passive adapter) that was only showing up at PCIe 2.0 speed (5 GT/s) in the OS (lspci -vvv); after the Xeon upgrade, the U.2 device linked at PCIe 3.0 speed (8 GT/s). Benchmarks before/after the CPU swap showed a clear difference, and despite being a 2013-era Xeon, it could push the U.2 drive at its full rated read/write speeds.
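The before/after benchmarks mentioned above could be reproduced with something like the following fio run; a hedged sketch (the device name is an example, and a read-only test is used so no data is written):

```
# Sequential read test against the raw NVMe device; a PCIe 2.0 x4 link
# caps out around 2 GB/s theoretical, while 3.0 x4 allows roughly 4 GB/s
sudo fio --name=seqread --filename=/dev/nvme0n1 --rw=read --bs=1M \
    --iodepth=32 --ioengine=libaio --direct=1 --runtime=30 --time_based
```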

The motherboard socket (FCLGA2011) is compatible with both Sandy Bridge and Ivy Bridge CPUs.

I am not sure whether this is a BIOS bug or a CPU hardware limitation (the ARK spec page for the Sandy Bridge part lists PCIe 3.0).

jason_uruguru
  • Bless you, that's great info even so many years later. Maybe I'll go and swap my CPUs, now that they're cheap enough after all. I managed to crossflash the NICs to 40GbE, so some more NVMe performance would be nice (especially since the SAS link speed is so low on this chipset). – Florian Heigl Dec 09 '22 at 16:14