
Recently I purchased a couple of NVMe drives for a home server I'm running.

To keep things budget friendly, and to potentially improve IOPS, I chose an add-on card by Supermicro that uses a PLX bridge on a PCIe slot to allow two 2.5" form factor SSDs to be connected at once.

The model of the card is: AOC-SLG3-2E4

I've booted Linux from a separate, regular SSD and can see the card and the drives working fine with the following checks:

liang@Sonny:~$ lspci |grep 0953
06:00.0 Non-Volatile memory controller: Intel Corporation Device 0953 (rev 01)
07:00.0 Non-Volatile memory controller: Intel Corporation Device 0953 (rev 01)
liang@Sonny:~$ lspci |grep PLX
04:00.0 PCI bridge: PLX Technology, Inc. PEX 8718 16-Lane, 5-Port PCI Express Gen 3 (8.0 GT/s) Switch (rev aa)
05:01.0 PCI bridge: PLX Technology, Inc. PEX 8718 16-Lane, 5-Port PCI Express Gen 3 (8.0 GT/s) Switch (rev aa)
05:02.0 PCI bridge: PLX Technology, Inc. PEX 8718 16-Lane, 5-Port PCI Express Gen 3 (8.0 GT/s) Switch (rev aa)
liang@Sonny:~$ 

The problem is that I'm not getting the advertised 900 MB/s write speeds:

liang@Sonny:~$ sudo dd if=/dev/zero of=/dev/nvme0
dd: writing to ‘/dev/nvme0’: Invalid argument
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000297179 s, 0.0 kB/s
liang@Sonny:~$ sudo dd if=/dev/zero of=/dev/nvme0n1
^C7058361+0 records in
7058361+0 records out
3613880832 bytes (3.6 GB) copied, 14.3664 s, 252 MB/s
liang@Sonny:~$ sudo dd if=/dev/zero of=/dev/nvme1n1
^C764433+0 records in
764433+0 records out
391389696 bytes (391 MB) copied, 2.48995 s, 157 MB/s
liang@Sonny:~$ sudo dd if=/dev/nvme0n1 of=/dev/nvme1n1
^C930417+0 records in
930417+0 records out
476373504 bytes (476 MB) copied, 2.98179 s, 160 MB/s
liang@Sonny:~$ sudo dd if=/dev/nvme0n1 of=/dev/nvme1n1
^C23402049+0 records in
23402049+0 records out
11981849088 bytes (12 GB) copied, 59.4382 s, 202 MB/s

As can be seen, it's only around 200 MB/s. I've checked that it's not a CPU bottleneck, and on the regular SSD I booted from I'm getting 300 MB/s:

liang@Sonny:~$ sudo dd if=/dev/nvme0n1 of=/home/liang/asdfasdf
^C3717510+0 records in
3717509+0 records out
1903364608 bytes (1.9 GB) copied, 5.71793 s, 333 MB/s

Has anyone had a similar experience? Do some drivers need to be installed? Is the card potentially at fault? Or is there something software-related creating overhead in the transfer speeds?

Cheers.

Edit: additional details

liang@Sonny:~$ uname -a
Linux Sonny 3.16.0-30-generic #40~14.04.1-Ubuntu SMP Thu Jan 15 17:43:14 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Motherboard: ASUS Z10PE-D8
Disk product code: SSDPE2MW400G4R5 (Intel 750 series NVMe SSD, 400 GB capacity)

ewwhite
Liang
1 Answer

You are benchmarking it wrongly: by issuing dd if=/dev/zero of=/dev/nvme0n1 you are using 512-byte writes, which are clearly very small.

Try using dd if=/dev/zero of=/dev/nvme0n1 bs=1M and you will see much higher numbers.
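The effect of block size is easy to reproduce without touching the raw device. The sketch below writes roughly the same amount of data twice to a scratch file (the /tmp path is illustrative); only the relative difference between the two runs matters:

```shell
# Many tiny writes: one write() syscall per 512-byte block dominates the runtime
dd if=/dev/zero of=/tmp/dd-blocksize-test bs=512 count=200000

# Few large writes: the same ~100 MB in 1 MiB chunks finishes far faster
dd if=/dev/zero of=/tmp/dd-blocksize-test bs=1M count=100

rm -f /tmp/dd-blocksize-test
```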

As a side note, with 512-byte writes the directly attached disk has higher performance by virtue of its lower latency (the PLX switch inevitably adds some latency).
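To put the numbers in perspective: ~200 MB/s at 512-byte writes already implies roughly 400,000 I/O operations per second, so per-operation latency, not raw bandwidth, is what limits those runs:

```shell
# IOPS = throughput / block size; using the ~202 MB/s figure from the dd run above
echo $((202000000 / 512))   # prints 394531, i.e. ~400k IOPS
```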

shodanshok
  • Doh! One of those moments when I really should've thought of that, thanks :). Got some good results out of it: `8437+0 records in 8437+0 records out 8846835712 bytes (8.8 GB) copied, 7.33284 s, 1.2 GB/s`. I think the reason it slipped my mind was that I didn't notice any bottlenecks anywhere else, but it seems it was more the clock rate on the bus that was holding it back? – Liang Aug 11 '15 at 12:25
  • When using such small writes (<4 KB), many things can bottleneck your system: bus latency, bus encode/decode, disk latency, CPU load, etc. The fact that you reached almost 200 MB/s is a testament to the efficiency of the NVMe protocol vs the more conventional AHCI. – shodanshok Aug 11 '15 at 13:57
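As a follow-up to the comments: dd is a fairly blunt instrument for this kind of measurement, while fio gives explicit control over block size, queue depth, and direct I/O. A minimal sketch, assuming fio is installed; the scratch-file path is illustrative (point it at a raw device only if its contents are expendable):

```shell
# 1 MiB sequential writes, direct I/O, queue depth 32 -- reports bandwidth
# without the page cache inflating the numbers
fio --name=seqwrite --filename=./fio-scratch \
    --size=256M --rw=write --bs=1M \
    --ioengine=libaio --iodepth=32 --direct=1

rm -f ./fio-scratch
```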