I know this has been discussed multiple times, but I have not found any solution so far that worked, so I'm posting here hoping there is a solution as of December 2021…

I have a Dell R640 server with dual Xeon Gold processors and 384 GB of RAM. The chassis only takes SATA/SAS drives (it does not support U.2), and I don't have the budget for a new server that supports U.2.

Note: my use case is to provide storage for VMs to take advantage of NVMe speeds.

So we opted for a PCIe card: the Dell SSD NVMe M.2 PCIe 2x Solid State Storage Adapter Card (23PX6 / NTRCY). It supports two NVMe drives and, via bifurcation, connects each drive at PCIe x4.
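(One way to confirm that bifurcation worked and each drive actually negotiated x4 is to check the PCIe link status with lspci; the slot address below is a placeholder for your NVMe controller.)

    # List NVMe controllers, then inspect the negotiated link width (LnkSta)
    lspci | grep -i 'non-volatile'
    sudo lspci -vv -s 3b:00.0 | grep -E 'LnkCap|LnkSta'   # 3b:00.0 is a placeholder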

I have two Kingston 2 TB NVMe drives, and I created an mdadm-based RAID1 array from them.
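(For completeness, a minimal sketch of how such an array is created; the partition names match the --detail output further down, and the other flags are illustrative, not the exact command used.)

    # Mirror the first partition of each NVMe drive into /dev/md0
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/nvme0n1p1 /dev/nvme1n1p1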

The write performance of a single NVMe SSD is 1800 MB/s, but the RAID1 array's write speed is only 500 MB/s.
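(For context on methodology, a sketch of the kind of sequential-write test behind numbers like these, using fio with illustrative parameters; it writes directly to the device, so only run it against an empty array.)

    # Sequential 1 MiB writes with direct I/O for 60 seconds.
    # WARNING: writes raw data to /dev/md0; do not run on a device holding data.
    fio --name=seqwrite --filename=/dev/md0 --rw=write --bs=1M \
        --ioengine=libaio --iodepth=32 --direct=1 \
        --runtime=60 --time_based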

I found that the internal write-intent bitmap (Intent Bitmap : Internal) was a possible cause, so I removed it:

mdadm <dev> --grow --bitmap=none
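(It's worth re-checking afterwards whether the bitmap is really gone; note that the --detail output further down still reports Intent Bitmap : Internal.)

    mdadm --detail /dev/md0 | grep -iE 'bitmap|consistency'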

Even after this, the performance is nearly the same.

Any suggestions on what else I can try?


So I am not sure what happened: today when I ran the speed test again, the speed is within expectations, with reads of 1039 MB/s and writes of 1352 MB/s (using CrystalDiskMark in a VM on this host).

    mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sun Nov 28 19:08:22 2021
        Raid Level : raid1
        Array Size : 1953381440 (1862.89 GiB 2000.26 GB)
     Used Dev Size : 1953381440 (1862.89 GiB 2000.26 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Dec  2 10:33:50 2021
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : server1:0  (local to host server1)
              UUID : 69bab65f:9daa6546:687fc567:bd50164a
            Events : 26478

    Number   Major   Minor   RaidDevice State
       0     259        2        0      active sync   /dev/nvme0n1p1
       1     259        3        1      active sync   /dev/nvme1n1p1
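(Regarding the comment below about aggregate performance: one way to see whether both members are being written in parallel during a test is iostat from the sysstat package; the device names are the ones from the array above.)

    # Extended per-device stats in MB/s, refreshed every second while the benchmark runs
    iostat -xm 1 nvme0n1 nvme1n1 md0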
Can you post a little more about how you're doing that write benchmark? Also, what's the aggregate performance when you run the benchmark against both SSDs directly at the same time? And can you post the output of `mdadm --detail` for your array? – Mike Andrews Dec 01 '21 at 15:40
Too long to post as a comment, so answering with details above. – JackFrost Dec 03 '21 at 02:00
