
I set up a server about 20 days ago with new NVMe drives. Before creating the mdadm RAID, I benchmarked the two Samsung 980 Pro 1 TB NVMe drives and each averaged around 1.5 GB/s of I/O. After creating RAID 1, the NVMe I/O is only about 70 MB/s, which is slower than the traditional HDD in my laptop, which reaches 120 MB/s. Drive health is 100%, since the drives are new.

The array is formatted as ext4.

bench.sh result (screenshot): https://i.stack.imgur.com/HDNlY.png

[root@id1 var]# wget -qO- bench.sh | bash
-------------------- A Bench.sh Script By Teddysun -------------------
 Version            : v2022-06-01
 Usage              : wget -qO- bench.sh | bash
----------------------------------------------------------------------
 CPU Model          : Intel(R) Xeon(R) CPU v4 @ 3.60GHz
 CPU Cores          : 12 @ 3799.703 MHz
 CPU Cache          : 15360 KB
 AES-NI             : Enabled
 VM-x/AMD-V         : Enabled
 Total Disk         : 6.2 TB (1.1 TB Used)
 Total Mem          : 93.9 GB (33.5 GB Used)
 Load average       : 2.75, 3.67, 4.10
 OS                 : CloudLinux release 8.6 (Leonid Kadenyuk)
 Arch               : x86_64 (64 Bit)
 Kernel             : 4.18.0-372.19.1.lve.el8.x86_64
 TCP CC             : cubic
 Virtualization     : Dedicated

----------------------------------------------------------------------
 I/O Speed(1st run) : 78.0 MB/s
 I/O Speed(2nd run) : 78.0 MB/s
 I/O Speed(3rd run) : 78.0 MB/s
 I/O Speed(average) : 78.0 MB/s
----------------------------------------------------------------------

Output of mdadm --detail:

[root@node1 ~]# mdadm --detail /dev/md126
/dev/md126:
           Version : 1.2
     Creation Time : Wed Sep 21 07:23:19 2022
        Raid Level : raid1
        Array Size : 964901888 (920.20 GiB 988.06 GB)
     Used Dev Size : 964901888 (920.20 GiB 988.06 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Oct  2 11:55:36 2022
             State : active
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : iclnm.com:root
              UUID : aca6bf5e:5d4e7cc0:036c9b46:uyc9h8u
            Events : 1332

    Number   Major   Minor   RaidDevice State
       0     259        4        0      active sync   /dev/nvme0n1p3
       1     259        7        1      active sync   /dev/nvme1n1p3

Output of cat /proc/mdstat:

[root@node1 ~]# cat /proc/mdstat

Personalities : [raid1] [raid6] [raid5] [raid4]
md123 : active raid5 sde2[3] sdd2[1] sdb2[0]
      48857088 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md124 : active raid5 sde1[3] sdd1[1] sdb1[0]
      1884835840 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 7/8 pages [28KB], 65536KB chunk

md125 : active raid1 nvme0n1p2[0] nvme1n1p2[1]
      1953728 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md126 : active raid1 nvme1n1p3[1] nvme0n1p3[0]
      964901888 blocks super 1.2 [2/2] [UU]
      bitmap: 2/8 pages [8KB], 65536KB chunk

md127 : active raid1 nvme1n1p1[1] nvme0n1p1[0]
      9763840 blocks super 1.2 [2/2] [UU]

unused devices: <none>

The NVMe RAID 1 array (md126) is mounted at /.

The other (SSD) RAID arrays are mounted at /home; the test above was run in /, not on the /home partition.

This is my first time using software RAID on Linux, and I don't understand whether mdadm simply can't make good use of NVMe drives or whether there is an error in my configuration. I created an ordinary RAID 1, roughly as sketched below. I'm hoping someone can shed some light on Linux mdadm RAID. I'm using AlmaLinux 8.6.
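For reference, a plain two-device RAID 1 like md126 would normally be created with something along these lines (a sketch only, not necessarily the exact commands run on this server; the device names are taken from the mdstat output above):

# mirror the third partition of each NVMe drive
mdadm --create /dev/md126 --level=1 --raid-devices=2 /dev/nvme0n1p3 /dev/nvme1n1p3
# put ext4 on the array
mkfs.ext4 /dev/md126

The resulting array is then used as the root filesystem.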

Comments:
  • The first enlightenment is that some obscure "bench.sh" is doubtful as a test, and proper I/O performance results *will not* be in MB/s. Use [`fio`](https://fio.readthedocs.io/en/latest/fio_doc.html) for testing and show the results; the especially important measure is *latency* in ms or µs, not throughput in MB/s. – Nikita Kipriyanov Oct 02 '22 at 06:20
  • Hell yeah - what is bench.sh? What does it do? "Need enlightenment" - God bless you and fio in direct mode. – gapsf Oct 02 '22 at 08:37
  • Also check partition alignment with fdisk. – gapsf Oct 02 '22 at 09:03
  • What is the right fio command for this test? Before creating the mdadm RAID 1, bench.sh gave me 1.5 GB/s. (An fio sketch is included after these comments.) – Trweb Oct 02 '22 at 15:44
  • Forget bench.sh, as if it doesn't exist. "Speed in GB/s" means little, if anything. Again, throughput is not what matters when you evaluate storage performance; latency is. What was the queue length, the parallelism, the latency distribution? Which I/O did it do, at least: writing, reading, or both; sequential or random? Did it use caches, did you preheat anything? Does your tool set or display any of that? No? Throw it away, it is unusable. Also, yes, mastering `fio` can be hard, but that's because storage itself is hard, which is not fio's fault. If you dare to build your own storage, learn how to do it. – Nikita Kipriyanov Oct 02 '22 at 15:53
  • Right now you have found a tool that shows some numbers, but we don't know what those numbers mean, if they mean anything at all. The numbers fio shows, on the other hand, we understand and can generally make suggestions from. – Nikita Kipriyanov Oct 02 '22 at 15:59
  • You may start with the examples from https://docs.oracle.com/en-us/iaas/Content/Block/References/samplefiocommandslinux.htm – gapsf Oct 02 '22 at 18:09
  • Start by testing sequential reads to see your gigabytes. – gapsf Oct 02 '22 at 18:11
  • Download https://github.com/axboe/fio – gapsf Oct 02 '22 at 18:12
  • Did you check partition alignment? – gapsf Oct 02 '22 at 18:14
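Following the suggestions in the comments, a minimal fio sketch for re-testing the array (the test file path, size, and runtimes are assumptions, not values from the original post; point the filename at a file on the filesystem being measured):

# sequential 1 MiB reads with direct I/O - shows raw throughput
fio --name=seqread --filename=/root/fio.test --size=4G --rw=read --bs=1M \
    --ioengine=libaio --iodepth=8 --direct=1 --runtime=60 --time_based --group_reporting

# random 4 KiB reads - here the latency percentiles matter more than MB/s
fio --name=randread --filename=/root/fio.test --size=4G --rw=randread --bs=4k \
    --ioengine=libaio --iodepth=32 --numjobs=4 --direct=1 --runtime=60 --time_based --group_reporting

The same jobs can be repeated with --rw=write and --rw=randwrite to look at the write side as well.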
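And for the partition-alignment question in the last comment, the start sectors of the RAID member partitions can be checked like this (device and partition names taken from the mdstat output; a start sector that is a multiple of 2048 corresponds to 1 MiB alignment):

# print the partition tables; check the Start column
fdisk -l /dev/nvme0n1 /dev/nvme1n1

# or read the start sectors directly from sysfs
cat /sys/block/nvme0n1/nvme0n1p3/start /sys/block/nvme1n1/nvme1n1p3/start

# parted can also verify alignment for a given partition number
parted /dev/nvme0n1 align-check optimal 3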
