
I used the same PC and the same NVMe SSDs to run a RAID performance test on different operating systems. The steps are as follows:

mdadm -C -v /dev/md0 -l0 -n4 /dev/nvme[0123]n1
mkfs.xfs /dev/md0
mount -o discard /dev/md0 /data
fio --bs=128k --ioengine=libaio --numjobs=1 --direct=1 --buffered=0 \
    --iodepth=128  --rw=write --norandommap  --randrepeat=0     --stonewall \
    --exitall_on_error --scramble_buffers=1 --group_reporting --do_verify=0 \
    --name=test  --size=50G  --filename=/data/test.txt --ramp_time=0 \
    --runtime=300 --time_based --output=128kseqw.log
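
For reference, the array geometry and block-layer defaults can be checked on both installs before comparing numbers (device names as above; the chunk size is whatever mdadm chose by default):

mdadm --detail /dev/md0                      # RAID level, chunk size, member devices
cat /proc/mdstat                             # assembled state of the array
cat /sys/block/nvme0n1/queue/scheduler       # per-member I/O scheduler
cat /sys/block/nvme0n1/queue/max_sectors_kb  # largest request issued to each member
cat /sys/block/md0/queue/read_ahead_kb       # readahead on the MD device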

On CentOS 8 the bandwidth is 14 GB/s, but on CentOS 7 the bandwidth is only 7.5 GB/s.

I don't know why there is such a big gap.

How can I test and improve NVMe SSD mdadm RAID performance on CentOS 7?

  • Have you upgraded the kernel on that CentOS 7 installation? There have been numerous changes to NVMe, the block layer, and MD that could all affect results. CentOS 7 came out in 2014, and it's been stuck on that 3.10 kernel with backported fixes ever since. Try the elrepo kernels. I'd look at kernel-lt first (long-term stability). – Mike Andrews Apr 19 '22 at 17:07
  • Hi Mike, thanks for your message. Yes, after upgrading the kernel the MD RAID has better performance, but we want to know whether performance can be improved on the 3.10 kernel by modifying parameters. On kernel 3.10 the write bandwidth is limited to about 10 GB/s and the read bandwidth to about 12 GB/s, and I don't know why. – springeee Apr 20 '22 at 02:11
  • It's very likely to be `blk-mq`, but I can't say for sure. That change brought a ton of performance along, and I don't believe it has been backported to 3.10. In general, NVMe, especially with newer hardware, is likely to behave far better on 5.x kernels. – Mike Andrews Apr 25 '22 at 14:56
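
For reference, Mike Andrews' suggestion of moving to an ELRepo long-term kernel on CentOS 7 would look roughly like the sketch below; the exact package URL and GRUB menu index are assumptions that may need adjusting for a specific system:

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org                  # import the ELRepo signing key
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm   # add the repository
yum --enablerepo=elrepo-kernel install kernel-lt                            # install the long-term kernel
grub2-set-default 0                                                         # assumes the new kernel is the first menu entry
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot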
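
If the machine has to stay on the stock 3.10 kernel, the settings below are generic block-layer knobs worth experimenting with, not a confirmed fix for the ~10 GB/s write / ~12 GB/s read ceiling mentioned above; device names follow the question:

echo 2 > /sys/block/nvme0n1/queue/nomerges     # skip merge lookups; repeat for nvme1n1..nvme3n1
echo 0 > /sys/block/nvme0n1/queue/add_random   # don't feed the entropy pool from these I/Os
echo 2 > /sys/block/nvme0n1/queue/rq_affinity  # complete I/O on the submitting CPU
blockdev --setra 65536 /dev/md0                # readahead mainly affects the sequential-read case

Spreading fio submission across several jobs (e.g. --numjobs=4 with --offset_increment or separate files) is also worth trying, since on the old kernel a single submitting thread can become CPU-bound before the array does.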

0 Answers