
I have 6x WD Caviar Black 1.5 TB drives in software RAID 10 / 1 / 0 (CentOS 6.2 / mdadm):

cat /proc/mdstat
Personalities : [raid10] [raid0] [raid1]
md0 : active raid1 sdf2[5] sda2[0] sdb2[1] sdd2[3] sdc2[2] sde2[4]
      1023988 blocks super 1.0 [6/6] [UUUUUU]

md126 : active raid0 sde1[4] sda1[0] sdd1[3] sdb1[1] sdc1[2] sdf1[5]
      122873856 blocks super 1.2 64k chunks

md127 : active raid10 sde3[4] sda3[0] sdd3[3] sdb3[1] sdc3[2] sdf3[5]
      4330895808 blocks super 1.2 64K chunks 2 near-copies [6/6] [UUUUUU]

Info:

md0 = /boot (size 1 GB) (RAID 1)
md126 = swap (size 125 GB) (RAID 0)
md127 = / (size 4.1 TB) (RAID 10, layout: near=2)

When I benchmark the RAID 0 (6 disks):

hdparm -t /dev/md126 
/dev/md126:
 Timing buffered disk reads:  1994 MB in  3.00 seconds = 664.59 MB/sec

When I benchmark the RAID 1 (2 disks, the rest are spares):

hdparm -t /dev/md0
/dev/md0:
 Timing buffered disk reads:  384 MB in  3.00 seconds = 127.96 MB/sec

When I benchmark the RAID 10 (6 disks):

hdparm -t /dev/md127
/dev/md127:
 Timing buffered disk reads:  1064 MB in  3.00 seconds = 354.60 MB/sec

I'm not 100% sure, but could the problem be that the RAID 10 is on sd[a-f]3? The chunk size is only 64 KB because I use this server only for MySQL. (The MySQL database will be very big, which is why I have it on these big HDDs; I will soon need all the TB.)

Another question, about a server config: I'm planning to buy a new server:

Quad-core CPU
1 TB HDD
60 GB SSD
8-16 GB RAM

Now I was thinking of putting the OS and data on the 1 TB HDD, and MySQL and swap on the SSD.

Would that be good? I only need performance; I have enough backups. (I'm not planning to use the swap, but if it were ever used, I thought I'd put it on the SSD since it's faster than the HDD.)

Thanks for the help.

user1015314

1 Answer


First of all, hdparm is not a real benchmarking utility; it isn't nearly rigorous enough to demonstrate true performance. Better tools are iozone or iometer.
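As an illustration only (the exact file size and mount point are assumptions, and iozone must be installed), a more meaningful sequential read/write test with iozone might look like this, using a record size that matches the array's 64 KB chunk and a test file large enough to defeat the page cache:

```shell
# Sequential write (-i 0) and read (-i 1) with a 64 KB record size
# on an 8 GB test file; adjust the size upward to exceed your RAM.
iozone -i 0 -i 1 -r 64k -s 8g -f /mnt/testfile
```

Unlike hdparm's single buffered read, this exercises the filesystem and lets you vary record size and access pattern.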

Secondly, your results can be explained with one observation.

  1. Number of disks matters.

Consider this:

  • Your R0 test had 6 disks involved in reading.
  • Your R1 test had 1 disk involved in reading.
  • Your R10 test had 3 disks involved in reading.

In light of that, your results make pretty clear sense.

  • 6 disks = 664 MB/s (or 110.7 MB/s per drive)
  • 1 disk = 128 MB/s (128 MB/s per drive)
  • 3 disks = 355 MB/s (or 118.2 MB/s per drive)
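The per-drive figures above are just the measured totals divided by the number of disks servicing the read; a quick sketch of that arithmetic:

```python
# Per-drive throughput implied by the hdparm results in the question:
# (total MB/s measured, number of disks involved in the read).
results = {
    "RAID 0, 6 disks": (664.59, 6),
    "RAID 1, 1 disk reading": (127.96, 1),
    "RAID 10 near=2, 3 disks reading": (354.60, 3),
}

for name, (total_mbps, disks) in results.items():
    print(f"{name}: {total_mbps / disks:.1f} MB/s per drive")
```

Each array works out to roughly 110-128 MB/s per spindle, which is what a single 1.5 TB Caviar Black delivers on a long sequential read.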

That's a linear scale. It also shows pretty well that "with RAID 1, reads are performed from both sides of the mirror" is not actually true with mdraid, or at least not for reads as performed by hdparm, which issues one long sequential read where prefetching gets maximal efficiency and going to another disk yields no performance increase.

sysadmin1138