
I have a software RAID 10 setup and it has been working great for a few months.

When I did a quick HD speed test:

dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync

It was taking forever and I had to terminate it; the result was:

1073741824 bytes (1.1 GB) copied, 151.27 s, 7.1 MB/s

What is causing this?

[root@host ~]# cat /proc/mdstat
Personalities : [raid1] [raid10]
md2 : active raid1 sdc1[4] sdd1[3] sdb1[1] sda1[0]
      204788 blocks super 1.0 [4/4] [UUUU]

md127 : active raid10 sdc4[4] sdb4[1] sdd4[3] sda4[0]
      1915357184 blocks super 1.2 256K chunks 2 near-copies [4/4] [UUUU]

md1 : active raid1 sdc3[4] sdb3[1] sdd3[3] sda3[0]
      8387576 blocks super 1.1 [4/4] [UUUU]

md0 : active raid1 sdc2[4] sda2[0] sdb2[1] sdd2[3]
      10484668 blocks super 1.1 [4/4] [UUUU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

Uptime:

[root@host ~]# uptime
 18:50:28 up 105 days, 11:34,  1 user,  load average: 0.04, 0.05, 0.00

Memory:

[root@host ~]# free -m
              total       used       free     shared    buffers   cached 
 Mem:         15893      15767        125          0        461    14166
 -/+ buffers/cache:       1139      14753 
 Swap:         8190          9       8181

Server Spec:

Xeon E3-1230

16GB DDR-3 ECC

4 x 1TB 7200 RPM SATA (software RAID)

I have noticed that when I run the fdisk -l command I get a bunch of "doesn't contain a valid partition table" messages. Does that have anything to do with it? If so, how do I fix it?

[root@host ~]# fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x000d1a79

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          26      204800   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sda2              26        1332    10485760   fd  Linux raid autodetect
/dev/sda3            1332        2376     8388608   fd  Linux raid autodetect
/dev/sda4            2376      121601   957679840+  fd  Linux raid autodetect

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x000303b7

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          26      204800   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sdb2              26        1332    10485760   fd  Linux raid autodetect
/dev/sdb3            1332        2376     8388608   fd  Linux raid autodetect
/dev/sdb4            2376      121601   957679840+  fd  Linux raid autodetect

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1          26      204800   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sdc2              26        1332    10485760   fd  Linux raid autodetect
/dev/sdc3            1332        2376     8388608   fd  Linux raid autodetect
/dev/sdc4            2376      121601   957679840+  fd  Linux raid autodetect

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x0006436c

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1   *           1          26      204800   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sdd2              26        1332    10485760   fd  Linux raid autodetect
/dev/sdd3            1332        2376     8388608   fd  Linux raid autodetect
/dev/sdd4            2376      121601   957679840+  fd  Linux raid autodetect

Disk /dev/md0: 10.7 GB, 10736300032 bytes
2 heads, 4 sectors/track, 2621167 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/md1: 8588 MB, 8588877824 bytes
2 heads, 4 sectors/track, 2096894 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md127: 1961.3 GB, 1961325756416 bytes
2 heads, 4 sectors/track, 478839296 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 262144 bytes / 524288 bytes
Disk identifier: 0x00000000

Disk /dev/md127 doesn't contain a valid partition table

Disk /dev/md2: 209 MB, 209702912 bytes
2 heads, 4 sectors/track, 51197 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md2 doesn't contain a valid partition table
  • To get a sane answer, you're going to need to provide a lot more information about your hardware. – EEAA Nov 12 '12 at 00:00
  • @EEAA I have updated more information. – I'll-Be-Back Nov 12 '12 at 00:17
  • Can you add the exact model of your hard disks and the output of `fdisk -lu /dev/sda` (and possibly `sdb` and `sdc` too)? If your hard disks are of the "Advanced Format" variety, you should take care that partitions be 4k-aligned. Read more about this [here](https://wiki.archlinux.org/index.php/Advanced_Format). – pino42 Nov 12 '12 at 23:40
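
A quick way to do the alignment check suggested in the last comment (a sketch only, assuming the 512-byte logical sectors that fdisk reports above): list the partition boundaries in sectors and confirm that each start sector is a multiple of 8, i.e. that it falls on a 4 KiB boundary.

    # Show partition start/end in sectors instead of cylinders
    fdisk -lu /dev/sda

    # A start sector that divides evenly by 8 sits on a 4 KiB boundary;
    # e.g. a start of 2048 is aligned (2048 / 8 = 256), a start of 63 is not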

1 Answer


I can think of several potential reasons for such a result:

  1. Way too many fdatasync() operations: in this case the throughput is limited by the number of synchronous transactions the hard drive can complete in a given timeframe. For traditional rotating media with a single head per surface, the absolute maximum is determined by the rotational speed. E.g. if every 64K block has to wait for a full platter rotation before it is committed, you would only get

    7200 rpm / 60 = 120 rotations/sec; 120 * 64K/rotation = 7,680 K/sec ≈ 7.5 MB/sec
    

    I suspect that, due to the semantics of RAID 1+0, each write only hits a single mirror pair at a time, since the dd block size is smaller than the array chunk size. That would limit the array's performance to that of a single mirror, which for writes is essentially that of a single drive - and the 7.1 MB/s you measured is close to that ceiling.

    On my system, dd with these options performs only a single fdatasync() call before quitting - perhaps yours does one per block? Running dd under strace would tell you whether that is the case (there is an example command after this list).

  2. A filesystem mounted with the sync option would also exhibit similar behavior; checking the mount options of the filesystem you are writing to (see the commands after this list) would rule this out.

  3. A hardware issue - I have frequently seen such delays due to faulty cables and failing hard drives. I would suggest perusing your system logs and running smartctl on all your drives. A read test would also be useful, since it should also be affected by a hardware problem - how does your array fare with reads?
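
For reference, example commands for the checks above (a sketch only; the device names come from the fdisk output in the question, and the test file is assumed to live on the root filesystem):

    # Item 1: count the fdatasync()/fsync() calls issued by dd. A single call at
    # the end is normal; one call per block would explain ~7 MB/s throughput.
    strace -c -e trace=fdatasync,fsync \
        dd if=/dev/zero of=test bs=64k count=1k conv=fdatasync

    # Item 2: check the mount options of the filesystem holding the test file
    # (assumed here to be the root filesystem); look for "sync" in the options.
    grep ' / ' /proc/mounts

    # Item 3: overall SMART health of every member disk
    for d in sda sdb sdc sdd; do echo "== $d =="; smartctl -H /dev/$d; done

    # Item 3: sequential read test against the RAID 10 array, bypassing the page
    # cache; a healthy array should read far faster than the synced write test.
    dd if=/dev/md127 of=/dev/null bs=1M count=4096 iflag=direct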
