
One of my servers has a RAID1 array composed of two 240GB SSDs. It is managed by Linux software RAID (md), not by a hardware RAID card.

Recently, for no apparent reason, the array needed rebuilding. I had rebooted the server a few times around then, so perhaps an unclean shutdown forced it.

However, the rebuild took significantly longer than expected (~5 days), which makes me wonder if one of the drives is failing.

`cat /proc/mdstat` shows:

root@i3261:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[1] sda1[0]
      242153280 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdb5[1] sda5[0]
      7768000 blocks super 1.2 [2/2] [UU]

unused devices: <none>

The delta between the `blocks` values seems very high, especially since these SSDs are supposedly identical.

Does this indicate a drive failing?
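
For reference, here is a minimal sketch of the checks that would show a failing drive directly, assuming `mdadm` and `smartmontools` are installed and using the device names from the mdstat output above (commands only, no output captured here):

# detailed state of each array (failed/rebuilding members, event counts)
mdadm --detail /dev/md0
mdadm --detail /dev/md1

# quick SMART health verdict for each SSD
smartctl -H /dev/sda
smartctl -H /dev/sdb

# full SMART attributes: look for reallocated/pending sectors and wear indicators
smartctl -a /dev/sda
smartctl -a /dev/sdb

If SMART shows reallocated or pending sectors climbing on one drive, that would point to the slow rebuild being a drive problem rather than an md quirk.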

Jeff Widman
  • What delta between the `blocks`? You realize those are two completely different and logically unrelated devices, right? – Michael Hampton Apr 09 '16 at 21:42
  • Maybe I'm misunderstanding the output here--I thought `md0` and `md1` referred to the two physical devices that were hooked together in a single RAID1 array. I don't understand what the `sdbX`/`sdaX` part means though... is this instead indicating there are two separate RAID arrays? – Jeff Widman Apr 09 '16 at 21:55
  • Exactly. Who set this up? `md0` and `md1` are your logical devices. `sda` and `sdb` are your physical devices. – Michael Hampton Apr 09 '16 at 21:57
  • Thanks! It's a mostly unmanaged server, but the host I'm renting it from set up the RAID1. Maybe they just set up a holdout partition for some management software or Linux image; I don't know. – Jeff Widman Apr 09 '16 at 23:36
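
To make the comments concrete: `md0` and `md1` are two separate RAID1 arrays, each mirroring one partition of `sda` against the matching partition of `sdb`, so the two `blocks` figures describe different arrays rather than the two disks. Something like `lsblk` would show that mapping; the output below is an abbreviated, illustrative sketch based only on the device names in the mdstat output above, not captured from the server:

root@i3261:~# lsblk -o NAME,TYPE
NAME      TYPE
sda       disk
├─sda1    part
│ └─md0   raid1
└─sda5    part
  └─md1   raid1
sdb       disk
├─sdb1    part
│ └─md0   raid1
└─sdb5    part
  └─md1   raid1

So the size difference between `md0` and `md1` is expected: they are different arrays built from differently sized partitions on the same pair of disks.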

0 Answers