I have a 16-disk backup RAID6 array. Currently, those 16 disks have the following specs:

  • 500 GB
  • 16 MB or 64 MB cache
  • 3 Gb/s SATA

I would like to begin upgrading these disks if possible, primarily to increase storage capacity. In a perfect world I would be able to swap out the older disks for a similar setup with much greater storage, e.g. a 2 TB disk instead of a 500 GB one.

However, I gather that having different-sized drives in a RAID array is a bad idea, so does anyone have any suggestions on how to proceed?

One suggestion has been to partition the 2 TB disk into four 500 GB partitions, but I don't know whether that would work (disk controller bottleneck, RAID issues, etc.).

Update - hardware details

Operating system (from cat /etc/*-release)

CentOS release 6.2 (Final)

RAID controller (from lspci)

RAID bus controller: 3ware Inc 9650SE SATA-II RAID PCIe 

RAID version

RAID6

Disk details

WD5003ABYX-01WE (500 GB, 7200 RPM, 64 MB cache, SATA 3 Gb/s) [x12]
WD5000ABYS-01TN (500 GB, 7200 RPM, 16 MB cache, SATA 3 Gb/s) [x4]
Alex
    We're missing an important bit of information... What type of array hardware is this? Please provide server or storage array specifications, the types of controller(s) involved and the operating systems tied to this. – ewwhite Jun 22 '13 at 20:06
  • What else do you need to know? – Alex Jun 22 '13 at 20:15

2 Answers

Assuming that your RAID controller supports this type of expansion, your plan should work.

However, I recommend not doing it this way. A resync on an array of sixteen 2 TB drives will take a very long time, and you will almost certainly run into uncorrectable read errors during the resync. Therefore, your final goal should be a RAID60 array. If your controller does not support that, then instead create two RAID6 arrays of 8 disks each and use your OS to stripe over the two arrays (see the sketch below).
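
If the controller can present the two 8-disk RAID6 units as separate block devices, the OS-level stripe can be built with mdadm. A minimal sketch, assuming the two units appear as /dev/sdb and /dev/sdc (hypothetical names):

# Stripe (RAID0) across the two hardware RAID6 units; the result is RAID60.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0

LVM striping (lvcreate -i 2 across two physical volumes) would work just as well if you would rather manage it there.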

Even better would be to scrap your hardware RAID and switch to something designed to handle very large drives and very large volumes. My personal preference is ZFS. If you go with ZFS, I would recommend three raidz1 groups of 5 disks each, plus a hot spare.
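
A sketch of that layout, assuming ZFS is available on your platform and using hypothetical device names sdb through sdq for the 16 disks:

# Pool of three 5-disk raidz1 vdevs plus one hot spare (16 disks total).
zpool create backup \
    raidz1 sdb sdc sdd sde sdf \
    raidz1 sdg sdh sdi sdj sdk \
    raidz1 sdl sdm sdn sdo sdp \
    spare sdq
zpool status backup   # verify the vdev layout

This also makes future upgrades easier: you can replace the disks of one raidz1 group at a time, and that group grows once all five members are larger (with the pool's autoexpand property enabled).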

longneck

With Linux mdadm RAID, I would replace every disk one by one with a new drive and grow the array once all the drives have been replaced. It doesn't matter that you're using 2 TB disks instead of 500 GB ones; you just won't have the extra 1.5 TB per disk available until they have all been replaced and you grow the array (see the sketch below the quoted passage). Read this, for instance:

Expanding existing partitions

It is possible to migrate the whole array to larger drives (e.g. 250 GB to 1 TB) by replacing one by one. In the end the number of devices will be the same, the data will remain intact, and you will have more space available to you.
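
A minimal sketch of that procedure, assuming the array is /dev/md0 and the disk being swapped appears as /dev/sdb (hypothetical names); repeat this for each of the 16 drives, waiting for the rebuild to finish every time:

# Drop the old disk from the array.
mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb
# Physically swap in the 2 TB drive, then add it back.
mdadm /dev/md0 --add /dev/sdb
cat /proc/mdstat   # wait for the rebuild to complete before the next disk

# Once all 16 disks are replaced, expand the array into the new space,
# then grow the filesystem on top (resize2fs for ext3/ext4).
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0

Note that this applies to Linux software RAID; on your 3ware controller the equivalent steps would go through its own management tools instead.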

I'd contact 3ware (LSI) tech support and ask. They've been quite helpful to me on several occasions (I actually have several servers using that RAID card).

(not really the same as your issue, but I did something similar)

Halfgaar