
I have 4 drives: 2x 640GB and 2x 1TB. My array is made up of four 640GB partitions at the beginning of each drive. I want to replace both 640GB drives with 1TB drives. I understand I need to 1) fail a disk, 2) replace it with the new one, 3) partition it, 4) add the disk back into the array.
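
If I have that right, each swap would go something like this (just my rough understanding of the mdadm commands, with /dev/sdd1 standing in for whichever member gets replaced):

# mark the old member as failed and pull it out of the array
mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1
# power down, swap in the new 1TB drive, partition it, then re-add:
mdadm /dev/md0 --add /dev/sdd1
# watch the rebuild finish before touching the next drive
cat /proc/mdstat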

My question is, when I create the new partition on the new 1TB drive, do I create a 1TB "Raid Auto Detect" partition? Or do I create another 640GB partition and grow it later?

Or perhaps the same question could be worded this way: after I replace the drives, how do I grow the 640GB RAID partitions to fill the rest of the 1TB drives?

fdisk info:

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xe3d0900f

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       77825   625129281   fd  Linux raid autodetect
/dev/sdb2           77826      121601   351630720   83  Linux

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xc0b23adf

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       77825   625129281   fd  Linux raid autodetect
/dev/sdc2           77826      121601   351630720   83  Linux

Disk /dev/sdd: 640.1 GB, 640135028736 bytes
255 heads, 63 sectors/track, 77825 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x582c8b94

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       77825   625129281   fd  Linux raid autodetect

Disk /dev/sde: 640.1 GB, 640135028736 bytes
255 heads, 63 sectors/track, 77825 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xbc33313a

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1       77825   625129281   fd  Linux raid autodetect

Disk /dev/md0: 1920.4 GB, 1920396951552 bytes
2 heads, 4 sectors/track, 468846912 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
hometoast

1 Answer


My question is, when I create the new partition on the new 1TB drive, do I create a 1TB "Raid Auto Detect" partition?

You can, but you're not going to gain anything immediately from that.

Or do I create another 640GB partition and grow it later?

Yes.
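
Roughly, the "grow later" path would look like this once every 640GB member has been replaced (a sketch only, using the device names from your fdisk output; verify against your own layout before running anything):

# after the last 640GB drive has been swapped and the resync is done,
# enlarge each member partition in turn (fail/remove it, delete and
# recreate it with the same starting cylinder but a larger end, re-add,
# wait for resync), then tell md to use the extra space:
mdadm --grow /dev/md0 --size=max
# finally grow whatever sits on top of md0, e.g.
resize2fs /dev/md0        # ext2/ext3 directly on the array
pvresize /dev/md0         # or this, if LVM sits on the array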

RAID-on-partition has its uses, but when you're using the drives as a pseudo storage pool, you're sometimes better off using 'whole drive' rather than 'partition' RAID members. Designating the whole drive (i.e. /dev/sdc instead of /dev/sdc1) implicitly tells the RAID mechanism that the entire drive is to be used, so no partition needs to be created/expanded/moved/what-have-you. This turns the hard drive into a 'storage brick' that is more-or-less interchangeable, with the caveat that the space contributed by each 'brick' is capped by the smallest drive in the set (e.g. if you have a 40GB, an 80GB, and 2x 120GB, the RAID mechanism will use 4x 40GB because it can't obtain more space from the smallest drive). Note that this answer is for Linux software RAID (mdadm) and may or may not apply to other environments.
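
For illustration, building a whole-drive array looks something like this (a sketch only; --create assembles a brand-new array and wipes whatever was on those drives, so it is not something to run against an array you want to keep):

# four bare drives as members, no partitions involved
# (assumes a 4-disk RAID5 -- adjust --level / --raid-devices as needed)
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde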

The downside is that if you need flexibility in your RAID configuration, you lose that ability, because the entire drive will be claimed. You can, however, offset that loss by using LVM on top of the RAID. Another issue with whole-drive RAID is that some recovery processes require a little more thought, because they often assume the presence of a partition; a tool that expects a partition table may balk at the drive.


Unsolicited Advice (and nothing more than that, if it breaks, you keep both pieces, etc.):

Your best bet is to set up your RAID array as you like, using the 'whole drive' technique, and then use LVM to manage your partitions. This gives you a smidgen of fault tolerance from RAID, plus the flexibility of dynamically resizable partitions. An added bonus: if you use Ext3 (and possibly Ext2 supports this, not sure) you can resize the 'partitions' while they are mounted. Being able to shift the size of mountpoints around while they are 'hot' is a wonderful feature and I recommend considering it.
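
In outline, that stack goes together like this (the volume group, logical volume, and mount point below are made-up placeholders, not anything on your system):

# make the array an LVM physical volume
pvcreate /dev/md0
# build a volume group on it and carve out a logical volume
vgcreate vg_storage /dev/md0
lvcreate -L 500G -n lv_data vg_storage
# put ext3 on the logical volume and mount it
mkfs.ext3 /dev/vg_storage/lv_data
mount /dev/vg_storage/lv_data /srv/data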


Additional follow-up:

I received a comment that Ext2 does not support hot resizing. In reality it does, but only for increases in size; you can read more at http://ext2resize.sourceforge.net/online.html. Having done it a few times myself, I can say that it is possible, it does work, and it can save you time.
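
Growing a mounted ext3 volume is roughly a two-step affair (again using the placeholder LVM names from above; recent e2fsprogs handle the online grow through resize2fs, while the page linked describes the older ext2online tool):

# add space to the logical volume while it is mounted
lvextend -L +100G /dev/vg_storage/lv_data
# grow the filesystem into the new space, still mounted
# (growing online works; shrinking still requires an unmount)
resize2fs /dev/vg_storage/lv_data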

Avery Payne
  • Thanks for such a complete answer. I ended up with the raid-on-partition because at the time, I didn't want to waste the ~300GB on each of the 1TB drives. So from here out, should I add the 2x new 1TB drives as 'whole drives'? To expand to the rest of the current 1TB's, should I similarly fail, delete partitions, then re-add and resync those as well? – hometoast Apr 28 '10 at 17:43
  • 1) you don't have to if you don't want to, but if you like the idea, I would. 2) yup. Make sure the rebuild is complete after each fail/add. – Avery Payne Apr 28 '10 at 21:12
  • Thanks very much. 1 converted from partition to whole drive (only took 2.5 hours), 1 tomorrow, then I should have the new hdd's in hand for the weekend. – hometoast Apr 29 '10 at 04:30
  • ext2 does not support resizing while mounted. – Jasper Jul 07 '10 at 16:32
  • It *does* support hot resizing, but only for increasing volume sizes. See this page for more details. http://ext2resize.sourceforge.net/online.html – Avery Payne Jul 07 '10 at 20:22