I have an Iomega IX2-200 which came with 2 TB (1.8 TB usable) of space.
It has two disks set up as RAID1.
I am trying to upgrade this to 4 TB disks.
So far this is the process I have followed:
Remove the second disk from the IX2 and replace it with a 4 TB disk.
The IX2 automatically starts to resync/mirror disk 1 (2 TB) to the new 4 TB disk.
After several hours, we see the second disk as 1.8 TB.
Replace the first disk with another 4 TB drive, and restart.
The IX2 again starts mirroring disk 2 to disk 1.
Several hours later we have two 4 TB disks in the IX2, but only 1.8 TB shows as available.
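(As an aside, while the mirror is rebuilding the resync progress can be followed over SSH with the usual mdraid status checks; /dev/md1 appears to be the data array on this unit, with md0 holding the system volumes:)

cat /proc/mdstat      # shows the rebuild percentage and estimated finish time
mdadm -D /dev/md1     # detailed state of the data array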
The IX2 does not have gdisk installed, so I remove the disks, connect them to a Linux server as USB drives, and run gdisk:
gdisk /dev/sdh
x    (experts' menu)
e    (relocate backup data structures to the end of the disk)
This enables me to extend the partition (type 0700, Microsoft basic data).
Repeat with the other disk.
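For reference, the general gdisk sequence to relocate the backup GPT data and then recreate the data partition at full size looks roughly like the sketch below; partition 2 as the data partition, its original start sector, and /dev/sdh as the device name are all assumptions based on how the disk showed up on my server.

gdisk /dev/sdh
p      # note the start sector and type code (0700) of partition 2
x      # experts' menu
e      # relocate backup data structures to the end of the (now larger) disk
m      # back to the main menu
d      # delete the data partition (2)
n      # recreate it with the same number and start sector,
       # accept the default end sector, and set type 0700 again
w      # write the new table and quit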
Now put the disks back into the IX2 and reboot.
Grow and resize the volume:
umount /mnt/pools/A/A0
mdadm --grow /dev/md1 --size=max
pvresize /dev/md1
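(To confirm the mdadm --grow step actually took effect, the raw size of md1 can be checked directly; this assumes the standard util-linux tools are present in the IX2 firmware:)

cat /proc/mdstat                 # block count reported for md1
blockdev --getsize64 /dev/md1    # array size in bytes, which should now be close to 4 TB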
Check the results:
vgdisplay
  --- Volume group ---
  VG Name               5244dd0f_vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               3.62 TB
  PE Size               4.00 MB
  Total PE              948739
  Alloc PE / Size       471809 / 1.80 TB
  Free  PE / Size       476930 / 1.82 TB
  VG UUID               FB2tzp-8Gr2-6Dlj-9Dck-Tyc4-Gxx5-HHIsBD

  --- Volume group ---
  VG Name               md0_vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               20.01 GB
  PE Size               4.00 MB
  Total PE              5122
  Alloc PE / Size       5122 / 20.01 GB
  Free  PE / Size       0 / 0
  VG UUID               EA3tJR-nVdm-0Dcf-YtBE-t1Qj-peHc-Sh0zXe
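(The Free PE / Size line suggests pvresize worked: the volume group is now 3.62 TB with about 1.82 TB unallocated, while the single logical volume is still only 1.80 TB. A quick cross-check with the standard LVM reporting tools, if they are present:)

pvs    # physical volume should now show roughly 3.6 TB
lvs    # the data LV will still show roughly 1.8 TB until it is extended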
Reboot.
Result: it still shows as 1.8 TB:
df -h
Filesystem                          Size  Used Avail Use% Mounted on
rootfs                               50M  2.5M   48M   5% /
/dev/root.old                       6.5M  2.1M  4.4M  33% /initrd
none                                 50M  2.5M   48M   5% /
/dev/md0_vg/BFDlv                   4.0G  607M  3.2G  16% /boot
/dev/loop0                          576M  569M  6.8M  99% /mnt/apps
/dev/loop1                          4.9M  2.2M  2.5M  47% /etc
/dev/loop2                          212K  212K     0 100% /oem
tmpfs                               122M     0  122M   0% /mnt/apps/lib/init/rw
tmpfs                               122M     0  122M   0% /dev/shm
/dev/mapper/md0_vg-vol1              16G  1.2G   15G   8% /mnt/system
/dev/mapper/5244dd0f_vg-lv58141b0d  1.8T  1.7T  152G  92% /mnt/pools/A/A0
I spotted a couple of config files with volume sizes, so I edited them. In /etc/sohoProvisioning.xml I increased the Size values for Ident 2 and 3 below:
<Partitions>
<Partition Ident="0" Drive="0" Size="21484429312" Device="sda1" SysPartition="1"></Partition>
<Partition Ident="1" Drive="1" Size="21484429312" Device="sdb1" SysPartition="1"></Partition>
<Partition Ident="2" Drive="0" Size="3979300000000" Device="sda2" SysPartition="0"></Partition>
<Partition Ident="3" Drive="1" Size="3979300000000" Device="sdb2" SysPartition="0"></Partition>
</Partitions>
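(Editing this XML presumably only changes what the Iomega management layer reports; to see the partition and array sizes the kernel itself is using, something like this can be run on the IX2:)

cat /proc/partitions    # block counts for sda2 / sdb2 as the kernel sees them
mdadm -D /dev/md1       # component device and array sizes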
Rebooted, but still only 1.8 TB is usable.
Update 1
Following the suggestion in the first answer, I ran:
lvresize -l +100%FREE /dev/mapper/5244dd0f_vg-lv58141b0d
Then I ran:
xfs_growfs /mnt/pools/A/A0
meta-data=/dev/mapper/5244dd0f_vg-lv58141b0d isize=256    agcount=4, agsize=120783104 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=483132416, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
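(When xfs_growfs actually grows a filesystem it also prints a "data blocks changed from X to Y" line; since that line is missing above, it is worth checking whether the logical volume itself really grew. Assuming the usual LVM and XFS tools are available:)

lvdisplay /dev/5244dd0f_vg/lv58141b0d    # LV Size should read about 3.6 TB if lvresize took effect
xfs_info /mnt/pools/A/A0                 # block count as reported by the mounted filesystem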
But the array size is unchanged:
root@nmsts1:/# mdadm -D /dev/md1
/dev/md1:
        Version : 01.00
  Creation Time : Mon Mar 7 08:45:49 2011
     Raid Level : raid1
     Array Size : 3886037488 (3706.01 GiB 3979.30 GB)
  Used Dev Size : 7772074976 (7412.03 GiB 7958.60 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 1
    Persistence : Superblock is persistent
I seem to have broken the second disk, so the array only shows /dev/sda, but even with one disk the resize should work, shouldn't it?
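(Separately, once the grow is sorted out, the second disk should be re-addable to the degraded mirror with a standard mdadm hot-add and left to resync; sdb2 is an assumption based on the partition layout above.)

mdadm /dev/md1 --add /dev/sdb2    # hot-add the second disk's data partition and let it rebuild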