
I have an Iomega IX2-200 which came with 2TB (1.8TB usable) of space.

It has two disks set up as RAID1.

I am trying to upgrade this to 4TB disks.

So far this is the process I have followed:

  1. Remove the 2nd disk from the IX2 and replace it with a 4TB disk.

  2. The IX2 automatically starts to resync/mirror disk 1 (2TB) to the new 4TB disk.

  3. After several hours, we see the second disk as 1.8TB.

  4. Replace the first disk with another 4TB drive, and restart.

  5. The IX2 again starts mirroring disk 2 to disk 1.

  6. Several hours later we have two 4TB disks in the IX2, but only 1.8TB showing as available.

  7. The IX2 does not have gdisk installed, so I remove the disks, connect them to a Linux server as USB drives and run gdisk:

gdisk /dev/sdh
  x    (expert menu)
  e    (relocate the backup GPT data structures to the end of the now-larger disk)

This enables me to extend the partition (type Microsoft basic data 0700).
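
The extend itself is the usual GPT delete-and-recreate, roughly as follows (a sketch of the keystrokes rather than an exact transcript; partition 2 is the data partition, and the data survives because the recreated partition keeps the same first sector):

  m    (back to the main menu)
  d    (delete partition 2, the data partition)
  n    (recreate partition 2: same number, same first sector, default last sector = end of disk)
  t    (set the type code back to 0700, Microsoft basic data)
  w    (write the new table and exit)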

  8. Repeat with the other disk.

  9. Now put the disks back into the IX2 and reboot.

  10. Grow the RAID array and resize the LVM physical volume:

umount /mnt/pools/A/A0              # unmount the data volume first
mdadm --grow /dev/md1 --size=max    # grow the RAID1 array to the full size of its members
pvresize /dev/md1                   # tell LVM that the physical volume has grown
  11. Check the results:
    vgdisplay
      --- Volume group ---
      VG Name               5244dd0f_vg
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  6
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                1
      Open LV               0
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               3.62 TB
      PE Size               4.00 MB
      Total PE              948739
      Alloc PE / Size       471809 / 1.80 TB
      Free  PE / Size       476930 / 1.82 TB
      VG UUID               FB2tzp-8Gr2-6Dlj-9Dck-Tyc4-Gxx5-HHIsBD

    
      --- Volume group ---
      VG Name               md0_vg
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  3
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                2
      Open LV               2
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               20.01 GB
      PE Size               4.00 MB
      Total PE              5122
      Alloc PE / Size       5122 / 20.01 GB
      Free  PE / Size       0 / 0
      VG UUID               EA3tJR-nVdm-0Dcf-YtBE-t1Qj-peHc-Sh0zXe
  12. Reboot.

  13. Result: still shows as 1.8TB:

df -h
Filesystem            Size  Used Avail Use% Mounted on
rootfs                 50M  2.5M   48M   5% /
/dev/root.old         6.5M  2.1M  4.4M  33% /initrd
none                   50M  2.5M   48M   5% /
/dev/md0_vg/BFDlv     4.0G  607M  3.2G  16% /boot
/dev/loop0            576M  569M  6.8M  99% /mnt/apps
/dev/loop1            4.9M  2.2M  2.5M  47% /etc
/dev/loop2            212K  212K     0 100% /oem
tmpfs                 122M     0  122M   0% /mnt/apps/lib/init/rw
tmpfs                 122M     0  122M   0% /dev/shm
/dev/mapper/md0_vg-vol1
                       16G  1.2G   15G   8% /mnt/system
/dev/mapper/5244dd0f_vg-lv58141b0d
                      1.8T  1.7T  152G  92% /mnt/pools/A/A0
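
A quick way to see where the extra space stops is to compare the logical volume size with the filesystem size; for example (device and mount point taken from the df output above, and assuming XFS):

lvdisplay /dev/5244dd0f_vg/lv58141b0d   # "LV Size" shows whether the logical volume itself was ever extended
xfs_info /mnt/pools/A/A0                # data blocks x block size is the size the filesystem thinks it has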

I spotted a couple of config files with volume sizes, so I edited these:

/etc/sohoProvisioning.xml

Increasing the Size values for Ident 2 and 3 below:

<Partitions>
<Partition Ident="0" Drive="0" Size="21484429312" Device="sda1" SysPartition="1"></Partition>
<Partition Ident="1" Drive="1" Size="21484429312" Device="sdb1" SysPartition="1"></Partition>
<Partition Ident="2" Drive="0" Size="3979300000000" Device="sda2" SysPartition="0"></Partition>
<Partition Ident="3" Drive="1" Size="3979300000000" Device="sdb2" SysPartition="0"></Partition>
</Partitions>
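
(These Size values appear to be in bytes: 21484429312 bytes is about 20.01 GiB, matching the md0_vg size in the vgdisplay output above, and 3979300000000 bytes is about 3.62 TiB, matching the grown data array.)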

Rebooted, but still only 1.8TB is usable.

Update 1

Following the suggestion in the first answer, I ran:

lvresize -l +100%FREE /dev/mapper/5244dd0f_vg-lv58141b0d

Then I ran:

xfs_growfs  /mnt/pools/A/A0
meta-data=/dev/mapper/5244dd0f_vg-lv58141b0d isize=256    agcount=4, agsize=120783104 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=483132416, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
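
(483132416 blocks × 4096 bytes per block is still only about 1.8 TiB, and there is no "data blocks changed" line in the output, so the filesystem was not actually grown.)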

But the array size is unchanged:

root@nmsts1:/# mdadm -D /dev/md1
/dev/md1:
        Version : 01.00
  Creation Time : Mon Mar  7 08:45:49 2011
     Raid Level : raid1
     Array Size : 3886037488 (3706.01 GiB 3979.30 GB)
  Used Dev Size : 7772074976 (7412.03 GiB 7958.60 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 1
    Persistence : Superblock is persistent

I seem to have broken the second disk, so the array is only showing /dev/sda, but even with one disk the resize should work, shouldn't it?
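
For completeness, once the array is sorted out the second disk should just need re-adding so that md resyncs it; a rough sketch, assuming its data partition comes up as /dev/sdb2:

mdadm --manage /dev/md1 --add /dev/sdb2   # re-add the second disk's data partition and let md rebuild the mirror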

TenG

5 Answers


You did everything except the last two steps:

  • Resizing the logical volume. You have 1.82TB free showing in your vgdisplay, so you've done everything up to this point correctly. Now you just need to resize the LV. For example:

    lvresize -l +100%FREE /dev/mapper/5244dd0f_vg-lv58141b0d
    
  • Finally, resizing the filesystem within the logical volume. How to do that depends on which filesystem you used, but that information is not available in your post.
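
    For example, if it were XFS you would use xfs_growfs, and for ext3/ext4 resize2fs (paths taken from the df output in the question):

    xfs_growfs /mnt/pools/A/A0                      # XFS grows online, given the mount point
    resize2fs /dev/mapper/5244dd0f_vg-lv58141b0d    # ext3/ext4 equivalent, given the device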

Michael Hampton
  • Thanks. I tried the `lvresize` then `xfs_growfs` but no change. I'll update the question with this info and the outputs of pvdisplay, lvdisplay and xfs_info. – TenG Sep 19 '20 at 16:57
  • @TenG You forgot the output from `lvresize`. – Michael Hampton Sep 19 '20 at 17:36
  • See my answer below. Your help proved invaluable in getting me to achieve the goal. I think the firmware upgrade made the system aware/capable of handling the larger disk, after which the lvresize and xfs_growfs worked. Thanks. – TenG Sep 20 '20 at 08:55

Following the advice from Michael, I tried the lvresize and xfs_growfs but saw no difference.

I'd also somehow managed to 'break' the 2nd disk.

In desperation I found this article:

https://www.computerworld.com/article/2717174/vsphere-upgrade-saga--upgrading-the-storage-on-your-iomega-ix2-200.html

This led me to apply the firmware upgrade for my Iomega IX2-200 (from around 2012), bringing it to 3.2.16.30221. I downloaded the .tgz file and presented it to the IX2's web control panel app.

The upgrade took a while.

After the upgrade the web app started reporting mixed messages about the storage: the main progress bar suggested 50%, i.e. it was now seeing the 3.7TB space, but df -h on the system was still reporting 1.7TB.

So I tried xfs_growfs, and then df -h reported 3.7TB.

Relief!!

A few things to note:

  1. Articles and user guides suggested the Iomega is 'nobbled' to support a max of 3TB; I have successfully swapped in 4TB drives.

  2. My suggestion would be to upgrade the firmware first.

  3. Once the firmware is upgraded, follow the procedure in my question.

  4. Having another Linux machine into which you can plug the drives helps with taking a backup, and lets you use or install a wider range of tools that might be needed (in my case a newer version of gdisk); see the sketch below.
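
Getting at the data from that second machine looks roughly like this (a sketch; /dev/sdh2 and /mnt/ix2data are placeholders for whatever the data partition and a spare mount point are on that machine, and the VG/LV names are the ones from my question):

mdadm --assemble --run /dev/md127 /dev/sdh2            # start the RAID1 from the single member (--run lets it start degraded)
vgchange -ay                                           # activate the LVM volume group that lives on it
mkdir -p /mnt/ix2data
mount -o ro /dev/5244dd0f_vg/lv58141b0d /mnt/ix2data   # mount the data volume read-only for the backup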

TenG

After a lot of trying, I got the volume name for lvresize from lvscan. The volume name from df -k didn't work for me.

root@sc-disk1:/# lvscan
  ACTIVE            '/dev/md1_vg/md1vol1' [463.81 GB] inherit

root@sc-disk1:/# lvresize -l +100%FREE /dev/md1_vg/md1vol1

  Extending logical volume md1vol1 to 929.57 GB
  Logical volume md1vol1 successfully resized

root@sc-disk1:/# xfs_growfs /mnt/soho_storage

meta-data=/dev/mapper/md1_vg-md1vol1 isize=256    agcount=4, agsize=30396544 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=121586176, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

data blocks changed from 121586176 to 243680256

s6ch13

I upgraded my StorCenter with 2x 4TB drives, and I believe I spent very little time fixing this.

1- Take out 1 old (1TB) drive and put in 1 new (4TB) drive.

2- Let it boot and restore the RAID-1

3- Take out the second old drive and put in the second new drive.

4- Go to "Drive management". Set RAID to 0... After a minute or so the NAS was done configuring that.

5- Set the RAID back to 1 and... PRESTO! The NAS rebuilt the RAID with the right (4TB) capacity.

Thanks to:

https://alfredomarchena.wordpress.com/2012/01/31/upgrading-iomega-ix2-200-to-bigger-hard-drives/

Tesander

I have an ix4-200. After having changed four disks (because they failed over time), I followed some steps from the original post and comments, and successfully grew my volume.

These are the steps I performed:

mdadm --grow /dev/md1 --size=max                    # grow the RAID array to fill the new disks
pvresize /dev/md1                                   # let LVM see the larger physical volume
lvresize -l +100%FREE /dev/48c2abaf_vg/lv6cbadd06   # extend the logical volume into the free space
xfs_growfs /dev/48c2abaf_vg/lv6cbadd06              # grow the XFS filesystem to match
MariusPontmercy