
I have created a software RAID 5 array, md0, using mdadm on Linux. The configuration uses 6 hard disks (4 TB each), and the resulting RAID 5 array is about 20 TB. As we know, RAID 5 follows the n-1 rule, where n is the total number of disks: one disk's worth of capacity goes to parity, so the array survives a single disk failure. Here is my configuration:

 $ cat /proc/mdstat
 md0 : active raid5 sdc1[4] sdg1[6] sdd1[1] sdf1[5] sde1[3] sdb1[0]
       19534428160 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]
       bitmap: 0/30 pages [0KB], 65536KB chunk



  [root@storageserver ~]# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed May 17 17:28:11 2017
        Raid Level : raid5
        Array Size : 19534428160 (18629.48 GiB 20003.25 GB)
     Used Dev Size : 3906885632 (3725.90 GiB 4000.65 GB)
      Raid Devices : 6
     Total Devices : 6
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Jan  9 16:37:50 2019
             State : clean 
    Active Devices : 6
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : localhost.localdomain:0
              UUID : 988759d7:91d52c10:f4e39656:2129ab64
            Events : 51388

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       49        1      active sync   /dev/sdd1
       3       8       65        2      active sync   /dev/sde1
       4       8       33        3      active sync   /dev/sdc1
       6       8       97        4      active sync   /dev/sdg1
       5       8       81        5      active sync   /dev/sdf1
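
As a sanity check, the reported Array Size is consistent with the n-1 rule; here is a quick sketch using the Used Dev Size above (mdadm reports both values in 1 KiB blocks):

```shell
# RAID5 keeps one disk's worth of space for distributed parity, so the
# usable capacity is (n-1) times the per-disk size.
disks=6
per_disk_kib=3906885632                       # "Used Dev Size" from mdadm --detail
usable_kib=$(( (disks - 1) * per_disk_kib ))
echo "$usable_kib"                            # matches the Array Size: 19534428160
```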

When I mount this array at /mnt/wsraid, the Linux df -h command shows the size of this array as only 7.3 TB:

[root@storageserver ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda1                494M  260M  234M  53% /boot
/dev/mapper/centos-root   50G   43G  7.2G  86% /
/dev/mapper/centos-home  166G   79G   88G  48% /home
/dev/md0                 7.3T  7.3T     0 100% /mnt/wsraid

Now I'm completely unable to copy more than 7.3 TB of data. I searched the internet but could not find out how to mount my full-size array (about 20 TB / 18 TiB). Please help me fix this issue. Thanks.

Here is the fdisk -l output:

[root@storageserver ~]# fdisk -l

Disk /dev/sda: 240.1 GB, 240057409536 bytes, 468862128 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x0008ff0d

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048   468860927   233917440   8e  Linux LVM
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdb: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: A577BBCF-4DEE-47AF-9747-B03DC20700E0


#         Start          End    Size  Type            Name
 1         2048   7814035455    3.7T  Microsoft basic 

Disk /dev/sdd: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: FBA6EDAA-1C7E-47D8-8A7C-C6CC82CB482A


#         Start          End    Size  Type            Name
 1         2048   7814035455    3.7T  Microsoft basic 

Disk /dev/sdc: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: 200CDB7A-D314-4377-B0F3-33430D804913


#         Start          End    Size  Type            Name
 1         2048   7814035455    3.7T  Microsoft basic 

Disk /dev/sde: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: 43CF8B19-B035-49BC-A015-D9F9CDC8779A


#         Start          End    Size  Type            Name
 1         2048   7814035455    3.7T  Microsoft basic 

Disk /dev/sdf: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: 49A22211-B798-45BC-9499-0F32D2E7D1EC


#         Start          End    Size  Type            Name
 1         2048   7814035455    3.7T  Microsoft basic 

Disk /dev/sdg: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: 449A3298-D412-4AEE-897C-B307B6868F1B


#         Start          End    Size  Type            Name
 1         2048   7814035455    3.7T  Microsoft basic 

Disk /dev/mapper/centos-root: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/mapper/centos-swap: 8388 MB, 8388608000 bytes, 16384000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/md0: 20003.3 GB, 20003254435840 bytes, 39068856320 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 2621440 bytes


Disk /dev/mapper/centos-home: 177.4 GB, 177385504768 bytes, 346456064 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Also, for debugging, here is the lsblk output:

[root@storageserver ~]# lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda               8:0    0 223.6G  0 disk  
├─sda1            8:1    0   500M  0 part  /boot
└─sda2            8:2    0 223.1G  0 part  
  ├─centos-root 253:0    0    50G  0 lvm   /
  ├─centos-swap 253:1    0   7.8G  0 lvm   [SWAP]
  └─centos-home 253:2    0 165.2G  0 lvm   /home
sdb               8:16   0   3.7T  0 disk  
└─sdb1            8:17   0   3.7T  0 part  
  └─md0           9:0    0  18.2T  0 raid5 /mnt/wsraid
sdc               8:32   0   3.7T  0 disk  
└─sdc1            8:33   0   3.7T  0 part  
  └─md0           9:0    0  18.2T  0 raid5 /mnt/wsraid
sdd               8:48   0   3.7T  0 disk  
└─sdd1            8:49   0   3.7T  0 part  
  └─md0           9:0    0  18.2T  0 raid5 /mnt/wsraid
sde               8:64   0   3.7T  0 disk  
└─sde1            8:65   0   3.7T  0 part  
  └─md0           9:0    0  18.2T  0 raid5 /mnt/wsraid
sdf               8:80   0   3.7T  0 disk  
└─sdf1            8:81   0   3.7T  0 part  
  └─md0           9:0    0  18.2T  0 raid5 /mnt/wsraid
sdg               8:96   0   3.7T  0 disk  
└─sdg1            8:97   0   3.7T  0 part  
  └─md0           9:0    0  18.2T  0 raid5 /mnt/wsraid

I'm using an ext4 filesystem on the RAID 5 array; please see the fstab output too:

[root@storageserver ~]# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Fri May 19 15:00:43 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=7fb7d6be-1fbe-4567-b895-2045fb0023f6 /boot                   xfs     defaults        0 0
/dev/mapper/centos-home /home                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
## mounted raid device
/dev/md0    /mnt/wsraid ext4    defaults    0 2

Here is the blkid output:

[root@storageserver ~]# blkid
/dev/mapper/centos-root: UUID="40c11708-98ef-4558-9f87-0423141ee60d" TYPE="xfs" 
/dev/sda2: UUID="kGDekH-73be-Evv0-rF4j-M9kE-fsvC-ezKi3e" TYPE="LVM2_member" 
/dev/sda1: UUID="7fb7d6be-1fbe-4567-b895-2045fb0023f6" TYPE="xfs" 
/dev/sdb1: UUID="988759d7-91d5-2c10-f4e3-96562129ab64" UUID_SUB="bd69f6f1-42c0-8072-ae15-365572fc49b0" LABEL="localhost.localdomain:0" TYPE="linux_raid_member" PARTUUID="9370d19b-81e1-4e0d-bd3c-b08f5974dd7f" 
/dev/sdd1: UUID="988759d7-91d5-2c10-f4e3-96562129ab64" UUID_SUB="37f5bd25-537e-b49a-a768-db12eb6f5f07" LABEL="localhost.localdomain:0" TYPE="linux_raid_member" PARTUUID="d89df7a3-2b3e-42a2-ad28-9c6c2dbe6275" 
/dev/sdc1: UUID="988759d7-91d5-2c10-f4e3-96562129ab64" UUID_SUB="7c08e4f6-78dc-dfc2-37f5-d7ea681b0dde" LABEL="localhost.localdomain:0" TYPE="linux_raid_member" PARTUUID="17a4d434-0883-430f-a0e9-7d8236bc9cc0" 
/dev/sde1: UUID="988759d7-91d5-2c10-f4e3-96562129ab64" UUID_SUB="5687ab0f-0c0f-25b8-e527-03a28463577f" LABEL="localhost.localdomain:0" TYPE="linux_raid_member" PARTUUID="91a34c90-3ed3-4418-abf4-bcfefcd278ae" 
/dev/sdf1: UUID="988759d7-91d5-2c10-f4e3-96562129ab64" UUID_SUB="765d94c6-d6ad-3a0a-ad95-2f36d14e5ee5" LABEL="localhost.localdomain:0" TYPE="linux_raid_member" PARTUUID="70091b1c-052b-49fd-9fa5-d4512c26105f" 
/dev/sdg1: UUID="988759d7-91d5-2c10-f4e3-96562129ab64" UUID_SUB="07680433-8091-fca8-7daf-70218e70c329" LABEL="localhost.localdomain:0" TYPE="linux_raid_member" PARTUUID="faf0474b-0cd2-45e5-b5a5-b24b43e8cfa6" 
/dev/mapper/centos-swap: UUID="80883a57-2a98-4530-a9a3-717a9573d71a" TYPE="swap" 
/dev/md0: UUID="ddc6f4a7-b22a-483f-8e51-6f1280a4d4b7" TYPE="ext4" 
/dev/mapper/centos-home: UUID="287a27dd-9eaf-42fa-a8f0-a4a618a90ad7" TYPE="xfs"
pawan1491

1 Answer


Your ext4 filesystem is actually smaller than the block device (/dev/md0) which contains it.

Normally I would just say to run resize2fs /dev/md0 to resize it to the same size as the containing block device, but in your case this won't work: an ext4 filesystem created without the 64bit feature (the default with CentOS 7's e2fsprogs) cannot be grown past 16 TiB.
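
To see the mismatch directly, a couple of read-only checks can compare the filesystem's own idea of its size with the size of the device (a sketch; dumpe2fs comes from e2fsprogs and blockdev from util-linux, and nothing here writes to the array):

```shell
# The ext4 filesystem records its size as block count x block size in
# its superblock; compare that with the block device's size.
fs_blocks=$(dumpe2fs -h /dev/md0 2>/dev/null | awk -F': *' '/^Block count/ {print $2}')
fs_bsize=$(dumpe2fs -h /dev/md0 2>/dev/null | awk -F': *' '/^Block size/ {print $2}')
dev_bytes=$(blockdev --getsize64 /dev/md0)
echo "filesystem: $(( fs_blocks * fs_bsize )) bytes"
echo "device:     $dev_bytes bytes"
# tune2fs -l /dev/md0 | grep -w 64bit   # empty output means the 64bit feature is off
```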

You need to recreate the filesystem using a filesystem type which can actually reach 20 TB (about 18.2 TiB), such as XFS.

You may be able to change the filesystem type without losing data using the fstransform utility, which is packaged in the EPEL repository. Note that it requires some free space to operate, so because your filesystem is already full, you should first run resize2fs as above to give it some room to work with.
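
Either path might look roughly like the following. This is a hypothetical sketch only: both options are disruptive, option B destroys the current filesystem, and the 15T intermediate size is just an assumption that stays under the 16 TiB non-64bit ext4 limit. Take a verified backup first and check the fstransform documentation before running it.

```shell
# Option A: convert in place with fstransform (assumes the EPEL package).
umount /mnt/wsraid
e2fsck -f /dev/md0          # resize2fs requires a recently checked filesystem
resize2fs /dev/md0 15T      # grow part-way to give fstransform working room
fstransform /dev/md0 xfs    # convert the filesystem to XFS in place

# Option B: back up, recreate as XFS, restore (simpler, needs a full backup).
# mkfs.xfs -f /dev/md0
# mount /dev/md0 /mnt/wsraid    # then restore your data and update fstab:
# /dev/md0    /mnt/wsraid    xfs    defaults    0 2
```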

Michael Hampton