I had a RAID 6 array with multiple partitions, set up through the Cockpit UI. There was one partition in particular that I wanted to extend from 10TB to 11TB using the available free space, so I attempted it with "growpart /dev/md127p6 1". Afterwards I noticed I could no longer access some of the mount points under this array (two of them, actually).
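In hindsight I suspect my growpart arguments were wrong: as I understand it, growpart takes a whole-disk device plus a partition number, so growing partition 6 of the array should presumably have looked something like this (a sketch, not what I actually ran):

    growpart -N /dev/md127 6   # dry run: report what would change without touching anything
    growpart /dev/md127 6      # grow partition 6 of /dev/md127 into the free space

Instead I passed the partition itself as the disk and "1" as the partition number.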
At that point I decided to restart (I checked /proc/mdstat first and nothing was in progress). Once the server came back up, all of the partitions on this RAID were gone.
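For completeness, this is roughly the check I did before restarting (nothing was resyncing or reshaping at the time):

    cat /proc/mdstat            # no resync/reshape/recovery shown for md127
    mdadm --detail /dev/md127   # reported State : clean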
I also noticed the size of the RAID had changed, from 189TiB down to 143TiB. Obviously I screwed something up, but I'm wondering if anyone has any ideas before I start over.
mdadm --detail /dev/md127

    /dev/md127:
               Version : 1.2
         Creation Time : Mon May 17 20:04:04 2021
            Raid Level : raid6
            Array Size : 153545080832 (146432.00 GiB 157230.16 GB)
         Used Dev Size : 11811160064 (11264.00 GiB 12094.63 GB)
          Raid Devices : 15
         Total Devices : 15
           Persistence : Superblock is persistent

         Intent Bitmap : Internal

           Update Time : Mon Aug  2 20:05:13 2021
                 State : clean
        Active Devices : 15
       Working Devices : 15
        Failed Devices : 0
         Spare Devices : 0

                Layout : left-symmetric
            Chunk Size : 4K

    Consistency Policy : bitmap

                  Name : storback:backups
                  UUID : c8d289dd:2cb2ded3:cbcff4cd:1e7367ee
                Events : 150328

        Number   Major   Minor   RaidDevice State
           0       8       32        0      active sync   /dev/sdc
           1       8       48        1      active sync   /dev/sdd
           2       8       64        2      active sync   /dev/sde
           3       8       80        3      active sync   /dev/sdf
           4       8       96        4      active sync   /dev/sdg
           5       8      112        5      active sync   /dev/sdh
           6       8      128        6      active sync   /dev/sdi
           7       8      144        7      active sync   /dev/sdj
           8       8      160        8      active sync   /dev/sdk
           9       8      192        9      active sync   /dev/sdm
          10       8      176       10      active sync   /dev/sdl
          11       8      208       11      active sync   /dev/sdn
          12       8      224       12      active sync   /dev/sdo
          13       8      240       13      active sync   /dev/sdp
          14      65        0       14      active sync   /dev/sdq
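For what it's worth, the numbers in that output are internally consistent, assuming I'm reading mdadm's KiB-based sizes correctly:

    153545080832 KiB = 146432 GiB ≈ 143 TiB                  (Array Size)
    146432 GiB / 13 data disks (15 - 2 parity) = 11264 GiB   (Used Dev Size)

What jumps out at me is that the Used Dev Size is now exactly 11264 GiB (11 TiB), suspiciously close to the 11TB I was trying to grow that one partition to.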