I am facing a rather complex problem, and while I have found solutions to the individual steps (and have already applied some of them in other contexts), I am not sure how to do the whole procedure properly. The system is a 24/7 development Ubuntu 12.04 server; data loss is absolutely unacceptable, but downtime is fine. Right now the server runs a software RAID-6 across five 2.5 TB disks, for 7.5 TB of usable storage. One disk is beginning to fail, and since space is getting scarce, we decided to increase the disk capacity while replacing it. Summing up...
NOW: 5 disks of 2.5 TB, software RAID-6 (7.5 TB usable), LVM on top; /boot is on a separate drive, all other file systems live on this RAID
AFTER: 4 disks of 4 TB, software RAID-6 (8 TB usable, with the option to add more disks in the future), the same file hierarchy on top
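For anyone checking my arithmetic, RAID-6 reserves two disks' worth of capacity for parity, so usable space is (number of disks - 2) times the disk size. A quick sketch of the numbers above (the helper function is just for illustration):

```python
def raid6_capacity(n_disks: int, disk_tb: float) -> float:
    """Usable capacity of a RAID-6 array in TB (two disks reserved for parity)."""
    assert n_disks >= 4, "RAID-6 needs at least 4 member disks"
    return (n_disks - 2) * disk_tb

print(raid6_capacity(5, 2.5))  # current array: 7.5 TB
print(raid6_capacity(5, 4.0))  # after replacing all five disks: 12.0 TB
print(raid6_capacity(4, 4.0))  # target four-disk array: 8.0 TB
```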
I know how to increase the capacity by replacing each of the five disks one by one (it will take ages, but that is acceptable). After the last disk is fully synced, the RAID volume should be able to grow to 12 TB, and LVM should then be able to take advantage of the new space. Please correct me if I am wrong here. However, since we want to end up with only four drives, I am unsure how to get there. The new RAID volume would still be bigger than what LVM currently occupies, but I am not sure about the migration procedure. Unfortunately, there are only ~600 GB of free space, so I cannot shrink the existing RAID-6 first, although I could imagine freeing up space by copying data to an external drive.
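The per-disk replacement cycle I have in mind looks roughly like the following; `/dev/md0` and `/dev/sdb1` are placeholder names for my array and the failing member, so substitute your own. As far as I understand, the array does not grow on its own after the last resync; an explicit `--grow` is needed, followed by `pvresize` so LVM sees the larger physical volume:

```shell
# 1. Mark the failing member faulty and pull it from the array.
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1

# 2. Partition the new 4 TB disk identically, then add it;
#    the array starts rebuilding onto it.
mdadm /dev/md0 --add /dev/sdb1

# 3. Wait for the resync to finish before touching the next disk.
cat /proc/mdstat

# After ALL members have been replaced: grow the array to the
# new maximum size (this does not happen automatically)...
mdadm --grow /dev/md0 --size=max

# ...and let LVM take advantage of the larger physical volume.
pvresize /dev/md0
```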