As of Linux kernel 2.6.11 you can create resizable ext3 shares. I recently did this very thing you're asking about on a CentOS 5.4 server, which had a RAID1 using a 300GB and a 500GB drive. I wanted to upgrade the mirror, so I bought a 1TB drive to replace the 300GB drive.
First, here's how I originally created the RAID1 with the 300GB + 500GB drives.
I typically use fdisk to create a single partition on each drive, marking the partition type as fd (Linux RAID autodetect). Next I use mdadm to create a software RAID1 array:
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdb1
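If you want a quick sanity check that the array came up, /proc/mdstat is the place to look. A minimal sketch of that check, run here against a canned sample of mdstat output rather than the live file (the sample text is illustrative, not from my server):

```shell
#!/bin/sh
# Check whether a given md array is listed as active in mdstat-style text.
# In real use you'd feed it "$(cat /proc/mdstat)"; here a canned sample is used.
array_is_active() {
    # $1 = array name (e.g. md0), $2 = mdstat contents
    printf '%s\n' "$2" | grep -q "^$1 : active"
}

sample_mdstat="Personalities : [raid1]
md0 : active raid1 hdb1[1] hda1[0]
      293033536 blocks [2/2] [UU]
unused devices: <none>"

if array_is_active md0 "$sample_mdstat"; then
    echo "md0 is active"
fi
```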
Next I lay an LVM volume group (I named it lvm-raid) on top of the RAID1 array. If vgcreate complains that /dev/md0 isn't a physical volume, run pvcreate /dev/md0 first:
vgcreate -s 8M lvm-raid /dev/md0
Determine how many physical extents the volume group has available: run vgdisplay lvm-raid and look for the "Total PE" line. In my example it was 59617. With this info I could create a logical volume spanning the whole volume group:
lvcreate -l 59617 lvm-raid -n lvm0
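Rather than copying the "Total PE" number over by hand, you can pull it out of vgdisplay's output with awk. A small sketch, run here against canned vgdisplay text rather than a live volume group:

```shell
#!/bin/sh
# Extract the "Total PE" count from vgdisplay-style output so it can be
# fed straight to lvcreate. The sample text mimics vgdisplay's layout.
total_pe() {
    printf '%s\n' "$1" | awk '/Total PE/ {print $3}'
}

sample_vgdisplay="  --- Volume group ---
  VG Name               lvm-raid
  PE Size               8.00 MB
  Total PE              59617
  Free  PE / Size       59617 / 465.76 GB"

pe=$(total_pe "$sample_vgdisplay")
echo "lvcreate -l $pe lvm-raid -n lvm0"
```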
Finally I put the ext3 filesystem on the logical volume:
mkfs.ext3 /dev/lvm-raid/lvm0
...And here's how I migrated the RAID1 to the 500GB + 1TB drives
I added the 1TB drive as a hot spare to the RAID1 and got it synced in as a member. Once it was synced up, I failed and then removed the 300GB drive. That left the 500GB drive as the smallest array member, which allowed me to bring the RAID1 array up to 500GB. My notes around this step are lacking detail, but I think I did the following:
mdadm --manage /dev/md0 --add /dev/sda1
mdadm --manage /dev/md0 --fail /dev/hda1
...wait for it to sync, checking progress periodically with...
cat /proc/mdstat
...sync is done...
mdadm /dev/md0 -r /dev/hda1
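That wait-and-check step can be scripted instead of re-running cat by hand. This sketch shows the idea against two canned mdstat samples (one mid-recovery, one finished); the commented-out loop is how you'd point it at the live /proc/mdstat:

```shell
#!/bin/sh
# Detect whether mdstat-style text still shows a resync/recovery in progress.
sync_in_progress() {
    printf '%s\n' "$1" | grep -Eq 'resync|recovery'
}

syncing="md0 : active raid1 sda1[2] hdb1[1]
      293033536 blocks [2/1] [_U]
      [==>..................]  recovery = 12.6% (37000000/293033536)"

done_syncing="md0 : active raid1 sda1[2] hdb1[1]
      293033536 blocks [2/2] [UU]"

# Against the live kernel state you'd loop, e.g.:
#   while sync_in_progress "$(cat /proc/mdstat)"; do sleep 60; done
sync_in_progress "$syncing" && echo "still syncing"
sync_in_progress "$done_syncing" || echo "sync complete"
```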
Grow the RAID to the maximum size of the remaining array members. In this case we have a 500GB and a 1TB drive in the RAID1, so we can grow the RAID to 500GB:
mdadm --grow /dev/md0 --size=max
Once the RAID array was up to 500GB, I ran the following commands to make use of the extra space within LVM and, eventually, the actual ext3 share.
First I grew the physical volume to take up the extra space:
pvresize /dev/md0
Next I ran pvdisplay /dev/md0 to determine how many "free extents" were now available. In my case it was 23845, so I ran this command to absorb them into the logical volume:
lvextend -l +23845 /dev/lvm-raid/lvm0
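As a sanity check on the numbers: with the 8M extent size chosen at vgcreate time, those free extents should line up with the extra capacity the bigger mirror provides. A quick back-of-the-envelope in shell arithmetic:

```shell
#!/bin/sh
# Convert the free-extent count reported by pvdisplay into space,
# using the 8 MiB extent size set with vgcreate -s 8M.
free_extents=23845
extent_mib=8

free_mib=$((free_extents * extent_mib))
free_gib=$((free_mib / 1024))
echo "$free_extents extents * ${extent_mib}MiB = ${free_mib}MiB (~${free_gib}GiB)"
```

If your LVM2 version supports it, lvextend -l +100%FREE /dev/lvm-raid/lvm0 absorbs whatever is free without counting extents by hand.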
Finally I ran resize2fs to grow the ext3 share into the extra space:
resize2fs /dev/lvm-raid/lvm0
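You can cross-check resize2fs's block count against what df reports; the few gigabytes of difference are filesystem metadata overhead (journal, inode tables). A quick conversion:

```shell
#!/bin/sh
# Convert resize2fs's count of 4 KiB blocks into KiB and GiB,
# to compare against the Size column df prints for the share.
blocks=122095616
block_kib=4

total_kib=$((blocks * block_kib))
total_gib=$((total_kib / 1024 / 1024))
echo "$blocks blocks * ${block_kib}KiB = ${total_kib}KiB (~${total_gib}GiB)"
```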
Here's what resize2fs looks like when it's running:
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/lvm-raid/lvm0 is mounted on /export/raid1; on-line resizing required
Performing an on-line resize of /dev/lvm-raid/lvm0 to 122095616 (4k) blocks.
The filesystem on /dev/lvm-raid/lvm0 is now 122095616 blocks long.
And when it's all done, df now shows this:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/lvm--raid-lvm0
459G 256G 180G 59% /export/raid1