It looks like /dev/sdb hasn't entirely died, but might have some
intermittent faults or some bad blocks. You can probably fail the
partition and then add it back to your mirror, still using the same
disk that had the problem.
Here is how:
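If the kernel hasn't already kicked /dev/sdb2 out of the array, mark
it failed first (mdadm won't hot-remove a member that is still
active); this is the "fail" half of the fail-and-add above:
mdadm --fail /dev/md1 /dev/sdb2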
mdadm --remove /dev/md1 /dev/sdb2
(it may complain that /dev/sdb2 isn't attached; that's fine, it just
means the kernel already dropped it)
mdadm --add /dev/md1 /dev/sdb2
Then do a:
cat /proc/mdstat
and you can watch it rebuild, including an estimate of how long it will take.
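While the rebuild runs, the output looks something like this (the
array size, percentage, and speed here are made up for illustration):

Personalities : [raid1]
md1 : active raid1 sdb2[2] sda2[0]
      487731200 blocks [2/1] [U_]
      [===>.................]  recovery = 16.4% (80050816/487731200) finish=88.5min speed=76721K/sec

unused devices: <none>

When it finishes, the [U_] becomes [UU], meaning both halves of the
mirror are up.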
See if that works. If not (/dev/sdb2 is really damaged), you need to
fail sdb's partition in every array it belongs to, remove the drive,
install a replacement of identical (or larger) size, partition the
new drive to match, and add the new partitions back to the mirrors.
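Roughly, that whole sequence looks like the following. I'm assuming a
typical layout with a second array, /dev/md0 on /dev/sdb1, alongside
/dev/md1; substitute your real arrays and partitions:
mdadm --fail /dev/md0 /dev/sdb1
mdadm --remove /dev/md0 /dev/sdb1
mdadm --fail /dev/md1 /dev/sdb2
mdadm --remove /dev/md1 /dev/sdb2
(shut down, swap in the new drive, boot back up)
sfdisk -d /dev/sda | sfdisk /dev/sdb
mdadm --add /dev/md0 /dev/sdb1
mdadm --add /dev/md1 /dev/sdb2
The sfdisk line copies the partition table from the good disk onto
the new one; that works for MBR-style tables (for GPT disks, sgdisk -R
does the same job).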
If you are not sure which physical drive is sdb, try this:
dd if=/dev/sdb of=/dev/null count=40000
Assuming you have an LED on the front of your server to indicate
disk activity, the one whose green light glows steadily during the
above disk dump is sdb. (Or you could flip this logic around and
make sda glow green instead, marking the drive NOT to remove.)
It is safe to Control-C either dd command any time after you've
figured out which disk is which. dd here is merely reading a stream
off the disk and throwing it away; it doesn't write anything to the
drive, unless you get if= and of= mixed up.