for RAID-1, you're almost certainly much better off with linux mdadm software raid than with any hardware raid controller - just use a simple JBOD HBA with mdadm. (e.g. LSI make a very nice 8-port SAS 6Gbps HBA - which also does SATA 6Gbps, of course - for around $200.)
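to give an idea of how simple it is, here's a rough sketch of creating a RAID-1 with mdadm (device names and the mdadm.conf path are just examples - adjust for your distro and disks):

    # create a two-disk RAID-1 array (example devices - use your own partitions)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    # record the array so it gets assembled at boot (path is /etc/mdadm.conf on some distros)
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    # then just put a filesystem on it and mount it as usual
    mkfs.ext4 /dev/md0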
the only real advantage to HW raid is if it has a non-volatile (battery-backed or the newer flash-based) write cache. and it has to be non-volatile (to protect against crashes or power failures), otherwise it's no better than linux's disk caching anyway. not all hw raid cards have battery-backup or flash installed, and not all even have it as an option.
and even then you can get much the same effect by using an SSD as a write cache with mdadm. e.g. bcache and facebook's flashcache are two implementations of the idea. they're new, so i wouldn't risk using them on a production system just yet (OTOH, facebook has probably done extensive real-world testing of flashcache under extremely high loads).
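just for illustration, a bcache setup is roughly this (purely a sketch - device names are examples, and as i said i wouldn't put this on a production box yet):

    # register the md array as the backing device and the SSD as the cache device
    make-bcache -B /dev/md0
    make-bcache -C /dev/sdc
    # attach the cache set to the backing device (UUID comes from bcache-super-show /dev/sdc)
    echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
    # writeback mode is what gives you the write-cache effect
    echo writeback > /sys/block/bcache0/bcache/cache_mode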
(btw, if you're talking about fakeraid - the kind of raid you get on cheap cards or built in to mainstream motherboards - then forget about it. it's nowhere near as good as linux's software raid.)
you do seem to have a real problem that needs to be solved, though.
it sounds as though you've got problems with one of your disks (in which case, replace it ASAP), or possibly with the sata port that it is plugged in to. try plugging the disk into another port.
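a quick way to check whether a disk really is dying is to look at its SMART data (from the smartmontools package; /dev/sdX being whichever disk you suspect):

    # overall health verdict plus the raw attributes (look for reallocated/pending sectors)
    smartctl -H /dev/sdX
    smartctl -A /dev/sdX
    # run a short self-test, then check the result a few minutes later
    smartctl -t short /dev/sdX
    smartctl -l selftest /dev/sdX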
also check that all the cables are securely plugged in, and that your power supply is adequate for your system (most will be, but if you've got, say, a high-end graphics card drawing 200W plus the motherboard and several drives on a 300W PSU, then you'll need a better PSU).
hope that helps.
to get a better answer, you'll need to provide more details, like:
- what kind of system (esp. the motherboard if it's a whitebox clone rather than a name-brand server)
- what kind of disk controller
- samples of error messages - e.g. what does the kernel say when it kicks a disk out of the array (see below for one way to grab them).
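something like this will pull out the relevant messages (exact log file location varies by distro):

    # recent kernel messages about the disks and the md layer
    dmesg | grep -iE 'ata|sd[a-z]|md'
    # the same from syslog, plus the current state of the arrays
    grep -iE 'ata|raid|md' /var/log/syslog | tail -n 100
    cat /proc/mdstat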
PS: as a direct answer to your question, undoing a raid-1 is easy. just edit /etc/fstab so that it mounts the partition directly, and re-configure grub to suit. e.g. if /dev/md0 was made up of sda1 and sdb1, then you can just mount /dev/sda1 (or sdb1) instead of /dev/md0. that's one of the really nice things about software raid-1 (dunno if you can do the same with hw raid cards - they tend to use weird proprietary on-disk formats).

you should then be able to plug the other drive into the hw raid card and set it up as a degraded raid-1, reboot with /dev/sda1 as root, format the degraded array, mount it, rsync your filesystem over to it, and make sure the grub MBR is installed on it. you'll probably need to edit /etc/fstab again after you copy it. then reboot to use the degraded array as your root fs.
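a very rough sketch of that, assuming the hw card's degraded array shows up as /dev/sdc with one partition, you're on ext4, and you're using grub2 (all of those are assumptions - adjust to your setup):

    # booted with / mounted directly from /dev/sda1, hw card's array visible as /dev/sdc
    mkfs.ext4 /dev/sdc1
    mount /dev/sdc1 /mnt
    # copy the running system across, skipping virtual filesystems
    rsync -aAXH --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/mnt/*"} / /mnt/
    # point the copied fstab at the new root device, then install grub's MBR on the new disk
    vi /mnt/etc/fstab
    grub-install --boot-directory=/mnt/boot /dev/sdc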
if (and only if) that works, you can shut down, pull the other drive out of the non-raid slot, plug it into the hw raid card, and add it to the raid-1 array. then reboot, and you're done.
NOTE: DON'T BOTHER DOING THIS if the "raid" card is fakeraid. it's not real hardware raid, but has all the disadvantages (and more) of hw raid without ANY of the advantages. software raid is much better.
Rick Moen has a great page on linux sata & raid controllers at http://linuxmafia.com/faq/Hardware/sata.html. it'll explain why fakeraid is worse than useless.