I have two-ish questions here based on a common setup. A quick disclaimer: I am using FreeNAS and have not fully digested ZFS terminology, and FreeNAS butchers ZFS terminology in its UI anyway. I'll accept answers that use the terminal or the FreeNAS UI (bonus points for both ;)).
I have a single volume (zpool?) with one mirror vdev made up of two 3 TB disks.
What is the proper procedure to physically remove one of the disks and then put it back?
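My guess, pieced together from skimming the zpool man page, is something like the sketch below, where `tank` and `ada1` are placeholders for my actual pool and disk names — but I'd like confirmation (and the FreeNAS UI equivalent):

```
# "tank" and "ada1" are placeholders for my pool and the disk I want to pull.
# Tell ZFS the disk is going away so it isn't treated as a surprise failure.
zpool offline tank ada1

# ...physically pull the disk, do whatever needs doing, slot it back in...

# Rejoin the disk to the mirror; ZFS should resilver whatever writes it missed.
zpool online tank ada1

# Watch the resilver and confirm the pool returns to ONLINE.
zpool status tank
```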
Probably unwisely, I removed one of the disks without running any commands first. I was immediately alerted that the volume was DEGRADED (expected). That status persisted after I put the drive back in; the system did not seem to recognize it as the removed drive, or if it did, I didn't know how to reattach it. I rebooted the server and it now shows the volume as healthy, but under Volume Manager -> Volume Stats there is a 182 in the checksum column for one of the drives and not the other (though I don't know whether that was there beforehand).
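For what it's worth, I can see those same counters from a shell with something like this (`tank` again being a placeholder for my pool name), so answers don't have to rely on the UI:

```
# Show per-device read/write/checksum error counters, plus any files
# with known errors when run with -v.
zpool status -v tank
```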
- How should I have handled the situation?
- Can this cause data loss or should ZFS recover from this situation fine?
- If it can cause data loss/corruption/what have you, how do I check for and recover from it? (My guess at the commands is sketched after this list.)
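For that last question, my guess (with `tank` still standing in for my pool name) is a scrub followed by clearing the counters, but I'd like to know whether that's right or whether more is needed:

```
# Re-read every block and verify it against its checksum; the mirror should
# repair anything bad from the other disk's copy.
zpool scrub tank

# Check scrub progress and results.
zpool status tank

# Once the scrub comes back clean, reset the error counters.
zpool clear tank
```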
Finally, additional bonus points for links to concise ZFS primers that aren't textbooks and don't delve into uselessly obscure parts of ZFS. :P