I need to store a very large amount of data on a hard disk. I can format it in basically any format. That data is critical, so I made a copy of it. However, if some file gets corrupted, I need to know immediately so that I can make a new copy from the only remaining good one.

The problem is that while it is easy to check whether the hard disk as a whole is healthy, the only way I can check that a file is not corrupted is to read it and hash it. For very large amounts of data this is nearly infeasible: I can't afford 10 hours of reading and hashing just to check the integrity of all the files. Moreover, continuously reading all the data would keep the hard disk spinning and could wear it out. It seemed reasonable to me, however, that some form of check could be implemented automatically by the file system itself.
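For reference, this is roughly what my current "read and hash" check looks like: a minimal sketch in Python that builds a SHA-256 manifest of every file and re-verifies it later (the hash choice, script name, and manifest format are just my own choices, not any standard tool). Running this over the whole disk is exactly the hours-long full read I'd like to avoid.

```python
#!/usr/bin/env python3
# Sketch of the "read and hash" integrity check: build a SHA-256 manifest
# of every file under a directory, then re-run later to detect corruption.
# The manifest format and names here are my own assumptions.
import hashlib
import os
import sys

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so large files don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root, manifest_path):
    """Walk `root` and record 'hash  relative/path' lines in the manifest."""
    with open(manifest_path, "w") as out:
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                full = os.path.join(dirpath, name)
                rel = os.path.relpath(full, root)
                out.write(f"{sha256_of(full)}  {rel}\n")

def verify_manifest(root, manifest_path):
    """Re-hash every file listed in the manifest and report mismatches."""
    ok = True
    with open(manifest_path) as f:
        for line in f:
            expected, rel = line.rstrip("\n").split("  ", 1)
            if sha256_of(os.path.join(root, rel)) != expected:
                print(f"CORRUPTED: {rel}")
                ok = False
    return ok

if __name__ == "__main__":
    # Usage: python3 check.py build  /path/to/data manifest.txt
    #        python3 check.py verify /path/to/data manifest.txt
    mode, root, manifest = sys.argv[1], sys.argv[2], sys.argv[3]
    if mode == "build":
        build_manifest(root, manifest)
    else:
        sys.exit(0 if verify_manifest(root, manifest) else 1)
```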
I know that systems such as RAID exist to ensure data integrity, but don't those require additional hard disks?
So my question is: given that I know my hard disk is alive, how can I find out whether some data on it has been corrupted? And is there any way to make that data recoverable?