I want to hammer the I/O of some disks for an extended period of time and be alerted whenever a block can't be read (or some other symptom tells me there is a problem with the backend storage). There are benchmarking tools that write for a few seconds and show you results, but I want to do long-term testing.
So far what I can think of is writing to disk via dd and then reading that file back out to /dev/null. I would need to loop it so it keeps reading and writing after the initial run finishes. As for having insight into disk health, I suppose dd will terminate with an error if it can't read or write? Otherwise I may not know there is an issue.
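A minimal sketch of that loop, with a checksum comparison added so a bad read shows up even when dd itself exits cleanly. The file name and sizes are illustrative placeholders, not part of any real setup:

```shell
#!/usr/bin/env bash
# Write a file with dd, read it back, and compare checksums in a loop.
# GNU dd exits non-zero on a read/write error, so checking its exit status
# catches hard I/O failures; the sha256 comparison additionally catches
# silent corruption. TESTFILE/BS/COUNT are placeholders - use a file on
# the disk under test and a size well beyond RAM for a real soak test.
TESTFILE=testfile.bin
BS=1M
COUNT=4          # tiny here for illustration
pass=0
while true; do
    pass=$((pass + 1))
    if ! dd if=/dev/urandom of="$TESTFILE" bs="$BS" count="$COUNT" conv=fsync 2>/dev/null; then
        echo "pass $pass: WRITE FAILED"
        break
    fi
    sum_written=$(sha256sum "$TESTFILE" | cut -d' ' -f1)
    # Note: this re-read may be served from the page cache rather than the
    # disk; for a true on-disk read, drop caches first or use a file
    # larger than RAM.
    sum_read=$(dd if="$TESTFILE" bs="$BS" 2>/dev/null | sha256sum | cut -d' ' -f1)
    if [ "$sum_written" != "$sum_read" ]; then
        echo "pass $pass: READ MISMATCH"
        break
    fi
    echo "pass $pass: ok"
    if [ "$pass" -ge 2 ]; then break; fi   # bounded for illustration; remove for real soak testing
done
```

Stopping on the first failure keeps the error visible instead of scrolling past in an endless loop.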
The other idea is to run bonnie++ in a loop. It's hard to tell what it's doing in the background and how much RAM it is actually using instead of hitting the disk (they seem to work around this by telling you to write a data set larger than your RAM). The output it gives is also pretty hard to read. But it should suffice for writing and reading if I use a bash loop to run it constantly.
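The bash loop could look something like the sketch below. TARGET and SIZE are assumptions: point TARGET at a directory on the disk under test, and set SIZE to at least twice your RAM so the page cache can't absorb the workload (which is the caveat bonnie++'s own docs warn about):

```shell
#!/usr/bin/env bash
# Sketch: run bonnie++ repeatedly and log each pass, stopping on the
# first failure so it is easy to spot in the log.
TARGET=/mnt/disk-under-test   # assumption: mount point of the disk being tested
SIZE=32g                      # assumption: machine has at most 16 GB RAM
LOG=bonnie.log
pass=0
while true; do
    pass=$((pass + 1))
    echo "$(date) starting pass $pass" >> "$LOG"
    # -d: test directory, -s: file size, -u: user to run as (when root)
    if ! bonnie++ -d "$TARGET" -s "$SIZE" -u nobody >> "$LOG" 2>&1; then
        echo "$(date) bonnie++ FAILED on pass $pass" >> "$LOG"
        break
    fi
done
```

Timestamping each pass at least gives a rough timeline of when a failure occurred, even if the bonnie++ output itself is hard to read.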
Thoughts?