I'm trying to understand how file integrity monitoring works, and I feel like I'm missing something. From what I've read, cryptographic hashes of the files to be monitored are stored in a database. Then, periodically, the hashes for those files are recalculated and compared to check for changes. Here's my problem:
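For reference, the basic scheme I'm describing can be sketched with plain `sha256sum` (the monitored paths and the baseline location are just examples, not a real tool's layout):

```shell
# Build a baseline: record a cryptographic hash for each monitored file.
sha256sum /etc/passwd /etc/hosts > /var/lib/fim/baseline.sha256

# Later, recompute the hashes and compare against the baseline.
# --check reads the stored "hash  filename" lines and verifies each one;
# --quiet only prints files whose hash no longer matches.
sha256sum --check --quiet /var/lib/fim/baseline.sha256
```

The check exits non-zero if any file has changed, which makes it easy to drive from cron or a monitoring script.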
If the code that periodically checks the current hashes runs on the same server as the files being checked, and that server is compromised, couldn't the attacker modify the integrity checker (or its hash database) as well? That seems like it would defeat the whole point.
If, however, the checker is on another machine, you would have to transfer all of the monitored files from the server to the checker machine to calculate the hashes. That could be very time-consuming and bandwidth-intensive.
What am I missing?
Update: I ran across an interesting idea along this vein. For a similar problem, someone suggested using rsync to compare hashes. I know this isn't what rsync was made for, but after some initial testing it seems to work (and it's really fast). Thoughts?
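Concretely, the rsync approach I'm testing looks roughly like this: a dry run with checksum comparison, so nothing is actually transferred and only the list of differing files comes back (the host name and paths are placeholders):

```shell
# -r  recurse into the monitored directory
# -c  compare by full-file checksum instead of the default size+mtime check
# -n  dry run: report differences, transfer nothing
# -i  itemize exactly which files differ and why
rsync -rcni /monitored/dir/ trusted-host:/known-good-copy/
```

Because only checksums cross the wire, this sidesteps the bandwidth problem, though the compromised server is still the one computing its own side of the checksums.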