I have several mounts shared via NFS. They contain loads of files, from text files to RAW photos.
Issuing `find` on them is rather painful: even over a 1 GbE link it is nowhere near as smooth as on a local filesystem, even one backed by spinning rust.
Once I have run `find` on a given directory, subsequent reads are lightning fast. Keep in mind that I am talking about attributes such as size, access rights and placement in the directory tree, NOT the actual file contents.
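For example (the path and pattern here are just illustrative), running the same search twice shows the difference:

```bash
# First run: every directory entry's attributes have to be fetched over NFS.
time find /nfs/mountpoint/photos -name '*.CR2' > /dev/null

# Immediate second run: mostly served from the client-side attribute/dentry cache,
# so it finishes far faster.
time find /nfs/mountpoint/photos -name '*.CR2' > /dev/null
```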
At first I thought I could somehow parse the output of `ls -lR /nfs/mountpoint`, put it into a database and run my "finds" over the DB data, but maybe there is already something that can keep "snapshots" of a given remote filesystem's metadata, possibly refreshing them periodically?
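To make that idea concrete, here is roughly what I had in mind (a rough sketch only; the DB/table names and the mountpoint path are placeholders I made up):

```bash
# One slow pass over NFS to dump file metadata into a local SQLite DB.
sqlite3 meta.db 'CREATE TABLE IF NOT EXISTS files(path TEXT, size INTEGER, mode TEXT, mtime REAL);'

# find -printf emits one pipe-separated line per file: path|size|permissions|mtime.
# sqlite3's default list separator is "|", so .import can read it directly
# (this breaks on paths containing "|" or newlines, but it shows the idea).
find /nfs/mountpoint -printf '%p|%s|%M|%T@\n' \
    | sqlite3 meta.db '.import /dev/stdin files'

# Subsequent "finds" then run against the local DB instead of the NFS mount, e.g.:
sqlite3 meta.db "SELECT path FROM files WHERE size > 100*1024*1024 AND mode LIKE '-rw%';"
```

Re-running the dump from cron would be the "refresh periodically" part, but that is exactly the kind of wheel I'd rather not reinvent if a tool for this already exists.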
I'd like to emphasize that caching of file contents is beyond the scope of my question.