
I am currently experiencing very bad performance using the following on an NFS network folder:

time find . | while IFS= read -r f; do test -L "$f" && f=$(readlink -m "$f"); grp="$(stat -c %G "$f")"; perm="$(stat -c %A "$f")"; done

Question 1) Within the loop, permissions are checked using the variables grp and perm. Is there a way to lower the amount of disk I/O for this kind of check over the network (e.g. by reading all the metadata at once using find)?

Question 2) It seems like the NFS isn't tuned very well; the same operation over a similar network link via SSHFS takes only one third of the time. All parameters are auto-negotiated. Any suggestions?

fungs

2 Answers


Your line performs three calls for each file; a single stat plus parsing its output would be enough. For starters, re-design your script to call stat only once with stat -c "%n %G %A" ... if you need help with that, throw us a comment.
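A minimal sketch of that idea (assuming GNU stat/readlink and bash, as in the original loop) merges the two stat invocations into one and splits the result with read:

find . | while IFS= read -r f; do
    # resolve symlinks first, then fetch group and permissions in a single stat call
    test -L "$f" && f=$(readlink -m "$f")
    read -r grp perm <<< "$(stat -c '%G %A' "$f")"
    # ... check "$grp" and "$perm" here ...
done

This still spawns one stat process per file, so it only roughly halves the overhead; avoiding per-file processes entirely is what the find -printf approach in the other answer does.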

Janne Pikkarainen
  • Seems like I didn't quite check the f** manuals. Sorry. I posted a version using only find for meta-info. – fungs Jun 05 '12 at 16:24

Fastest solution I found during the last hour was:

failed=$(find -L . -printf "%p %g %M\n" | awk '{ if ($2 != "XYZ") { printf "%s", $1; exit 1 }; if (substr($3, 9, 1) != "-") { printf "%s", $1; exit 2 } }')
ret=$?
test "$ret" -ne 0 && echo "Error with file $failed"

which checks a group owner and a permission bit as an example. This version, which uses only find (no stat) while still following symbolic links, is faster by at least a factor of 100.
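If filenames may contain spaces, the whitespace-based field splitting above breaks. A NUL-delimited variant of the same find-only idea (just a sketch, assuming GNU find and bash, with the group name XYZ and the permission check kept as placeholders) prints the path last so it can contain anything:

find -L . -printf "%g %M %p\0" | while IFS= read -r -d '' rec; do
    grp=${rec%% *}; rest=${rec#* }       # first field: group name
    perm=${rest%% *}; path=${rest#* }    # second field: symbolic mode, remainder: path
    [ "$grp" != "XYZ" ] && { echo "Error with file $path"; break; }
    [ "${perm:8:1}" != "-" ] && { echo "Error with file $path"; break; }
done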

fungs