I'm traversing an entire partition, `stat()`'ing each file and then checking the returned values for mtime, size and uid against hashed values. `stat()`, however, is far too slow in Perl, and I'm wondering if there are any quicker alternatives I may be overlooking.
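Roughly, the loop looks like this (a simplified sketch with placeholder names; the real code loads and compares the stored values differently):

    use File::Find;

    # %stored stands in for the previously recorded mtime/size/uid values,
    # keyed by path; load_stored_values() is a placeholder.
    my %stored = load_stored_values();

    find(sub {
        my @st = stat($_) or return;     # one stat() per file
        return unless -f _;              # regular files only, reusing the stat buffer
        my ($uid, $size, $mtime) = @st[4, 7, 9];
        my $old = $stored{$File::Find::name} or return;
        print "$File::Find::name changed\n"
            if $mtime != $old->{mtime}
            || $size  != $old->{size}
            || $uid   != $old->{uid};
    }, '/the/partition');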


- Why are you doing this thing? I ask because it smells like a job for rsync or a good backup tool. – Schwern Jan 07 '10 at 21:17
- Show us your code - stat on its own is **not** slow. – Jan 07 '10 at 21:19
- If your filesystem IO **is** your bottleneck and you need it faster, you may consider hardware solutions - including more RAM for your filesystem cache, RAID arrays, and SSDs (the newest SLCs from Intel in particular can absolutely whip). – Jan 07 '10 at 21:39
6 Answers
When you call `stat` you're querying the filesystem and will be limited by its performance. For large numbers of files this will be slow; it's not really a Perl issue.
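One quick way to see this for yourself (a rough sketch; the glob pattern is just a placeholder): time two identical passes over the same files. The second pass is served from the kernel's cached inode data, so any large difference between the passes is disk time, not Perl overhead.

    use Time::HiRes qw(gettimeofday tv_interval);

    my @files = glob('/usr/share/man/man1/*');   # placeholder file set
    for my $pass (1, 2) {
        my $t0 = [gettimeofday];
        stat($_) for @files;                      # same syscalls both times
        printf "pass %d: %.3fs for %d files\n", $pass, tv_interval($t0), scalar @files;
    }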

- This is the best answer. `stat()` is a unix system call, and the perl function of the same name is just a (very thin!) wrapper around it. If it's slow, it's slow because of the required disk I/O, and that's not something you can fix. – Andy Ross Jan 07 '10 at 22:13
Before you go off optimizing `stat`, use `Devel::NYTProf` to see where the real slow-down is.
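Typical invocation, assuming Devel::NYTProf is installed from CPAN and your script is called scan_partition.pl (a placeholder name):

    perl -d:NYTProf scan_partition.pl   # run the script under the profiler; writes nytprof.out
    nytprofhtml                         # turn nytprof.out into an HTML report in ./nytprof/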
Also, investigate the details of how you've mounted the filesystem. Is everything local, or have you mounted something over NFS or something similar? There are many things that can be the problem, as other answers have pointed out. Don't spend too much time focussing on any potential problem until you know it's the problem.
Good luck,

`stat` is doing IO on each file, which can't be avoided if you want to read those data. So that'll be the limit on speed, and it can't be worked around in any other way that I can think of.
If you're repeatedly `stat`-ing the same file(s) then consider using `Memoize`.
    use Memoize ();

    # Wrap stat() so Memoize can cache its return values per filename.
    sub fileStat {
        my ($filename) = @_;
        return stat($filename);
    }

    # Subsequent calls with the same filename return the cached stat list
    # instead of hitting the filesystem again.
    Memoize::memoize('fileStat');
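For example, repeated lookups of the same path are then answered from the cache instead of the kernel (field indexes are as documented for stat; the path is a placeholder):

    # uid, size and mtime are elements 4, 7 and 9 of the list stat() returns.
    my ($uid, $size, $mtime) = (fileStat('/etc/passwd'))[4, 7, 9];

    # A second call with the same argument returns the cached list,
    # without another stat() syscall.
    my $mtime_again = (fileStat('/etc/passwd'))[9];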

- Using Memoize is not necessary; just do `@array = stat($file)` and get the values from it. – Jan 07 '10 at 21:18
- Memoize will store all the return values each time you call fileStat, not just a single call to stat. Yes, you could build your own cache for all of the stat return calls, but why do that when Memoize does it for you? – mopoke Jan 07 '10 at 21:44
- Repeatedly stat-ing the same files, though, will be doing so from the filesystem cache and thus be not nearly as slow as the disk-bound performance the poster is seeing from traversing a whole filesystem. I strongly suspect that Memoize will do no good here. – Andy Ross Jan 07 '10 at 22:14
- Since Memoize will allow you to build a huge cache (gigabytes if you have the RAM), it will in fact help out above and beyond the filesystem cache. However, what good is a cache if you're looking for recent changes? Use of Memoize may not be a good idea, because it would depend on the poster's use-case. – harschware Jan 07 '10 at 22:25
You've seen that `stat` is slow enough as it is, so don't call it more than once on the same file.
The perlfunc documentation on `-X` (the shell-ish file test operators) describes a nice cache for `stat`:

If any of the file tests (or either the `stat` or `lstat` operators) are given the special filehandle consisting of a solitary underline, then the stat structure of the previous file test (or stat operator) is used, saving a system call. (This doesn't work with `-t`, and you need to remember that `lstat` and `-l` will leave values in the stat structure for the symbolic link, not the real file.) (Also, if the stat buffer was filled by an `lstat` call, `-T` and `-B` will reset it with the results of `stat _`). Example:

    print "Can do.\n" if -r $a || -w _ || -x _;

    stat($filename);
    print "Readable\n" if -r _;
    print "Writable\n" if -w _;
    print "Executable\n" if -x _;
    print "Setuid\n" if -u _;
    print "Setgid\n" if -g _;
    print "Sticky\n" if -k _;
    print "Text\n" if -T _;
    print "Binary\n" if -B _;
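Applied to the values in the question, that cache means a single syscall per file can feed both the numeric fields and any later file tests (a small sketch; `$path` is a placeholder):

    my ($uid, $size, $mtime) = (stat $path)[4, 7, 9];   # one stat() syscall
    print "modified within a day\n" if -M _ < 1;        # reuses the cached buffer, no extra syscall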

- If you are on *NIX, you can just use `ls` and parse the output, I should think.
- As Ether mentioned, `find` is possibly a good alternative if you just want to make decisions on what you stat (see the sketch after this list).
- But size, date, and uid should all be available from `ls` output.
- While date and size are available from the `dir` command on a Windows platform.
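If you do go the external-tool route on Linux, GNU find can print the path, size, mtime and uid itself in a single pass, so Perl never issues a per-file stat. A sketch assuming GNU find (the partition path is a placeholder):

    # %p = path, %s = size in bytes, %T@ = mtime as epoch seconds (may have a
    # fractional part), %U = numeric uid; \t and \n are interpreted by find itself.
    open my $fh, '-|', 'find', '/the/partition', '-xdev', '-type', 'f',
        '-printf', '%p\t%s\t%T@\t%U\n'
        or die "can't run find: $!";
    while (my $line = <$fh>) {
        chomp $line;
        my ($path, $size, $mtime, $uid) = split /\t/, $line;
        # ... compare against the stored values here ...
    }
    close $fh or warn "find exited with an error: $?";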

- On UNIX / Linux, `ls` and `find` will also use the `stat` syscall via the C library method. If these approaches improve performance it is not because of `stat` *per se*. – Stephen C Jan 07 '10 at 22:24
- @Stephen C: It might call `stat` more efficiently though. I don't know. – Axeman Jan 07 '10 at 23:51