I am using the lexical function f$search in many of my DCL scripts and never had a problem until a directory grew to around 10k files. Now f$search becomes slow when searching through that many files, and I suspect there is a real performance impact. Does f$search really slow down when a directory contains a huge number of files, or is there some other reason for this slowness? If so, what is the probable cause? Please let me know if any other information is required.
2 Answers
Yes, f$search on a large directory can be slow. It can be slowed down by its own activity or by outside directory activity. Directories are simple sequential files with one record per file name, kept in alphabetical order. They start out 'dense', but over time become 'sparse'. If a new name is added to a block in the middle and that block is too full, the rest of the directory is shuffled up to make room, which can take hundreds of IOs and blocks F$SEARCH. If the last entry in a block is removed, the rest is shuffled down. The shuffle used to be one block at a time; around version 6.2 it became 32 blocks ($ MCR SYSGEN SHOW ACP_MAXREAD). So it all depends. Please provide a more pertinent description of the slowdown:
- OpenVMS version
- Search pattern: wildcard or specific (better)?
- Restarting each search, or in a loop with a context (better!)?
- Typical name pattern? Lots of variation in the first 4 or 5 characters is best.
Pertinent performance data would also help, perhaps from T4, or at least an indication from MONI FILE and MONI FCP.
Good luck! Hein
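To illustrate the "loop with a context" point above, here is a minimal DCL sketch (disk, directory, and file names are made up). The optional second argument to F$SEARCH is a context stream number; keeping it the same across iterations lets RMS continue the directory scan where it left off instead of restarting from the top on every call:

$ loop:
$   file = f$search("DISK:[DATA]*.DAT",1)
$   if file .eqs. "" then goto done
$   write sys$output file
$   goto loop
$ done:

With no second argument, each differently-spelled F$SEARCH call starts a fresh scan, which is where much of the cost on a large directory comes from.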

As Hein said, it is bad practice to have 10000 files in a directory. Do you really need to have all those files online? Check LIBRARY/INSERT for storing such a high number of files in a convenient way. – user2915097 Jun 10 '16 at 06:53
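A sketch of the LIBRARY approach the comment suggests (the library and file names here are hypothetical): a text library stores many small files as modules inside a single container file, so the directory holds one entry instead of thousands:

$ library/create/text disk:[arc]notes.tlb
$ library/text/insert disk:[arc]notes.tlb report1.txt
$ library/text/list disk:[arc]notes.tlb
$ library/text/extract=report1/output=report1.txt disk:[arc]notes.tlb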
Consider an application change. With 10K files in one directory, records in a file are often better than files in a directory; the directory subsystem is not really meant to be used that way (sorry, not directly helpful). Newer versions of VMS allow a directory to be preallocated with a larger number of blocks ($ CREATE/DIR/ALLOCATION=nnnnn). This can help.
For remediation, you can create a new directory with a sizable allocation (check how large the old directory file is first), then rename the files from the old directory into the new one.
$ create/dir/alloc=500 disk:[new]
$ rename [old]*.*;* disk:[new]*.*;*
Then delete the old directory (should be empty) and rename the new directory to the old directory.
Obviously, only do the above if no process is creating or accessing files in the directory.
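A sketch of that final cleanup, using the directory names from the example above; note that directories are normally created without delete access, so protection has to be opened up before the (now empty) OLD.DIR can be deleted:

$ set security/protection=(owner:rwed) disk:[000000]old.dir
$ delete disk:[000000]old.dir;1
$ rename disk:[000000]new.dir disk:[000000]old.dir

The delete will fail harmlessly if any file is still left in [OLD], which is a useful safety check.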
