
You can tune NTFS with different parameters in the registry, and a TechNet article states that you can increase file performance by setting NtfsDisable8dot3NameCreation to 1 in the registry.
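
For reference, here is a minimal sketch of how the tweak is applied (using Python's winreg module; the registry path and value name are the documented ones, but treat the snippet as illustrative). It needs an elevated prompt, and only files created after a reboot are affected:

    # Sketch: disable 8.3 short-name creation via the registry.
    # Run from an elevated (administrator) prompt; a reboot is needed
    # before the setting affects newly created files.
    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Control\FileSystem"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        # 1 = do not generate 8.3 short names for new files
        winreg.SetValueEx(key, "NtfsDisable8dot3NameCreation", 0,
                          winreg.REG_DWORD, 1)

(The built-in command fsutil behavior set disable8dot3 1 accomplishes the same thing.)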

In real life, how much is gained, and is it worth giving up legacy compatibility for?

Mikael Svenson

3 Answers


Per the following article, it starts to help at 300,000+ files: http://oreilly.com/pub/a/windows/2005/02/08/NTFS_Hacks.html

Just remember that you might break older apps by doing so.

How much you gain depends on too many variables; I suggest you run Performance Monitor before and after you change the registry setting.
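
To make that concrete, a rough micro-benchmark along these lines (a sketch; the test directory and file count are arbitrary assumptions) can be run once before the change and once after, on a fresh directory, with a reboot in between:

    # Sketch: time creation of many long-named files in one directory.
    # TARGET_DIR and FILE_COUNT are arbitrary example values.
    import os
    import time

    TARGET_DIR = r"C:\temp\ntfs_bench"
    FILE_COUNT = 100_000

    os.makedirs(TARGET_DIR, exist_ok=True)

    start = time.perf_counter()
    for i in range(FILE_COUNT):
        # Long names sharing a prefix are reported to be the worst
        # case for short-name generation.
        name = f"averylongsharedprefix_{i:06d}.txt"
        with open(os.path.join(TARGET_DIR, name), "w"):
            pass
    elapsed = time.perf_counter() - start

    print(f"created {FILE_COUNT} files in {elapsed:.1f}s "
          f"({FILE_COUNT / elapsed:.0f} files/s)")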

cwheeler33

A valuable resource addressing the performance impact is this comment on Helge Klein's blog entry "Why disabling the creation of 8.3 DOS file names will not improve performance (or will it?)": http://www.sepago.de/e/helge/2008/09/22/why-disabling-the-creation-of-83-dos-file-names-will-not-improve-performance-or-will-it#comment-714

All that said, these are arguments in favor of disabling 8.3 name creation on file servers, or on servers where large numbers of files exist in (or are created in) individual folders (see the numbers reported above). There are certainly arguments against disabling it on client versions of Windows, where backward-compatibility issues may need to be considered.

charlie arehart

It really becomes interesting when there are a lot of files/names in a directory, particularly when many long file names share the same leading characters. File creation times go up in these cases, because the algorithm that creates a short name for a long file name must, naturally, choose a short name that is not already in the index. Enumeration of the directory can also become more expensive.
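
To illustrate (a simplified model, not the exact NTFS algorithm; real short-name generation switches to a hash-based tail after a few collisions), the probing behavior looks roughly like this:

    # Simplified model of 8.3 short-name generation: take the first six
    # filename characters and append ~1, ~2, ... until the candidate is
    # absent from the directory index. With many long names sharing a
    # prefix, each new file re-probes all earlier numeric tails.
    def make_short_name(long_name: str, index: set) -> str:
        base = "".join(c for c in long_name.upper() if c.isalnum())[:6]
        n = 1
        while True:
            candidate = f"{base}~{n}"
            if candidate not in index:  # on NTFS this is an index probe
                index.add(candidate)
                return candidate
            n += 1

    index = set()
    for i in range(5):
        print(make_short_name(f"averylongsharedprefix_{i}.txt", index))
    # AVERYL~1, AVERYL~2, ... creation cost grows with the file count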

Generally, if there is a lot of metadata activity in a very large directory, and no apps that depend upon short names are present, it can help a good deal. It's hard to quantify, since YMMV.

jrtipton
  • Could you please define "a lot of files/names"? Is it 1,000, 10,000, 1 million? And how much extra time are we talking about to create a file: 1 millisecond, less, more (on a recent machine with fast disks and several cores)? – Mikael Svenson Oct 28 '10 at 08:21
  • I don't have any concrete data handy on the topic, no, sorry. My *guess* would be that it starts becoming measurable in terms of how long each affected operation takes in a very active directory with a file count on the order of 10,000 or more. Really that's just a guess. – jrtipton Oct 28 '10 at 17:22