5

I support an internal application that stores files on a Windows Server 2003 SP2 file share. Because of how it is currently configured to store files, one folder has ~116,000 files in it (another has ~65,000, and the remaining folders have less, but still several thousand each). Writing files to the share has become very slow.

The file layout is configurable to an extent, so I'm trying to come up with a better plan. Does anyone have any experience with how many items per folder SMB can handle before it starts to become unusable? In this case, it's been slow for quite a while, but it didn't become unbearable until the folder exceeded 100,000 files.

Jeff Hardy
  • 175
  • 1
  • 6

3 Answers

4

It depends more on bandwidth and latency (especially latency) than on the raw number of files or on how the directory-enumeration algorithms scale. There is no "magic number", I guess, is what I'm saying.

The SMB protocol is hideous in how many round trips it requires. With that number of files, doubling the latency would make things many times more than twice as slow, for example.
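To make the round-trip cost concrete, here is a rough back-of-envelope model in Python. The batch size and latency figures are illustrative assumptions, not measurements, and the model is only a lower bound: it ignores the per-file metadata queries and timeouts that make real SMB1 behavior worse than linear.

```python
# Rough model: SMB1 directory enumeration returns entries in batches,
# and each batch costs at least one network round trip.
# The batch size and latencies below are illustrative assumptions.

def enumeration_time(num_files, latency_ms, files_per_round_trip=100):
    """Estimate seconds spent purely on round-trip latency."""
    round_trips = -(-num_files // files_per_round_trip)  # ceiling division
    return round_trips * latency_ms / 1000.0

# 116,000 files at 1 ms LAN latency vs. 2 ms:
t1 = enumeration_time(116_000, 1.0)  # ~1.16 s of pure latency
t2 = enumeration_time(116_000, 2.0)  # ~2.32 s
```

Even in this optimistic linear model, over a second of every listing is pure wire latency; the real protocol overhead stacks on top of that.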

You've done the benchmarking by accident for your LAN, your network infrastructure's latency, and your server computer's I/O subsystem latency, and you've evidently found your "magic number". I'd pare that directory down until performance gets better. There's no other way!
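Since the asker says the layout is configurable, one common way to pare a flat directory down is to bucket files into nested subfolders by a hash prefix. A minimal Python sketch of that idea follows; `bucket_path` is a hypothetical helper, and the two-level/2-hex-character scheme (65,536 leaf folders) is just one reasonable choice.

```python
import hashlib
import os

def bucket_path(root, filename, levels=2, width=2):
    """Map a filename to a nested subdirectory based on a hash prefix,
    e.g. 'invoice123.pdf' -> root/ab/cd/invoice123.pdf.
    This caps the number of entries in any single directory."""
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    parts = [digest[i * width:(i + 1) * width] for i in range(levels)]
    return os.path.join(root, *parts, filename)

# Sketch of migrating an existing flat folder (left commented out):
# import shutil
# for name in os.listdir(flat_dir):
#     dest = bucket_path(new_root, name)
#     os.makedirs(os.path.dirname(dest), exist_ok=True)
#     shutil.move(os.path.join(flat_dir, name), dest)
```

Because the mapping is deterministic, the application can recompute the path from the filename alone, so no lookup index is needed.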

Evan Anderson
  • 141,881
  • 20
  • 196
  • 331
4

Evan's right: there is no magic number. It depends on the app and the server. Upgrading to Server 2008 will help, and is the first thing I'd do, as long as the clients are Vista or better, since they use SMBv2. I've got shares with 500,000 files that browse like crap but work fine, since the users only use the direct paths they're given. On the same server I've got shares with 100,000 files that users have no issues with.

Jim B
  • 24,081
  • 4
  • 36
  • 60
-1

Partially related:

Stop ‘last access update’

Whenever you access a folder on an NTFS drive, Windows XP updates that folder and all of its subfolders with a time stamp recording the date of last access. Sometimes this can slow Windows performance.

To change this, open REGEDIT ( Start -> Run -> 'regedit' ) and navigate to HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\FileSystem

Create a new DWORD value ( right click -> New -> DWORD Value ) called 'NtfsDisableLastAccessUpdate' and set the value to '1'

and

Disable the unnecessary 8.3 naming convention

For each file created, Windows XP generates one additional name for MS-DOS compatibility: an 8-character name followed by a ".", then 3 characters for the extension. If you don't intend to use DOS-only software, this is a waste of resources.

To change this, open REGEDIT
Navigate to HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\FileSystem

Change the NtfsDisable8dot3NameCreation value to '1'
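For reference, both tweaks above could be applied together as a single .reg file; this is just a sketch of the same two registry changes described in the steps, not an additional tweak. Note that disabling 8.3 name creation only affects files created afterward (existing short names are kept), and a reboot may be needed for the changes to take effect.

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"NtfsDisableLastAccessUpdate"=dword:00000001
"NtfsDisable8dot3NameCreation"=dword:00000001
```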

( taken from http://basiccoms.blogspot.com/2008/08/windows-xp-performance-tweaking-guide.html )

Gegtik
  • 188
  • 5
  • While these will affect NTFS performance, they will not affect SMB performance. In addition, I would not remove the last-access update on a server unless you have a really good reason to (or want to live with the inability to see when a file was last modified). However, these are fine on a workstation (with the same caveat about time, which most users don't need) – Jim B Jul 10 '09 at 18:00
  • 1
    Last accessed time isn't last modified time. Even so, you shouldn't really "trust" last modified time anyway. In NT unprivileged users can set any of the MAC times to whatever they want. – Evan Anderson Jul 11 '09 at 01:36