25

What's the maximum number of files a Unix folder can hold?

I assume the limit for folders (subdirectories) is the same as for files.

Peter Mortensen
  • A much better question might be: How many should I use? http://stackoverflow.com/questions/466521/how-many-files-in-a-directory-is-too-many – Joachim Sauer Jan 26 '09 at 01:24
  • I'd love for my site URLs to look like site.com/username/ and so on, but if (I'm lucky and) I get more than 2 million users, that'd be more than 2 million folders. Since I don't want to use a script such as PHP with mod_rewrite, I was looking at the other possibility: folders within a folder. –  Feb 05 '09 at 15:41
  • Do yourself a favor and create subdirectories with a rewriting scheme. – Peter Eisentraut Jan 21 '10 at 12:09

6 Answers

21

It varies per file system; see http://en.wikipedia.org/wiki/Comparison_of_file_systems

basszero
17

On all current Unix filesystems a directory can hold a practically unlimited number of files, where "unlimited" is bounded by disk space and inodes, whichever runs out first.

With older filesystem designs (ext2, UFS, HFS+), things tend to get slow if you have many files in a directory; usually things start getting painful around 10,000 files. With newer filesystems (ReiserFS, XFS, ZFS, UFS2) you can have millions of files in a directory without hitting general performance bottlenecks.
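If you want a feel for where your own filesystem starts to hurt, a rough benchmark sketch (the path and file count are just examples; use a throwaway scratch directory):

mkdir /tmp/manyfiles && cd /tmp/manyfiles
for i in $(seq 1 100000); do touch "file$i"; done   # create 100,000 empty files
time ls > /dev/null                                 # listing time grows with directory size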

But having so many files in a directory is not well tested, and there are lots of tools that fail on it. For example, periodic system maintenance scripts may barf on it.

I happily used a directory with several million files on UFS2 and saw no problems until I wanted to delete the directory. That took several DAYS.

Peter Mortensen
max
13

It depends on how many inodes the filesystem was created with. Executing

df -i 

will give you the number of free inodes. That is the practical limit on how many files a filesystem, and hence a directory, can hold.
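For illustration only (these numbers are made up), the output looks something like this; the IFree column is the one to watch:

$ df -i
Filesystem      Inodes  IUsed   IFree IUse% Mounted on
/dev/sda1      6553600 314562 6239038    5% /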

Peter Mortensen
  • However, many filesystems have a limit on files per directory, regardless of the number of inodes free. –  Jan 26 '09 at 07:01
  • Yes, but the question targeted UNIX filesystems, and as far as I am aware, no modern UNIX filesystem limits the number of files in a directory. –  Jan 26 '09 at 14:33
6

I assume you are thinking of storing a lot of files in one place, no?

Most modern Unix file systems can put a lot of files in one directory, but operations like following paths, listing files, etc. involve a linear search through the list of files, and they get slow if the list grows too large.

I seem to recall hearing that a couple of thousand is too many for most practical uses. The typical solution is to break the grouping up. That is,

/some/path/to/dir/a/
/some/path/to/dir/b/
...
/some/path/to/dir/z/

and store your files in the appropriate sub-directory according to a hash of their basename. Choose a convenient hash; the first character might do for simple cases (see the sketch below).
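A minimal sketch of that scheme in plain shell, using the first-character "hash" suggested above (the file name and base path are just examples):

f=somefile.dat
bucket=$(printf '%s' "$f" | cut -c1)    # "hash" = first character of the basename, here "s"
mkdir -p "/some/path/to/dir/$bucket"    # create the bucket directory if needed
mv "$f" "/some/path/to/dir/$bucket/$f"  # file ends up in /some/path/to/dir/s/somefile.dat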


Cristian Ciupitu writes in the comments that XFS, and possibly other very new file systems, use O(log N)-searchable structures to hold directory contents, so this constraint is greatly ameliorated.

Peter Mortensen
  • Some modern filesystems, e.g. XFS, don't involve a linear search. XFS's B-Tree technology enables it to go directly to the blocks and/or extents containing a file's location using sophisticated indices (from http://www.uoks.uj.edu.pl/resources/flugor/IRIX/xfs-whitepaper.html). – Cristian Ciupitu Jan 26 '09 at 02:30
  • Ah! I didn't know that. Thanks. Will add to the text. – dmckee --- ex-moderator kitten Jan 26 '09 at 02:37
  • For ext3, you have to activate the "dir_index" feature, cf. tune2fs(8). –  Jan 26 '09 at 11:27
0

ext3, one of the most common Linux filesystem formats, gets really sluggish if you have around 20k+ files in a directory. Regardless of how many it can hold, you should try to avoid having that many files in one directory.
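One mitigation, mentioned in a comment above: ext3 can index directories with hashed b-trees via the dir_index feature. A sketch of enabling it, assuming an unmounted filesystem and an example device name:

tune2fs -O dir_index /dev/sdXN   # turn on hashed directory indexes
e2fsck -fD /dev/sdXN             # -D optimizes (re-indexes) existing directories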

Amandasaurus
0

From the comment you left, I think you don't really care about how many files/folders your FS can host.

You should probably consider using mod_rewrite to rewrite site.com/username to site.com/?user= or something of the kind, and store all your data in a database. Creating one folder per user is generally not necessary (and not a good idea).
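A minimal sketch of such a rewrite, assuming Apache with mod_rewrite enabled; the URL pattern and index.php target are examples, not your actual setup:

# .htaccess
RewriteEngine On
RewriteRule ^([A-Za-z0-9_-]+)/?$ index.php?user=$1 [L,QSA]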

That said, each filesystem has its limits, and df -i can tell you how many inodes are available on each partition of your system.

raphink