11

I have millions of audio files, named with GUIDs (http://en.wikipedia.org/wiki/Globally_Unique_Identifier). How can I store these files in the file system so that I can efficiently add more files and efficiently search for a particular file? The scheme should also remain scalable in the future.

Files are named based on GUIDs, so every file name is unique.

E.g.:

[1] 63f4c070-0ab2-102d-adcb-0015f22e2e5c

[2] ba7cd610-f268-102c-b5ac-0013d4a7a2d6

[3] d03cf036-0ab2-102d-adcb-0015f22e2e5c

[4] d3655a36-0ab3-102d-adcb-0015f22e2e5c

Please share your views.

PS: I have already gone through < Storing a large number of images >. I need the particular data structure/algorithm/logic that will also remain scalable in the future.

EDIT1: There are around 1-2 million files, and the file system is ext3 (CentOS).

Thanks,

Naveen


4 Answers

19

That's very easy - build a folder tree based on parts of the GUID value.

For example, make 256 folders, each named after a possible first byte, and store in each of them only the files whose GUID starts with that byte. If that's still too many files per folder, do the same inside each folder for the second byte of the GUID. Add more levels if needed. Searching for a file will then be very fast.

By selecting the number of bytes you use at each level, you can effectively tailor the tree structure to your scenario.
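A minimal sketch of this layout in Python (the function names, the `/data/audio` root, and the two-level, one-byte-per-level split are illustrative assumptions, not part of the answer):

```python
import os

def guid_path(root, guid, levels=2, chars_per_level=2):
    # Map a GUID to its nested folder path, one byte (two hex chars)
    # per level: 63f4c070-... -> root/63/f4/63f4c070-...
    hexchars = guid.replace("-", "").lower()
    parts = [hexchars[i * chars_per_level:(i + 1) * chars_per_level]
             for i in range(levels)]
    return os.path.join(root, *parts, guid)

def store_file(root, guid, data):
    # Create the intermediate folders on demand, then write the file.
    path = guid_path(root, guid)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(data)

def find_file(root, guid):
    # Lookup is a single path probe - no directory scanning needed.
    path = guid_path(root, guid)
    return path if os.path.exists(path) else None
```

With two one-byte levels there are 256*256 = 65,536 leaf folders, so 1-2 million files work out to roughly 15-30 files per folder.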

sharptooth
  • If performance is critical, it'd be a good idea to benchmark different numbers of files in each directory. – Mark Bessey Oct 16 '09 at 06:09
  • If you have a two-level, 256-ary directory structure (such that file 1 is stored in `63/63f4/63f4c070-...`), then with 2 million files you'll get about 30 in each leaf directory - which should perform quite well and scale moderately well. – caf Oct 16 '09 at 09:31
  • @Sharptooth: Can you please explain with an example, so that I get a much clearer picture? – Naveen Oct 16 '09 at 10:45
  • 1
    @Naveen: Let's assume you use two levels, one byte for each. For any GUID you get, you create a folder on the top level and another one inside the first folder. So for 7A09BF85-9E98-44ea-9AB5-A13953E88C3D you create the 7A and 7A/09 folders and put the file into the 7A/09 folder. To search for 7A09BF85-9E98-44ea-9AB5-A13953E88C3D, you check whether the file 7A/09/7A09BF85-9E98-44ea-9AB5-A13953E88C3D exists. – sharptooth Oct 16 '09 at 12:29
1

I would try to keep the number of files in each directory to some manageable number. The easiest way to do this is to name the subdirectory after the first 2-3 characters of the GUID.
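For instance (a hedged Python one-liner; the 2-character prefix and the `/data/audio` root are assumptions for illustration):

```python
import os

guid = "63f4c070-0ab2-102d-adcb-0015f22e2e5c"
# yields /data/audio/63/63f4c070-0ab2-102d-adcb-0015f22e2e5c
path = os.path.join("/data/audio", guid[:2], guid)
```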

cletus
1

Construct an n-level-deep folder hierarchy to store your files. The names of the nested folders are the first n characters of the corresponding file name. For example, to store the file "63f4c070-0ab2-102d-adcb-0015f22e2e5c" in a four-level-deep hierarchy, construct the path 6/3/f/4 and place the file there. The right depth depends on the maximum number of files your system may hold; for the few million files in my project, a four-level-deep hierarchy works well.

I did the same thing in my project, which has nearly 1 million files. My requirement was also to process the files by traversing this huge list; after I built a four-level-deep hierarchy, the processing time dropped from nearly 10 minutes to a few seconds.

A further optimization: if you want to process all the files in such a hierarchy, then instead of calling a directory-listing function for the first four levels, just precompute all the possible four-level folder paths. Since each GUID position holds one of 16 hex characters, there are 16 folders at each of the first four levels, and the 16*16*16*16 = 65,536 paths can be precomputed in just a few milliseconds. This saves a lot of time when the files sit on a shared location where a single directory-listing call takes nearly a second (see the sketch below).
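A sketch of that precomputation in Python (the function name, the `/data/audio` root, and lowercase hex digits are assumptions for illustration):

```python
import itertools
import os

HEX_CHARS = "0123456789abcdef"

def precompute_leaf_dirs(root, depth=4):
    # Enumerate every possible depth-level folder path up front
    # (16**4 = 65,536 paths for depth 4) instead of issuing a
    # directory-listing call per level against the shared location.
    return [os.path.join(root, *combo)
            for combo in itertools.product(HEX_CHARS, repeat=depth)]

leaves = precompute_leaf_dirs("/data/audio")  # runs in milliseconds
for leaf in leaves:
    if os.path.isdir(leaf):
        for name in os.listdir(leaf):
            pass  # process each file in this leaf directory
```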

prakhar3agrwal
0

Splitting the audio files into separate subdirectories may actually be slower if dir_index is enabled on the ext3 volume (dir_index: "Use hashed b-trees to speed up lookups in large directories.").

This command will set the dir_index feature: `tune2fs -O dir_index /dev/sda1`
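To verify whether the feature is already active, `tune2fs -l /dev/sda1` lists the enabled filesystem features (assuming `/dev/sda1` is the volume in question, as in the command above). Note that directories created before the flag was set only get hashed indexes after an `e2fsck -D` pass on the unmounted volume.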

sambowry