
Our web application stores a few million files for long-term archiving. We're looking into setting up multiple web servers for redundancy and load balancing, so we need a way to store files such that multiple web servers can read and write them.

Two servers would never write to the same file at the same time. In fact, we write once and never modify files. We also rarely read files, as it's mostly an archive system.

I was thinking of using an NFS/SMB share, but someone at my company mentioned that NFS/SMB has severe performance problems when the share contains millions of files.

All I've been able to find are performance problems related to super large directories, but since our directories are segregated by year/month/day/hour (2017/04/24/18), our directories don't get that big (and we could easily segregate files further if necessary).
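For reference, the bucketing scheme is roughly like the following (a minimal Python sketch; `archive_path` and `store` are hypothetical names for illustration, not our actual code):

```python
import os
from datetime import datetime, timezone

def archive_path(root, filename, when=None):
    """Build a year/month/day/hour bucket path like root/2017/04/24/18/<filename>."""
    when = when or datetime.now(timezone.utc)
    bucket = when.strftime("%Y/%m/%d/%H")
    return os.path.join(root, bucket, filename)

def store(root, filename, data, when=None):
    """Write-once semantics: refuse to overwrite an existing file."""
    path = archive_path(root, filename, when)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    # 'x' mode raises FileExistsError instead of silently overwriting
    with open(path, "xb") as f:
        f.write(data)
    return path
```

Each hour's directory only holds that hour's uploads, so no single directory grows large even with tens of millions of files overall.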

Are there any known issues with SMB or NFS shares that host tens of millions of files? Is there a better option?

Brandon
