
I need a network file system that can be accessed from multiple machines at the same time and that can still handle around 100,000 subdirectories inside a single directory.

In case someone is wondering why we have these requirements: the server (JIRA) stores the attachments for each issue in a subdirectory named after the issue number. If a project has 100,000 issues or more, you easily end up having to deal with this many directories.

To deal with this, some time ago we switched from the NetApp filesystem to XFS, because XFS supports this number of files/directories.

Still, we have another problem: XFS does not allow concurrent access from different machines, not even for read operations. We want a solution that works more like NFS, where several machines can access these files.

The amount of disk I/O is quite low, mostly reads, and the files are almost never updated.

What can we use for this?

sorin

1 Answer


If XFS works for you, you can share it via NFS.
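A minimal sketch of what that could look like, assuming the XFS volume is mounted at /srv/jira-attachments on the file server, the JIRA host is called jira-app01, and the other clients sit in 192.168.0.0/24 (all of these are placeholders, adjust to your environment):

    # /etc/exports on the server holding the XFS volume
    # rw only for the JIRA application server, ro for everyone else,
    # since the workload is almost entirely reads
    /srv/jira-attachments  jira-app01(rw,sync,no_subtree_check)
    /srv/jira-attachments  192.168.0.0/24(ro,sync,no_subtree_check)

    # apply the export table on the server
    exportfs -ra

    # on a client, mount the share (read-only clients can add -o ro)
    mount -t nfs fileserver:/srv/jira-attachments /mnt/jira-attachments

The directory itself still lives on XFS, so the ~100k-subdirectory requirement is handled by the local filesystem, while NFS only provides the concurrent access from several machines.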

If you prefer to use a NetApp, you should use a system that supports ONTAP 8.1. With this release, the WAFL restriction of ~100k subdirectories per directory has been lifted (see KB ID 3012261 for details).

If 8.1 is not an option, you might want to check whether you can raise maxdirsize, as discussed on the NetApp Forums.
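If I remember the 7-Mode CLI correctly, maxdirsize is a per-volume option set roughly like this (volume name and value are placeholders, and the value is in KB; verify the exact semantics and the memory impact in the forum thread or your ONTAP documentation before touching a production filer):

    # show the current options, including maxdirsize, for the volume
    vol options jira_vol

    # raise maxdirsize (value in KB) - placeholder value, larger
    # directories consume more filer memory when loaded
    vol options jira_vol maxdirsize 40960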

If your goal is SAN access, you need to use some kind of cluster filesystem like OCFS2, ACFS, GFS, or similar, but that opens a new can of worms.

jmk