
I have a ZFS filesystem (ZFS version 0.8.5) on Linux (kernel 3.10.0) for which I'd like to restrict the total path length of nested directories. I'm not sure if there's a way to do that, though.

My backup software appears to have a limit just short of 4096 characters. To work around that, I'd like to have something in place so that any attempt to create a directory (or file) that would push a path past 4000 characters fails. (If this is a per-filesystem setting, the limit would have to be lower still, since I'd have to account for the length of the filesystem mountpoint's path.)

Is there a way to do that, either with the Linux kernel, ZFS module parameters, or ZFS filesystem properties? (Or some other avenue?)

Note that Linux's `PATH_MAX` value is not a solution here. `PATH_MAX` on my system is 4096, but I can easily create directories whose full paths exceed that limit. For example:

    mkdir -p $(python -c 'print("/".join(["n" * 255] * 512))')

That will, without error, create a directory whose path, relative to the current directory, is 131071 characters long.

asciiphil
  • Are you sure that your software has an arbitrary hard-coded limit, versus simply using `PATH_MAX`, `getconf PATH_MAX /[...]`, or something related internally? In the latter case, increasing one of those limits might indeed help. Your software might not have accounted for the possibility that ZFS ignores those limits. – Thorsten Schöning May 03 '21 at 15:03
  • I suspect it's a hardcoded limit. I'm basing that on the fact that the software segfaults when trying to work with directories with path lengths very close to 4096 characters. (I'm not totally sure how I'd test one case versus the other, though; I'm not finding a lot of info about forcing a different `PATH_MAX` for a filesystem unless I were writing my own driver.) I'm opening a bug report with the vendor, but I want to have a workaround available in case they won't fix their code in a timely manner. – asciiphil May 03 '21 at 17:55
  • I can likewise only think of rather awkward workarounds, like monitoring all your directories with tools like `inotify` and reacting immediately to paths that are too long (https://wiki.ubuntuusers.de/inotify/), or checking whether your backup software has command-line support and calling it only with known-good directories, or something along those lines. In the end it's most likely easier to replace the software if it simply doesn't work properly. :-) – Thorsten Schöning May 07 '21 at 17:13
  • @ThorstenSchöning [`PATH_MAX` isn't supposed to be defined on most current POSIX systems](https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/limits.h.html) (bolding mine): "A definition of one of the symbolic constants in the following list **shall be omitted** from the `<limits.h>` header on specific implementations where the corresponding value is equal to or greater than the stated minimum, but where the value can vary depending on the file to which it is applied. ... `PATH_MAX`" [That applies to Linux...](https://en.wikipedia.org/wiki/Comparison_of_file_systems#Limits) – Andrew Henle May 07 '21 at 20:41
  • I ended up writing a program to check the lengths of all of the paths in the filesystem. It doesn't answer my original question here, because it doesn't prevent someone from creating a too-long path, but it does let me detect and respond to problems. (Also the software bug appears to be related to the number of directories in the path, not the total path length, but that's orthogonal to what I was asking for here.) – asciiphil Sep 30 '21 at 17:43
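
For anyone who lands here with the same problem, below is a minimal sketch of the kind of after-the-fact audit described in the last comment above. This isn't the exact program I ended up writing, just an illustration of the approach; the 4000-character threshold and the mountpoint used in the example invocation are assumptions. It uses `os.fwalk()`, which descends the tree by directory file descriptor, so the scan itself keeps working even where the accumulated path is far longer than `PATH_MAX`.

    #!/usr/bin/env python3
    # Sketch of a path-length audit: report every entry whose full path exceeds
    # LIMIT characters. os.fwalk() opens each directory relative to its parent's
    # file descriptor, so the traversal works even where the accumulated path is
    # much longer than PATH_MAX.
    import os
    import sys

    LIMIT = 4000  # assumed threshold, a bit below the backup software's ~4096 limit

    root = os.path.abspath(sys.argv[1] if len(sys.argv) > 1 else ".")
    # os.fwalk() recurses once per directory level and holds one descriptor per
    # level, so very deep trees may need a higher recursion limit and `ulimit -n`.
    sys.setrecursionlimit(20000)

    for dirpath, dirnames, filenames, dirfd in os.fwalk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            if len(path) > LIMIT:
                print(f"{len(path)}\t{path}")

Invoked as, say, `python3 check-path-lengths.py /tank/data` (hypothetical script name and mountpoint), it prints the length and full path of each offending entry, which is enough to find and fix problems before the next backup run.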

0 Answers