
I have a pipeline on my server that creates a lot of hard links to big files. Today I noticed it fails with this error:

ln: failed to create hard link to '/path/to/a/file': Too many links

The filesystem is ext4 and dir_nlink is enabled:

> sudo dumpe2fs -h /dev/sdb1 | grep "Filesystem features"
dumpe2fs 1.44.1 (24-Mar-2018)
Filesystem features:      has_journal ext_attr dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum

What is the problem and how can I solve it? Any help would be greatly appreciated.
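For anyone diagnosing the same error: the number of hard links an inode already has can be checked with `stat` (GNU coreutils, `%h` format) or the second column of `ls -l`. A minimal sketch on a scratch file (the paths here are made up for illustration, not from the pipeline above):

```shell
# Create a scratch file and two hard links to it, then read the inode's
# link count. Each hard link shares the same inode, so the count covers
# the original name plus every link.
tmpdir=$(mktemp -d)
touch "$tmpdir/big_file"
ln "$tmpdir/big_file" "$tmpdir/link1"
ln "$tmpdir/big_file" "$tmpdir/link2"
stat -c '%h' "$tmpdir/big_file"   # prints 3: the file itself + 2 links
rm -r "$tmpdir"
```

If this count on your real file is close to 65000, you have found the cause of the `EMLINK` ("Too many links") failure.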

asalimih
  • Every file system has restrictions, and one of them is a limit on the number of links. But since I don't see which file system is used here, I can't tell you what the limit is. – djdomi Jul 05 '22 at 20:27
  • @djdomi As I mentioned, the filesystem is ext4. Is there a command I can use to provide the information you need? – asalimih Jul 05 '22 at 21:18
  • I did not see that ext4 was stated; however, you may take a look [here on unix stack](https://unix.stackexchange.com/questions/5629/is-there-a-limit-of-hardlinks-for-one-file) — it basically answers this, imho. – djdomi Jul 06 '22 at 21:01

1 Answer


On ext4, a regular file can have at most 65000 hard links (`EXT4_LINK_MAX` in the kernel; the on-disk link count is a 16-bit field). Once a file reaches that count, any further `ln` fails with `EMLINK` ("Too many links"). Note that the `dir_nlink` feature you checked only lifts the link-count limit for *directories* (subdirectory counting); it does not raise the hard-link limit for regular files.
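Since the 65000-per-inode cap cannot be raised on ext4, one workaround is to start a fresh physical copy once a source file's link budget is exhausted and hard-link against that copy instead. A hedged sketch (the `make_link` function name and the `.overflow` suffix are my own inventions, not anything from the question):

```shell
# make_link SRC DST: hard-link DST to SRC, falling back to a new copy
# of SRC when SRC has reached the ext4 per-inode limit of 65000 links.
make_link() {
    src=$1 dst=$2
    if [ "$(stat -c '%h' "$src")" -ge 65000 ]; then
        # New inode with its own 65000-link budget; -p preserves
        # mode/ownership/timestamps so the copy behaves like the original.
        cp -p "$src" "${src}.overflow"
        src=${src}.overflow
    fi
    ln "$src" "$dst"
}
```

This trades some disk space for each overflow copy; whether that is acceptable depends on how big the files are and how far past 65000 links the pipeline needs to go.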

LUCACAN