How can I raise the limit so that I can upload around 1,000,000 book cover images to a server running a website?

Our server has 100 GB of disk space and the images total under 10 GB, so why does the server stop extracting the files halfway through for lack of space?

aseq

2 Answers

You're running out of inodes. You can verify this by running $ df -i.
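
For illustration, df -i on an exhausted filesystem might look something like this (the device name and counts here are made up):

    $ df -i
    Filesystem      Inodes   IUsed IFree IUse% Mounted on
    /dev/sda1      6553600 6553600     0  100% /

Note the Inodes columns: disk blocks can be plentiful while every inode is consumed, which is exactly the "out of space" symptom you're seeing.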

As far as I know, there's no way to increase the number of inodes on an already-existing filesystem. You can, however, specify a higher number of inodes at filesystem creation time using the -N flag for mkfs.ext4.
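
A minimal sketch, assuming the images will live on a fresh, dedicated partition (the device name and inode count below are placeholders, and mkfs destroys any existing data on the target):

    # Create an ext4 filesystem with room for ~5 million inodes
    sudo mkfs.ext4 -N 5000000 /dev/sdb1

Alternatively, the -i option to mkfs.ext4 sets a bytes-per-inode ratio instead of an absolute count, which scales the inode table with the partition size.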

EEAA

As EEAA said, this is definitely an inode problem, and it's something you need to fix at the filesystem level. If you can't recreate the whole filesystem, for example because all your data lives in one partition mounted at / [note: this is usually bad!], you can shrink the root filesystem, create a second filesystem with an appropriate inode count in the freed space, and mount it somewhere sensible (e.g. /var/www).
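
Very roughly, that procedure could look like the following. This is a hedged sketch only: the device names and sizes are placeholders, the root filesystem must be shrunk while unmounted (from a live/rescue environment), and you should have backups before touching the partition table.

    # From a live/rescue environment, with /dev/sda1 unmounted:
    e2fsck -f /dev/sda1          # filesystem must be checked before resizing
    resize2fs /dev/sda1 40G      # shrink the root filesystem to 40 GB
    # ...then shrink the sda1 partition itself and create sda2 in the
    # freed space using parted or fdisk, and format it with extra inodes:
    mkfs.ext4 -N 5000000 /dev/sda2
    mount /dev/sda2 /var/www     # and add a matching /etc/fstab entry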

An ideal server will have multiple partitions for the main filesystems. I recommend reading https://www.debian.org/releases/stable/amd64/apcs03.html.en as a good reference, if a little outdated: I prefer separate ext4 partitions for /, /var, and /var/log myself, with a tmpfs /tmp and additional data partitions as needed for different systems (e.g. /var/vmail on a mail server, /var/www on a web server, /home on desktops, etc.). Remember there are a lot of system files in / that contribute to your inode count!
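
As an illustration of that layout, an /etc/fstab might contain entries along these lines (the devices, sizes, and options are examples, not a recommendation for your hardware):

    # <filesystem>  <mount point>  <type>  <options>         <dump> <pass>
    /dev/sda1       /              ext4    defaults          0      1
    /dev/sda2       /var           ext4    defaults          0      2
    /dev/sda3       /var/log       ext4    defaults          0      2
    tmpfs           /tmp           tmpfs   defaults,size=2G  0      0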

Also ensure that those pictures have some sort of folder organization structure; a sketch of one approach follows below. Anything more than a few thousand files in a single directory tends to make reads, ls, and similar operations choke on extX filesystems. You're best to organize them as much as you can before attempting the upload.
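
One common approach (just a sketch; the file names, target path, and bucket depth are arbitrary) is to shard the files into subdirectories keyed on a hash of each name, which caps every directory at a manageable size:

    # Distribute covers into up to 256 buckets named 00..ff, using the
    # first two hex characters of the MD5 hash of each file's name.
    for f in *.jpg; do
        h=$(printf '%s' "$f" | md5sum | cut -c1-2)
        mkdir -p "covers/$h"
        mv -- "$f" "covers/$h/"
    done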

Joshua Boniface