
I'm trying to find the largest files on my 25GB Linux server, which has been steadily running out of space and is now 99.5% full. I assumed it was log files, since I wasn't doing anything with the sites and the database sizes are small and static.

Log files were about 100MB in total, nothing major.

I've tried the command found here (https://www.cyberciti.biz/faq/linux-find-largest-file-in-directory-recursively-using-find-du/) to recursively find the biggest files, but it's not giving me anything useful:

root@127:~# du -a / | sort -n -r | head -n 20
du: cannot access '/proc/12377/task/12377/fd/4': No such file or directory
du: cannot access '/proc/12377/task/12377/fdinfo/4': No such file or directory
du: cannot access '/proc/12377/fd/3': No such file or directory
du: cannot access '/proc/12377/fdinfo/3': No such file or directory
sort: write failed: /tmp/sortnI7YzR: No space left on device

I'm a Linux novice so would appreciate any help.

Jon

2 Answers


Try `du -a * | sort -n -r | head -n 20` if we're going by your own method of sorting files.

There are also ways to make the size output more readable. You could run `du -sh * | sort -hr | head -n 20` as well, which prints human-readable sizes that `sort -h` understands.

Just to add to this: if you are running out of space, the df command is really useful. Check out the man page for it and try `df -h` to see the available space on each mounted filesystem. Note that df reports per-filesystem usage, not per-file usage, so use it to find which filesystem is full, then use du to find what is filling it.
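The du-then-narrow-down workflow above can be sketched on a throwaway directory (the directory and file names here are made up for illustration; on the real server you would start at / and add -x so du stays on one filesystem, skipping /proc, /dev and other mounts):

```shell
#!/bin/sh
# Sketch: find which directory holds the most data, one level at a time,
# instead of listing every individual file at once.
tmp=$(mktemp -d)
mkdir -p "$tmp/sites" "$tmp/logs"
dd if=/dev/zero of="$tmp/sites/dump.sql" bs=1M count=8 2>/dev/null
dd if=/dev/zero of="$tmp/logs/access.log" bs=1M count=1 2>/dev/null

# Largest entries first; -h prints human-readable sizes and
# sort -h knows how to order those suffixes.
du -xh --max-depth=1 "$tmp" | sort -hr | head -n 10

# Then descend into the biggest subdirectory and repeat.
rm -rf "$tmp"
```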

binjamin

You need not search in /proc and /dev, as they are 'virtual' filesystems; there is nothing useful to look for there (just a huge waste of time).

As you seem to be looking for regular files, I would suggest using find:

find / \( -path /proc -o -path /dev \) -prune -o -type f -size +100M -exec ls -s1 {} \; 2>/dev/null | sort -n -r | head -n 20

Note the option `-size +100M`, which tells find to report only files larger than 100MB, on the assumption that you are looking for big files. You may remove this option, but the search will take much longer.
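The effect of `-size` can be seen on a small throwaway directory (file names are illustrative, and the threshold is lowered to 1M so the demo is quick):

```shell
#!/bin/sh
# Sketch: only files strictly larger than the -size threshold are reported.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/big.bin"   bs=1M count=2 2>/dev/null
dd if=/dev/zero of="$tmp/small.bin" bs=1k count=4 2>/dev/null

# Only big.bin exceeds 1M, so only it should appear;
# ls -s1 prints its allocated size in blocks, which sort -n -r orders.
find "$tmp" -type f -size +1M -exec ls -s1 {} \; | sort -n -r

rm -rf "$tmp"
```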

OznOg
  • Thanks - I changed it to 10M but it's only pulling back files totalling about 3.6GB: https://pastebin.com/raw/x26iRN9u – Jon Nov 04 '19 at 14:03
  • I don't understand, is that ok? – OznOg Nov 04 '19 at 18:25
  • Well, the disk is 25GB, so I was expecting to see larger files. Unless there's a long tail of under-10MB files that makes up most of it? – Jon Nov 04 '19 at 19:13
  • If you want the full list, you may remove the last part `| head -20`, which only keeps the first 20 files of the list; but yes, it can just be plenty of "not that big" files – OznOg Nov 04 '19 at 20:00
  • I've just done this for files of 1KB or more (so basically everything). It gives me a total of 3.8GB of data... something's not right – Jon Nov 05 '19 at 13:57
  • Update: I've fixed the issue of 100% disk space with the help of @OznOg. Simply, the disk was only 4.8GB full but it seems some process or another was claiming the space to be used. A simple reboot resolved the issue. Details here: https://www.digitalocean.com/community/questions/cannot-find-what-is-filling-up-disk-space-dev-vda1-is-100-full – Jon Nov 05 '19 at 15:02
  • Careful: space on disk is only freed when the last file descriptor on a file is closed. So if you deleted huge files, but the processes that created them still have open fds on them, the space is not returned to free space – OznOg Nov 05 '19 at 16:56
  • Thanks. I hadn't deleted anything. Disk space is slowly climbing again, so I need to get to the cause of that, but for now I've reclaimed almost all of the space – Jon Nov 06 '19 at 18:46
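The deleted-but-still-open situation OznOg describes in the comments can be reproduced with a small sketch on Linux (the file name is made up; on a real server, `lsof +L1`, if lsof is installed, lists such files):

```shell
#!/bin/sh
# Sketch: disk space held by a deleted file that a process still has open.
# du and ls no longer see the file, but its blocks stay allocated until
# the last file descriptor on it is closed.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/ghost.log" bs=1M count=4 2>/dev/null

exec 3< "$tmp/ghost.log"   # keep a descriptor open on the file
rm "$tmp/ghost.log"        # directory entry gone; blocks still allocated

# The kernel marks the open-but-unlinked file as "(deleted)".
ls -l /proc/$$/fd/3

exec 3<&-                  # closing the fd is what frees the space
rm -rf "$tmp"
```

This is why a reboot (or restarting the offending process) released the space: it closed the last descriptors on files that had already been unlinked.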