0

I only have one database, which is around 40 GB. I did have some problems with replication earlier... could it be some sort of log files that are eating it up?

How do I check what is taking up the space?

I'm using the latest version of Ubuntu, with no GUI; everything is command line. The only thing that is running is MySQL.

Alex

8 Answers

4

This tends to work better (start in / if you're root):

du -sm * | sort -nr

That way you get your top offenders at the top of the list. Then you can drill down on the obvious offenders first and find the true source of your full disk issues.

In general, it's really handy to set up cron scripts that run this to audit things like home and share directory trees. Run it in /home and you can easily identify the logins of the file hogs...
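For example, a minimal sketch of such a cron job (the schedule, report path, and file name here are just placeholders) dropped into /etc/cron.d/ could look like:

# /etc/cron.d/du-audit -- nightly disk-usage report of /home (example path)
0 6 * * * root du -sm /home/* 2>/dev/null | sort -nr > /var/log/du-audit-home.txt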

Things like running find are ok, but what happens if you have an app that is rotating (but not deleting) logs on a daily basis, no one knew about it, and 3 years have gone by? Small files add up too...

Corey S.
  • Doing a 'df -h' first to see which filesystem is the fullest can narrow it down some more, and then running the above command to find the culprits. – Knut Haugen Dec 16 '09 at 21:29
1

The du (disk usage) command is your friend. Try something like:

du -h --max-depth=1 /home

...where -h means "human readable" and --max-depth controls how many levels of subdirectories deep you'd like to go.
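If you want the biggest directories at the top, a variant of the same idea (this one reports in megabytes so the output sorts numerically; starting from / is just an example) is:

du -m --max-depth=1 / 2>/dev/null | sort -nr | head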

Nexus
1

Yes, your binlogs take up a lot of space - especially right after setting up replication.

You can configure how long these are kept around in /etc/mysql/my.cnf, using expire_logs_days.

Note that if there is a problem with your binlog index file, this setting will appear not to work. I believe you can resolve this by manually making sure the contents of the index file match all of the existing binlog file names.
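For reference, a minimal my.cnf sketch (the 7-day retention and the log path are just example values):

[mysqld]
log_bin          = /var/log/mysql/mysql-bin.log
expire_logs_days = 7

Old binlogs outside the retention window are purged the next time the binary log rotates (or when you run FLUSH LOGS).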

Brent
  • How do I know where my bin-logs are located? – Alex Dec 16 '09 at 20:45
  • Look in your /etc/mysql/my.cnf file for a log_bin variable. That will tell you where they are being stored. I think for Ubuntu they are under /var/log/mysql by default. – Brent Dec 16 '09 at 22:22
  • also, you could run "du -cks *|sort -rn|head" in your /var/log subdirectory to determine exactly how much space your logfiles are actually taking up - and confirm whether or not this is your problem. – Brent Dec 16 '09 at 22:24
0

As Nexus suggested, du is your friend. I personally use du -hs *, which lists the total size of each file and directory (including all subdirectories) in the current directory. Then just rinse and repeat in your largest directories to drill down to where that disk space is being used.

Alternately, find works too. find . -size +1G will show you all the individual files that are larger than 1GB.
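To also see how big each of those files is, a small extension of that idea (the -xdev flag keeps find on the current filesystem, and du -m reports sizes in megabytes) is:

find . -xdev -type f -size +1G -exec du -m {} + | sort -nr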

Christopher Karel
0

I'd run a report against everything on the entire partition:

find /foo -mount -type f -print0 | xargs -0 du -sk | sort -rn | less

This will give you a sorted list of all files, starting with the largest at the top, in KB and without crossing onto other mounted drives. If you only have a single, huge / partition, then replace "/foo" with "/". More often than not, you have a small number of large files that are eating up space, such as log files, core files, or crash dumps.

It will really pound the server, so either nice it and/or run it when the machine can handle the extra load.
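If you can't wait for a quiet window, one way to throttle it (just a sketch; ionice's idle class only has an effect with the CFQ I/O scheduler, and the output goes to a file instead of less so it can run unattended) is:

nice -n 19 ionice -c 3 sh -c 'find / -mount -type f -print0 | xargs -0 du -sk | sort -rn' > /tmp/du-report.txt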

Geoff Fritz
0

My PC eats up its hard disk live... Once I remove files, the hard disk keeps being filled up. I can't update, can't sudo, can't do anything except remove files, and afterwards the space is filled up again... This seems to be the first Ubuntu virus.

0

ncdu (NCurses Disk Usage) is a nice interactive alternative to plain command-line du.
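If it's not installed already, it's packaged for Ubuntu; a typical invocation (the -x flag keeps it from crossing filesystem boundaries) looks like:

sudo apt-get install ncdu
ncdu -x /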

Anonymous
0

It looks kind of convoluted, but I've used it to track down disk utilization problems on our Oracle boxes. It takes two size snapshots five seconds apart, diffs them to find the files whose sizes changed, and then shows which processes have those files open:

tree -s -f > /tmp/out1 && sleep 5 && tree -s -f > /tmp/out2; diff /tmp/out1 /tmp/out2 | egrep "\|--" | awk -F[ '{print $2}' | awk -F] '{print $2 }' | sort | uniq | xargs fuser -f | xargs ps -lFp

Greeblesnort