
I have a server whose / partition is 20 GB in size.

Databases are stored on the /mnt/mysql-data partition, which is 500 GB in size.

Now here's the problem: whenever I run mysqldump, it fills the / partition to 100%. I have already moved tmpdir to /mnt/mysql-data/tmp. My databases are around 40 GB all in all. I want to back them up to /mnt/mysql-data/backups, but I can't proceed because the / partition fills up to 100%. My mysqldump command is: mysqldump --all-databases > /mnt/s3share/backup.sql

Server Details:

  • 10.2.22-MariaDB-log MariaDB Server

  • CentOS Linux release 7.7.1908 (Core)

Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.8G     0  7.8G   0% /dev
tmpfs           7.8G     0  7.8G   0% /dev/shm
tmpfs           7.8G  217M  7.6G   3% /run
tmpfs           7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/xvda2       24G  2.4G   20G  11% /
/dev/xvda1      976M  168M  757M  19% /boot
tmpfs           1.6G     0  1.6G   0% /run/user/1000
/dev/xvdc1      500G  123G  378G  25% /mnt/mysql-data
tmpfs           1.6G     0  1.6G   0% /run/user/1001
MariaDB [db_inbox]> show global variables like "%tmp%";
+----------------------------+----------------------+  
| Variable_name              | Value                |  
+----------------------------+----------------------+  
| default_tmp_storage_engine |                      |  
| encrypt_tmp_disk_tables    | OFF                  |  
| encrypt_tmp_files          | OFF                  |  
| innodb_tmpdir              |                      |  
| max_tmp_tables             | 32                   |  
| slave_load_tmpdir          | /mnt/mysql-data/tmp  |  
| tmp_disk_table_size        | 18446744073709551615 |  
| tmp_memory_table_size      | 16777216             |  
| tmp_table_size             | 16777216             |  
| tmpdir                     | /mnt/mysql-data/tmp  |  
+----------------------------+----------------------+  
10 rows in set (0.00 sec)                              

Update #1:

I forgot to mention that the *.sql backups are being written to the /mnt/s3share/backups/ folder, which is mounted via s3fs; its cache must be writing to /tmp, and that may be why / fills up while the SQL dump is being created. However, when I run the backup and watch /tmp for changes, it doesn't show any growth. But when I run lsof against /tmp, I can see huge files that have been deleted but are still held open. Could this be it?
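That deleted-but-still-open pattern would explain why du shows nothing: an unlinked file keeps its space until the last file descriptor on it closes. A minimal sketch reproducing it (the scratch path is arbitrary; /proc is Linux-specific):

```shell
# A file unlinked while still open keeps consuming disk space, but du never
# sees it -- matching the behaviour observed on /tmp above.
exec 3> /tmp/ghost.$$      # open fd 3 on a scratch file
rm -f /tmp/ghost.$$        # unlink it: du -h /tmp no longer counts it
readlink /proc/$$/fd/3     # Linux shows the target as "... (deleted)"

# lsof can list such zero-link open files system-wide (+L1 = link count < 1):
command -v lsof >/dev/null && lsof +L1 | grep /tmp || true

exec 3>&-                  # closing the last fd finally frees the space
```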


3 Answers


Look at your innodb_% variables and datadir. There could be something like a tablespace still sitting somewhere else.

Failing that, find the directory where the root bloat turns up by running du while mysqldump is running.
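A sketch of that second step (the depth and the number of lines shown are arbitrary):

```shell
# Snapshot the biggest directories on the root filesystem. -x keeps du on
# the / filesystem, so /mnt/mysql-data is skipped; errors from unreadable
# directories are discarded. Re-run (or wrap in watch) while mysqldump runs
# and see which entry keeps growing:
du -x --max-depth=2 -h / 2>/dev/null | sort -rh | head -20
```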

Gordan Bobić

OK, I was able to solve this problem by moving the s3fs cache dir from /tmp to /mnt/mysql-data/tmp.

Little did I know that fuse.s3fs was writing to /tmp, and since it had already unlinked its cache files (as the lsof output showed), there was no way to see which file was growing with du -h /tmp.

The command I was running was mysqldump --all-databases > /mnt/s3share/backup.sql, where s3share is mounted via fuse.s3fs with a cache directory targeting /tmp. This is why I thought mysqldump itself was causing all the growing storage use on root /.

After changing the cache dir of fuse.s3fs to /mnt/mysql-data/tmp, the problem was solved.

This was my fstab mount entry before, caching to /tmp:

datastore /mnt/s3share fuse _netdev,allow_other,use_cache=/tmp,passwd_file=$PASSWDFILE 0 0

And this is the new entry, caching to /mnt/mysql-data/tmp:

datastore /mnt/s3share fuse _netdev,allow_other,use_cache=/mnt/mysql-data/tmp,passwd_file=$PASSWDFILE 0 0
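To double-check which cache directory a given fstab line actually sets, the options field can be parsed before remounting. This sketch only parses a local copy of the line above; the remount itself is left as a comment:

```shell
# The new fstab line, copied from above (single quotes keep $PASSWDFILE literal):
fstab_line='datastore /mnt/s3share fuse _netdev,allow_other,use_cache=/mnt/mysql-data/tmp,passwd_file=$PASSWDFILE 0 0'

# Field 4 holds the mount options; split on commas and pull out use_cache:
cache_dir=$(echo "$fstab_line" | awk '{print $4}' | tr ',' '\n' \
            | sed -n 's/^use_cache=//p')
echo "s3fs cache directory: $cache_dir"

# Then apply the change with a remount:
#   umount /mnt/s3share && mount /mnt/s3share
```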

  • Strange how the /mnt/s3share is missing from the list of mounts in your original question. – Gerard H. Pille Apr 21 '20 at 08:25
  • Yes, I never thought it was going to be a main concern. I only realized recently that it could pose a big issue. – Christian Noel Apr 21 '20 at 08:30
  • I also added it under update #1. – Christian Noel Apr 21 '20 at 08:31
  • That's what makes solving problems on StackExchange a challenge. Not the problem itself, which would have taken me a couple of minutes, but the contributors holding back information. – Gerard H. Pille Apr 21 '20 at 09:57
  • mysqldump --all-databases > "/mnt/s3share/backup.sql" was originally mysqldump -u$DB_USER --lock-tables=false "$dbname" > "/mnt/mysql-data/$dbname.sql" – Gerard H. Pille Apr 21 '20 at 10:03
  • Sorry about that. I was trying to limit the information to the most basic I could, and I thought it was mainly mysqldump that was causing the issue. – Christian Noel Apr 22 '20 at 00:27

If you use s3fs 1.88 or later, it defaults to flushing partially written files to S3 every 5 GB, controlled via -o max_dirty_data. This allows uploading files larger than the local temporary storage.
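For reference, a hedged mount sketch: the bucket name, mountpoint, cache path, and password file below are placeholders taken from the question, and max_dirty_data takes a value in MB (5120, i.e. 5 GB, being the stated default):

```shell
# Sketch only -- adjust bucket, mountpoint, and credentials to your setup.
# max_dirty_data=1024 flushes to S3 after ~1 GB of dirty data, so the local
# cache never needs room for the entire dump file:
s3fs datastore /mnt/s3share \
    -o passwd_file=/etc/passwd-s3fs \
    -o use_cache=/mnt/mysql-data/tmp \
    -o max_dirty_data=1024
```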

Andrew Gaul