12

I'm creating a MySQL dump of 5 databases every hour. 3 of those databases change very rarely, so creating a MySQL dump for those 3 DBs is pretty much a waste of time and resources.

Is there a way I can retrieve the Unix epoch timestamp of when a specific DB was last changed/updated? I would compare it with the latest dump file and only create another dump if there are changes.

So, the question again: how can I get the Unix epoch datetime of the last update/change of a specific database?

user80666

8 Answers

6

Note: this only works for MyISAM tables (UPDATE_TIME is NULL for InnoDB).

You can run a MySQL query against the information_schema.tables table:

Example (replace dbname with your database name):

SELECT UNIX_TIMESTAMP(MAX(UPDATE_TIME)) as last_update 
FROM information_schema.tables 
WHERE TABLE_SCHEMA='dbname' 
GROUP BY TABLE_SCHEMA;
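Wrapped in a script, the query's result can be compared against the dump file's modification time to decide whether a new dump is needed at all. A minimal sketch, assuming MyISAM tables and hypothetical `DB`/`DUMP` names and paths:

```shell
#!/bin/sh
# Dump a database only when information_schema reports a change.
# Assumes MyISAM tables (UPDATE_TIME is NULL for InnoDB) and that
# DB, DUMP, and credentials are adjusted for your setup.
DB="dbname"                 # hypothetical database name
DUMP="/backup/${DB}.sql"    # hypothetical dump path

# True when the DB changed after the dump was written.
needs_dump() {              # needs_dump <last_update_epoch> <dump_epoch>
    [ "$1" -gt "$2" ]
}

# COALESCE(...) maps a NULL UPDATE_TIME (no data, or InnoDB) to 0.
last_update=$(mysql -N -B -e \
  "SELECT COALESCE(UNIX_TIMESTAMP(MAX(UPDATE_TIME)), 0)
   FROM information_schema.tables WHERE TABLE_SCHEMA='${DB}';")
dump_time=$(stat -c %Y "$DUMP" 2>/dev/null || echo 0)

if needs_dump "$last_update" "$dump_time"; then
    mysqldump "$DB" > "$DUMP"
fi
```

A missing dump file counts as epoch 0, so the first run always produces a dump.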
Philip Reynolds
  • I did a simple test with MySQL's world DB (updated the population of a City) and that query returns NULL after the update. – HTTP500 Jun 09 '11 at 22:08
  • Db name is 'world' ? Could you post up the output of `SELECT UPDATE_TIME FROM information_schema WHERE TABLE_SCHEMA='world'` onto codepad.org? – Philip Reynolds Jun 09 '11 at 22:11
  • @Phil, I think you meant FROM TABLES? Anyway, here is the output: http://codepad.org/WP14ErTk Yes, the DB name is world. – HTTP500 Jun 09 '11 at 22:24
  • Running this on each database on my server I get a complete mix of results, with some databases returning a value and others returning either NULL or nothing at all. Not exactly something that can be relied on. – John Gardeniers Jun 09 '11 at 22:58
  • This seems to vary depending on table type and platform, i.e. it doesn't work for InnoDB tables. http://stackoverflow.com/questions/5622178/information-schema-tables-and-update-time-behaviour – Philip Reynolds Jun 09 '11 at 23:03
  • @Phil My tables are indeed InnoDB – HTTP500 Jun 09 '11 at 23:09
4

I use:

    mysql -N -B <NAMEOFTHEDATABASE> -e "SELECT MAX(UPDATE_TIME) FROM information_schema.TABLES WHERE TABLE_SCHEMA = DATABASE();"

The -N (skip column names) and -B (batch mode) flags make the grep/awk post-processing unnecessary.

Regards!

user217524
  • I only see entries for MyISAM tables; all the InnoDB tables have UPDATE_TIME as NULL. – slim Jun 11 '18 at 19:37
3

It's not exactly an answer to what you're looking for, but I think it's what you need: enable binary logging, back up the binlogs, and create full dumps once a week or so.
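This scheme might be sketched as the following hourly job. The paths, the Sunday full-dump schedule, and the my.cnf snippet in the comment are all assumptions to adapt:

```shell
#!/bin/sh
# Full dump weekly, binlog copies in between.
# Assumes binary logging is already enabled in my.cnf, e.g.:
#   [mysqld]
#   log-bin = /var/log/mysql/mysql-bin
#   expire_logs_days = 14

# Full dump on Sundays (ISO day-of-week 7), binlogs otherwise.
is_full_dump_day() {        # is_full_dump_day <ISO day-of-week 1-7>
    [ "$1" -eq 7 ]
}

if is_full_dump_day "$(date +%u)"; then
    # --flush-logs starts a fresh binlog right after the dump, so the
    # binlogs only ever contain changes made since the last full dump.
    mysqldump --all-databases --flush-logs --master-data=2 > /backup/full.sql
else
    mysqladmin flush-logs
    cp /var/log/mysql/mysql-bin.* /backup/binlogs/
fi
```

Restoring then means replaying the saved binlogs (with mysqlbinlog) on top of the last full dump.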

pQd
2

The way I do this is not automated: you still have to read the modified times, but FWIW...

SELECT TABLE_SCHEMA, UNIX_TIMESTAMP(MAX(UPDATE_TIME)) as last_update 
FROM information_schema.tables 
WHERE TABLE_SCHEMA IN ('shovel', 'sleek', 'whiskerando', 'pickle') 
GROUP BY TABLE_SCHEMA;

This will list the last-modified times for the four databases named in parentheses.

2

There is a tool from Maatkit that can do quick checksums on tables: mk-table-checksum.

You could lock the tables, run mk-table-checksum and store the checksums, then unlock the tables and compare the stored values on the next run to see whether you need to run mysqldump or not.
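The same store-and-compare idea can be sketched with MySQL's built-in CHECKSUM TABLE statement instead of mk-table-checksum; `DB` and `STATE` below are hypothetical names, and locking is omitted for brevity:

```shell
#!/bin/sh
# Dump a database only when its per-table checksums differ from the
# checksums recorded after the previous dump.
DB="dbname"                         # hypothetical database name
STATE="/backup/${DB}.checksums"     # hypothetical state file

# True when the current checksums differ from the stored ones.
has_changed() {             # has_changed <current-checksums> <stored-file>
    [ "$1" != "$(cat "$2" 2>/dev/null)" ]
}

tables=$(mysql -N -B -e \
  "SELECT TABLE_NAME FROM information_schema.tables
   WHERE TABLE_SCHEMA='${DB}';")
current=$(for t in $tables; do
    mysql -N -B -e "CHECKSUM TABLE \`${DB}\`.\`${t}\`;"
done)

if has_changed "$current" "$STATE"; then
    mysqldump "$DB" > "/backup/${DB}.sql"
    printf '%s\n' "$current" > "$STATE"
fi
```

As the comment below notes, checksumming large InnoDB tables is itself a full scan, so this mainly pays off when the tables are small relative to the dump.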

HTTP500
  • Would work although it probably would place the same kind of stress onto the database as an entire mysqldump would do. – the-wabbit Jun 10 '11 at 07:27
  • @the-wabbit Not sure about that: Check the [doc](https://linux.die.net/man/1/mk-table-checksum): "..Checksums typically take about **twice as long as `COUNT (*)`** on very large InnoDB tables in my tests. For smaller tables, `COUNT (*)` is a good bit faster than the checksums". By the way, it has been replaced (in ubuntu at least) by [pt-table-checksum](https://www.percona.com/doc/percona-toolkit/LATEST/pt-table-checksum.html) from [percona toolkit](https://www.percona.com/doc/percona-toolkit/LATEST/index.html) – Marinos An Jul 29 '20 at 09:13
1

I found this method at http://mysqladministrators.blogspot.it/2012/02/get-database-size.html

I'm not sure if it helps you, since I'm not that well versed in MySQL.

Get the database size, free space and last update

You can get the current database size by querying the TABLES table of the INFORMATION_SCHEMA database, either from your query browser or the CLI.

SELECT table_schema "Data Base Name", 
sum( data_length + index_length ) / 1024 / 1024 "Data Base Size in MB" 
FROM information_schema.TABLES 
GROUP BY table_schema ;

Get the database free space

SELECT table_schema "Data Base Name", 
sum( data_length + index_length ) / 1024 / 1024 "Data Base Size in MB", 
sum( data_free )/ 1024 / 1024 "Free Space in MB" 
FROM information_schema.TABLES 
GROUP BY table_schema;

Get the database's last update, ordered by update time and then by create time.

SELECT MAX(UPDATE_TIME), MAX(CREATE_TIME), TABLE_SCHEMA 
FROM information_schema.TABLES 
GROUP BY TABLE_SCHEMA 
ORDER BY 1, 2;
1

Since UPDATE_TIME is not available for InnoDB tables, I check the DB files directly:

find /var/lib/mysql -name \*.frm | xargs ls -l --time-style="+%m-%d-%Y"  | awk '{print $6,$7}'

This gives me dates, which I use to verify that tables are inactive and can be archived.

You are looking for a Unix timestamp, which just needs a change to ls's time-style:

find /var/lib/mysql -name \*.frm | xargs ls -l --time-style="+%s"  | awk '{print $6,$7}'

This assumes your database files are in /var/lib/mysql, but you can get the actual location by running the following query:

mysql -e "SHOW VARIABLES LIKE 'datadir';"
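Building on that, a sketch that dumps only when some file under the database's directory is newer than the existing dump. `DB` and `DUMP` are hypothetical, and note that InnoDB data kept in a shared ibdata file won't show up under the per-database directory:

```shell
#!/bin/sh
# Dump a database only if any of its on-disk files is newer than the dump.
DB="dbname"                 # hypothetical database name
DUMP="/backup/${DB}.sql"    # hypothetical dump path
DATADIR=$(mysql -N -B -e "SELECT @@datadir;")

# Newest mtime (whole-second epoch) of any file in the DB's directory.
newest=$(find "${DATADIR}/${DB}" -type f -printf '%T@\n' 2>/dev/null \
         | sort -n | tail -1 | cut -d. -f1)
dump_time=$(stat -c %Y "$DUMP" 2>/dev/null || echo 0)

if [ "${newest:-0}" -gt "$dump_time" ]; then
    mysqldump "$DB" > "$DUMP"
fi
```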
DDS
  • Wouldn't it be better to check the *.MYD files? – simon Mar 20 '21 at 21:31
  • It would be a bit better for readability and sortability. Active tables have very recent time stamps and inactive tables stick out. – DDS Mar 31 '23 at 17:23
0

This question comes up from time to time and the short answer is that this information cannot be reliably obtained by any one method for the simple reason that it is not necessarily recorded.

For MyISAM tables you can use the query Phil posted. I don't know about any of the others, but for InnoDB at least there is no query that will return the information you want. In this case you might consider using triggers to record a timestamp in a table whenever the data is changed, but realistically the performance loss will most likely be greater than just going ahead with the dump.
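For completeness, the trigger idea might look like the following schema fragment. All names (`db_changes`, `mytable`) are hypothetical, and you would need one trigger per watched table and per statement type (INSERT/UPDATE/DELETE):

```shell
mysql dbname <<'SQL'
-- Bookkeeping table holding one row per database.
CREATE TABLE IF NOT EXISTS db_changes (
    db_name     VARCHAR(64) PRIMARY KEY,
    last_change TIMESTAMP NOT NULL
);

-- Stamp the bookkeeping row whenever mytable receives an insert.
CREATE TRIGGER mytable_touch_ins AFTER INSERT ON mytable
FOR EACH ROW REPLACE INTO db_changes VALUES (DATABASE(), NOW());
SQL
```

The dump script can then read `db_changes.last_change` instead of information_schema, at the cost the answer describes: every write now also pays for the bookkeeping write.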

John Gardeniers
  • So I guess I need to know "How do others do it"? Is binary logs the answer? – user80666 Jun 10 '11 at 00:46
  • 1
    @user80666, I don't know what the answer is, although it's worth editing your question to let people know what DB engines you're dealing with so as to try and get better answers. Based on your concerns about performance I'd suggest simply setting up a slave and running your backups from that. – John Gardeniers Jun 10 '11 at 00:53