I would like to set a maximum execution time for SQL queries, similar to set_time_limit() in PHP. How can I do that?
- Possible duplicate of [MySQL - can I limit the maximum time allowed for a query to run?](https://stackoverflow.com/questions/4794747/mysql-can-i-limit-the-maximum-time-allowed-for-a-query-to-run) – Jed Fox Aug 18 '19 at 21:15
6 Answers
I thought it had been around a little longer, but according to this,

MySQL 5.7.4 introduces the ability to set server-side execution time limits, specified in milliseconds, for top-level read-only SELECT statements.

```sql
SELECT
  /*+ MAX_EXECUTION_TIME(1000) */ -- in milliseconds
  *
FROM table;
```

Note that this only works for read-only SELECT statements.

Update: This variable was added in MySQL 5.7.4 (as max_statement_time) and renamed to max_execution_time in MySQL 5.7.8. (source)

- Will it be active only for this query, or for all following queries sharing the same session (process id)? – mgutt Feb 06 '17 at 20:19
- It only applies to the query in which it is specified. If you go to my cited source, you can set a global or session max_execution_time. – Westy92 Feb 06 '17 at 20:35
- Yes, I read that, but there is no explanation of what `session` means there; for example, `KILL` explicitly needs the modifier `QUERY`: https://dev.mysql.com/doc/refman/5.7/en/kill.html And in this answer `session` was explained as another word for `connection`: http://stackoverflow.com/a/8800971/318765 – mgutt Feb 06 '17 at 21:02
- I think there is an error in this answer. When I try putting `MAX_EXECUTION_TIME=1000` inside the `SELECT` itself, that yields an `ERROR 1064 (42000): You have an error in your SQL syntax` error. `set MAX_EXECUTION_TIME=1000;` first, then the `SELECT` statement works. Now it is set in the session, and probably needs to be cleared again with `set MAX_EXECUTION_TIME=0;` Also, it doesn't look like that syntax is supported in the [select syntax documentation](https://dev.mysql.com/doc/refman/5.7/en/select.html) – Peter V. Mørch Oct 06 '17 at 18:45
- You will have to rewrite that query like `select /*+ MAX_EXECUTION_TIME(1000) */ * from table` – Manu Mohan Jan 10 '18 at 15:23
- A similar feature was introduced in Percona Server 5.6.13 (a branch based on MySQL 5.6): https://www.percona.com/doc/percona-server/5.6/management/statement_timeout.html – Bill Karwin Jul 02 '19 at 20:22
- I edited the answer with @ManuMohanThekkedath's suggestion, confirmed here: https://dev.mysql.com/doc/refman/5.7/en/optimizer-hints.html#optimizer-hints-execution-time – Gregory Cosmo Haun Oct 24 '20 at 00:58
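For reference, the session and global forms discussed in the comments above look like this (MySQL 5.7.8+; the value is in milliseconds, and 0 means no limit):

```sql
-- Applies to all subsequent SELECTs in the current connection
SET SESSION max_execution_time = 1000;

-- Applies server-wide to new sessions (requires privileges)
SET GLOBAL max_execution_time = 1000;

-- Disable the timeout again for this session
SET SESSION max_execution_time = 0;
```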
If you're using the MySQL native driver (common since PHP 5.3) and the mysqli extension, you can accomplish this with an asynchronous query:
```php
<?php
// Here's an example query that will take a long time to execute.
$sql = "
  select *
  from information_schema.tables t1
  join information_schema.tables t2
  join information_schema.tables t3
  join information_schema.tables t4
  join information_schema.tables t5
  join information_schema.tables t6
  join information_schema.tables t7
  join information_schema.tables t8
";

$mysqli = mysqli_connect('localhost', 'root', '');
$mysqli->query($sql, MYSQLI_ASYNC | MYSQLI_USE_RESULT);
$links = $errors = $reject = [];
$links[] = $mysqli;

// wait up to 1.5 seconds
$seconds = 1;
$microseconds = 500000;

$timeStart = microtime(true);

if (mysqli_poll($links, $errors, $reject, $seconds, $microseconds) > 0) {
    echo "query finished executing. now we start fetching the data rows over the network...\n";
    $result = $mysqli->reap_async_query();
    if ($result) {
        while ($row = $result->fetch_row()) {
            // print_r($row);
            if (microtime(true) - $timeStart > 1.5) {
                // we exceeded our time limit in the middle of fetching our result set.
                echo "timed out while fetching results\n";
                var_dump($mysqli->close());
                break;
            }
        }
    }
} else {
    echo "timed out while waiting for query to execute\n";
    // kill the thread to stop the query from continuing to execute on
    // the server, because we are abandoning it.
    var_dump($mysqli->kill($mysqli->thread_id));
    var_dump($mysqli->close());
}
```
The flags I'm giving to mysqli_query accomplish important things. MYSQLI_ASYNC tells the client driver to enable asynchronous mode, which forces us to use more verbose code but lets us use a timeout (and also issue concurrent queries, if you want!). MYSQLI_USE_RESULT tells the client not to buffer the entire result set into memory.

By default, PHP configures its MySQL client libraries to fetch the entire result set of your query into memory before it lets your PHP code start accessing rows in the result. This can take a long time for a large result. We disable it; otherwise we risk timing out while waiting for the buffering to complete.
Note that there are two places where we need to check for exceeding a time limit:

- the actual query execution
- fetching the results (data)
You can accomplish something similar with PDO and the regular mysql extension. They don't support asynchronous queries, so you can't set a timeout on the query execution time. However, they do support unbuffered result sets, so you can at least implement a timeout on fetching the data.
For many queries, MySQL is able to start streaming the results to you almost immediately, so unbuffered queries alone will let you somewhat effectively implement timeouts on certain queries. For example,

```sql
select * from tbl_with_1billion_rows
```

can start streaming rows right away, but

```sql
select sum(foo) from tbl_with_1billion_rows
```

needs to process the entire table before it can start returning the first row to you. The latter case is where the timeout on an asynchronous query will save you. It will also save you from plain old deadlocks and other such problems.
P.S. I didn't include any timeout logic on the connection itself.
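For the PDO case, here is a minimal sketch of a fetch-side timeout (the connection parameters and table name are placeholders; note that only the fetch loop is bounded, not the query execution itself):

```php
<?php
$pdo = new PDO('mysql:host=localhost;dbname=test', 'root', '');
// Disable result buffering so rows stream as the server produces them
$pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);

$stmt = $pdo->query('SELECT * FROM big_table');
$timeStart = microtime(true);
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    if (microtime(true) - $timeStart > 1.5) {
        echo "timed out while fetching results\n";
        // NOTE: the query may keep running server-side; KILL it if needed
        break;
    }
    // process $row ...
}
```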

- This is incredible... I can't wait to try it... great for graph data queries, for example. – Amy Neville Jan 31 '16 at 18:51
- This won't stop a MySQL query that's in process on the SQL server. An excellent solution to a different question, though. – TheSatinKnight Sep 08 '17 at 21:04
Please rewrite your query like

```sql
select /*+ MAX_EXECUTION_TIME(1000) */ * from table
```

This statement will kill your query after the specified time (in milliseconds).

- Can you explain what this does? Does it kill the query after 1 second returning nothing, or is it nice and returns the partial result set after 1 second? – kurdtpage Aug 08 '19 at 21:55
- @kurdtpage It kills the query and the client will error out. Note that this is only "execution time": if the query has executed inside MySQL and is currently only sending results to the client, it will run for as long as it wants. – Juliano Aug 02 '20 at 22:22
You can find the answer in this other S.O. question:
MySQL - can I limit the maximum time allowed for a query to run?
It suggests a cron job that runs every second on your database server, connecting and doing something like this:
- Run `SHOW PROCESSLIST`
- Find all connections with a query time larger than your maximum desired time
- Run `KILL [process id]` for each of those processes
Update in May 2023: check `pt-kill` by Percona, as recommended by Rick James in the answer below.
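The filtering step of that loop can be sketched offline, here against a captured processlist dump rather than a live server (the file name and sample rows are made up for illustration):

```shell
# Fake SHOW PROCESSLIST output, tab-separated: Id, User, Time, Info
printf '12\tapp\t5\tSELECT * FROM small_table\n13\tapp\t4200\tSELECT sum(foo) FROM big_table\n14\tslave\t9000\tBinlog Dump\n' > processlist.tsv

# Emit a KILL for every non-replication query running longer than 3600 seconds
awk -F'\t' '$2 != "slave" && $3 > 3600 {print "KILL " $1 ";"}' processlist.tsv
# Prints: KILL 13;
```

In a real cron job you would feed the live processlist into the same kind of filter and pipe the resulting KILL statements back into `mysql`, as the one-liner answer below demonstrates.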

`pt-kill` has an option for such. But it is on-demand, not continually monitoring. It does what @Rafa suggested. However, see `--sentinel` for a hint of how to come close with cron.

You can run the following one-liner from the Linux command line, and even stick it into your crontab to run at regular intervals.
This example will instantly kill all SELECT queries which have been running for at least 1 hour (3600 seconds):

```shell
mysql -e "SELECT group_concat(concat('KILL ',id,';') separator ' ') AS cmd FROM information_schema.processlist WHERE user NOT IN ('slave','replica') AND info LIKE 'SELECT %' AND time > 3600;" | grep KILL | mysql
```
From the above, you may want to replace:

- `NOT IN (...)` with a "safe list" of MySQL users you NEVER want to touch, e.g. replication users
- `time > 3600` with the number of seconds beyond which you consider a query slow enough to target
Note: blindly running KILL commands can have unintended effects. Use with extreme caution!
