
I'm doing some optimization work, and I've noticed that in some MySQL console output people post in articles and questions (which I cannot find again now that I'm actually looking), there are high-precision execution times (0.05985215 sec instead of 0.06 sec).

How can I see these more precise times for my queries on the command line?

EDIT

Example of this is:

+----------+
| COUNT(*) |
+----------+
| 11596    |
+----------+
1 row in set (0.05894344 sec)

Using profiling gets me part of the way there, but its output is too long, and I have to remember to enable it. I am just looking for a simple high-precision duration.

SET profiling = 1;
<query>
SHOW PROFILES;

Gives me something like this:

+----------------------+-----------+
| Status               | Duration  |
+----------------------+-----------+
| (initialization)     | 0.000005  |
| checking permissions | 0.00001   |
| Opening tables       | 0.000499  |
| Table lock           | 0.000071  |
| preparing            | 0.000018  |
| Creating tmp table   | 0.00002   |
| executing            | 0.000006  |
| Copying to tmp table | 6.565327  |
| Sorting result       | 0.000431  |
| Sending data         | 0.006204  |
| query end            | 0.000007  |
| freeing items        | 0.000028  |
| closing tables       | 0.000015  |
| logging slow query   | 0.000005  |
+----------------------+-----------+
14 rows in set (0.00 sec)
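
When profiling is enabled, the per-step durations can be summed to recover a total at microsecond resolution. A minimal sketch, assuming the profile table has been captured as text (a few sample rows are embedded here; the awk field positions match the mysql client's `|`-delimited table layout):

```shell
#!/bin/sh
# Sum the Duration column of a profiling table to get the total elapsed
# time at microsecond resolution. In practice you would pipe the mysql
# client's table output in instead of this embedded sample.
cat <<'EOF' |
| (initialization)     | 0.000005  |
| Copying to tmp table | 6.565327  |
| Sending data         | 0.006204  |
EOF
awk -F'|' '$3 ~ /[0-9]/ { sum += $3 } END { printf "%.6f sec\n", sum }'
```

For the three sample rows this prints `6.571536 sec`. The pattern `$3 ~ /[0-9]/` skips the `+---+` separator and header rows, which contain no digits in the Duration field.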
Ben Dauphinee
  • mind posting links to the _articles and questions that contain high-precision execution times (0.05985215 sec instead of 0.06 sec)_? I cannot seem to find any. I guess the fastest way to answer your question is to just ask the posters of those articles and questions how they did it. – ax. May 25 '11 at 12:48
  • The real question is whether this is supported by the server. Profiling shows the CPU time each step took, not including **concurrent queries** running at the same time. So we have to check whether we still get **getrusage** timing without profiling. – vbence May 25 '11 at 15:15

4 Answers

It seems that the best answer to this is to enable profiling; no other leads have panned out.

Best answer: use query profiling.

SET profiling = 1;
<query>
SHOW PROFILES;
Ben Dauphinee

This question is best answered by looking at the source of the mysql command-line client. The relevant piece of code,

static void nice_time(double sec,char *buff,bool part_second)
{
  // ...
  if (part_second)
    sprintf(buff,"%.2f sec",sec);
  else
    sprintf(buff,"%d sec",(int) sec);
}

has the number of digits after the decimal point for the sec value hard-coded to 2. This leads me to conclude that higher-precision times are not possible with a stock MySQL install.
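
The effect of that hard-coded precision is easy to reproduce with shell `printf`, which accepts the same `%f` conversion as `sprintf` (the 8-digit variant is hypothetical, standing in for a patched client):

```shell
#!/bin/sh
# The stock client formats the elapsed time with "%.2f sec":
printf '%.2f sec\n' 0.05894344   # prints "0.06 sec"
# A client patched to use "%.8f sec" would keep the full precision:
printf '%.8f sec\n' 0.05894344   # prints "0.05894344 sec"
```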

Of course, you could patch this code, make it configurable, etc., and install from source. I guess this is what the people in the articles and questions you mentioned are doing. Your best chance to find out is to just ask them (see my comment on your question).

ax.
  • @ax Apparently I only know enough to solve the first part of this. I've managed to increase the number of significant digits, but I think that the result is being truncated somewhere else before it is fed into here. – Ben Dauphinee May 25 '11 at 17:32
  • @Ben Dauphinee did you ask the guys who use the _high-precision times_ how they did it? that should be easier than me trying to patch it for you :) – ax. May 25 '11 at 21:18
  • @ax I've posted a thread in the MySQL forums to ask if or how this can be done. I'll be cross-posting the results when I get them, and accepting your answer because it led in that direction. – Ben Dauphinee May 25 '11 at 21:23
  • Tools like phpMyAdmin allow custom functions to be added through the GUI. You could simply fork it, add it in the editor and then call it from other parts in the GUI. – kaiser Jun 03 '14 at 06:22

Without seeing the dumps you're talking about, it's probably a user-defined function? See this thread ( http://lists.mysql.com/internals/33707 ) for a couple of gotchas and how to do it.

Dycey
  • But that will only tell you how much time the call took to return, just as if you saved the microtime before and after the query in your program. In other words, a slow query running on a different thread will also increase the measured running time of your query. A real solution has to be **getrusage()**-based, so it counts only the CPU time spent in your thread (running your query). – vbence May 25 '11 at 14:36

Not elegant, but a working solution is to patch /usr/bin/mysql:

# copy the original mysql binary to your home dir
cp /usr/bin/mysql ~/mysql
# patch it
sed -i -e 's/%.2f sec/%.8f sec/1' ~/mysql 
# run it from the home directory
~/mysql 

This works because currently there is only one '%.2f sec' format string in the mysql binary, but that may change over time. You can revert to the original binary by applying the reverse patch:

sed -i -e 's/%.8f sec/%.2f sec/1' ~/mysql
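
The substitution can be rehearsed safely on a throwaway file first (the file name is illustrative). Note that this only works on the binary because '%.2f sec' and '%.8f sec' have the same length, so the in-place edit does not shift any bytes; GNU `sed -i` is assumed (BSD sed needs `-i ''`):

```shell
#!/bin/sh
# Rehearse the format-string swap on a stand-in text file before
# touching a copy of the real binary.
demo=$(mktemp)
printf 'format: %s\n' '%.2f sec' > "$demo"
# Escaping the dot is slightly safer than the bare '.' used above,
# which would match any single character.
sed -i -e 's/%\.2f sec/%.8f sec/' "$demo"
cat "$demo"                       # prints "format: %.8f sec"
rm -f "$demo"
```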
Nopius
  • For me this is printing out results like "2 rows in set (0.00000000 sec)", but SHOW PROFILES says it's taking about a millisecond. So I suspect the source data feeding this is not high enough resolution for this to actually help. – Jesse Pangburn Jul 01 '16 at 01:06