slow_log and general_log tables in MySQL 5.1

“Server log tables” is one of the finest features in MySQL 5.1, but it looks like people are not using it regularly, or perhaps they are happy with files only. Here, I’m explaining how to enable this feature, how to use it, and how it helps in filtering the required statements from the slow log and general log.

Generally, to enable the slow log or the general log, we have to set the following parameters in my.cnf.

For slow logs:
slow_query_log        = 1/ON
log_slow_queries        = /var/log/mysql/mysql-slow.log (location)
long_query_time         = 1 (values in seconds)

For general logs:
general_log             = 1/ON
general_log_file        = /var/log/mysql/mysql.log
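Putting the above together, a minimal my.cnf fragment might look like the sketch below (the paths and the one-second threshold are just examples from this post; adjust them for your own setup):

```ini
[mysqld]
# Slow query log: statements running longer than long_query_time seconds
slow_query_log   = 1
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time  = 1

# General query log: every statement the server receives
general_log      = 1
general_log_file = /var/log/mysql/mysql.log
```
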

From MySQL 5.1, a new parameter called “log_output” has been introduced for server log tables, with which you can choose what kind of output you want. The log_output parameter value can be TABLE (log to tables), FILE (log to files), or NONE (do not log to tables or files).

Here, you can also set multiple values separated by a comma; that is, if you want to log statements to both TABLE and FILE, you can set the variable like this:

mysql> SET GLOBAL log_output = 'TABLE,FILE';
Query OK, 0 rows affected (0.00 sec)

mysql> show global variables like '%output%';
+---------------+------------+
| Variable_name | Value      |
+---------------+------------+
| log_output    | FILE,TABLE |
+---------------+------------+
1 row in set (0.00 sec)
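As a side note, in 5.1 these logs can also be toggled at runtime without a server restart, which is handy for short capture sessions. A sketch (assuming a session with the SUPER privilege; values are examples only):

```sql
-- Capture into the mysql.slow_log / mysql.general_log tables for a while
SET GLOBAL log_output      = 'TABLE';
SET GLOBAL slow_query_log  = 'ON';
SET GLOBAL long_query_time = 1;
SET GLOBAL general_log     = 'ON';

-- ... run / observe the workload of interest ...

-- Turn the general log off again; it records every statement and grows fast
SET GLOBAL general_log = 'OFF';
```
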

So now, when any query runs on the server and takes more than one second, it will be logged in both places. I have tested it by creating a temporary table on my local PC and running some statements on it. In the file, it looks like this:

root@nilnandan-HP-ProBook-4410s:/var/log/mysql# tail -7  mysql-slow.log
# Time: 120321 15:04:30
# User@Host: root[root] @ localhost []
# Query_time: 6.036248  Lock_time: 0.000119 Rows_sent: 5  Rows_examined: 10485765
SET timestamp=1332322470;
select distinct id from nil_test;

In the table, it looks like this:

mysql> select * from mysql.slow_log\G
*************************** 1. row ***************************
    start_time: 2012-03-21 15:04:30
     user_host: root[root] @ localhost []
    query_time: 00:00:06
     lock_time: 00:00:00
     rows_sent: 5
 rows_examined: 10485765
            db: nil
last_insert_id: 0
     insert_id: 0
     server_id: 0
      sql_text: select distinct id from nil_test
2 rows in set (0.00 sec)



From the DBA perspective, slow-log details in a table are far better than in a regular slow log file. It is much easier to get specific details from the slow_log table by running simple SQL queries, whereas digging the same details out of the OS file is difficult, especially when you have a very big slow log.

e.g., to get the queries logged between 10:00 am and 11:00 am today, you can simply run a query like:

SELECT query_time, lock_time, rows_sent, rows_examined, sql_text FROM mysql.slow_log WHERE start_time BETWEEN '2012-03-21 10:00:00' AND '2012-03-21 11:00:00';

To get the queries which took more than 5 seconds:

SELECT query_time, lock_time, rows_sent, rows_examined, sql_text FROM mysql.slow_log WHERE query_time > '00:00:05';
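Since it is just a table, you can aggregate too. For example, a sketch that counts how often each statement text shows up in the slow log (note that sql_text holds the raw statement, so only statements with identical literals group together; this is an illustration, not a replacement for a digest tool):

```sql
-- Which statements appear most often in the slow log, and how bad are they?
SELECT sql_text,
       COUNT(*)           AS occurrences,
       MAX(query_time)    AS worst_time,
       SUM(rows_examined) AS total_rows_examined
FROM mysql.slow_log
GROUP BY sql_text
ORDER BY occurrences DESC
LIMIT 10;
```
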


7 thoughts on “slow_log and general_log tables in MySQL 5.1”

  1. The log tables *are* useful, and I make use of the general_log table in oak-hook-general-log to make a “tail -f” on the structured general log.
    However, you write: “Specially when you have very big slow log.”
    Note that writing to log tables is much slower than writing to log file, so be careful.
    Also, log tables are limited in size (actually in number of rows).

    1. Agreed that writing into tables is slower than writing into a file, but generally we do not get so many slow queries that writing them into a table would affect performance. Can you please tell me what the limits are for log tables? I didn’t find that info in the MySQL documentation.

  2. I really believe that if a DBA is serious about slow queries, they use a tool like pt-query-digest rather than writing something themselves to process statistics and all that jazz, and so having the data in the database is of limited use IMHO.

  3. I think having the slow log as `TABLE` output may be a good idea, but for the general log, `TABLE` output does not make sense.
    Imagine your production server is busy; the server will spend most of its resources on writing to the general log `TABLE` itself, which in turn can reduce performance drastically.

    @Rob Smith
    I agree that a DBA not only wants to see the queries logged between a specific time range, but also things like how many queries are repeating, the query pattern, filtering based on MySQL user, etc. can be useful.
    I am using MONyog which can read log (slow and general) from `FILE` as well as `TABLE` and gives me aggregated result-set using which I can easily determine problematic queries.
    I am not saying not to use pt-query-digest, but for those who like a GUI, it’s worth trying.
