Logging the statements a MySQL database executes

MySQL (5.1 and later) can trace every statement it executes either to files or to tables in the mysql database. To send the trace output to tables, start with:

mysql> set global log_output='table';
Query OK, 0 rows affected (0.00 sec)
Note that this tells MySQL to push all traced SQL into the appropriate tables, which are based on the CSV (comma-separated values) engine. One nice aspect of the CSV tables is that you can take the underlying files and import them directly into Microsoft Excel or OpenOffice Calc to prepare reports and the like.
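Those underlying files live in the server's data directory; a quick way to locate them is shown below (the path is only an example and will differ per installation):

mysql> show variables like 'datadir';
+---------------+-----------------+
| Variable_name | Value           |
+---------------+-----------------+
| datadir       | /var/lib/mysql/ |
+---------------+-----------------+
1 row in set (0.00 sec)

The general log's CSV data then sits in a file such as /var/lib/mysql/mysql/general_log.CSV.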

Next, to enable tracing simply issue another command from a MySQL client prompt. For example, to trace all SQL sent to a MySQL Server, you can do the following:

mysql> set global general_log=1;
Query OK, 0 rows affected (0.00 sec)
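You can double-check at any time that logging is on and where it is going; a hedged sketch (on 5.1.29+ the file destination variable is general_log_file, and the values below are only illustrative):

mysql> show global variables like 'general_log%';
+------------------+---------------------------+
| Variable_name    | Value                     |
+------------------+---------------------------+
| general_log      | ON                        |
| general_log_file | /var/lib/mysql/myhost.log |
+------------------+---------------------------+
2 rows in set (0.00 sec)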
You can then trace to your heart’s content and once you’re satisfied with what you’ve got, just disable tracing again:

mysql> select count(*) from gim2.broker;
+----------+
| count(*) |
+----------+
|       23 |
+----------+
1 row in set (0.00 sec)

mysql> set global general_log=0;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from mysql.general_log\G
***************** 1. row *****************
event_time: 2009-01-21 09:53:03
user_host: root[root] @ [154.231.0.3]
thread_id: 27
server_id: 0
command_type: Query
argument: select count(*) from gim2.broker
***************** 2. row *****************
event_time: 2009-01-21 09:53:08
user_host: root[root] @ [154.231.0.3]
thread_id: 27
server_id: 0
command_type: Query
argument: set global general_log=0
2 rows in set (0.00 sec)
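Since the trace lands in an ordinary table, you can also summarize it with plain SQL before reading individual rows. A small sketch, assuming you want to see who ran queries that touched the broker table (the LIKE pattern is only an example):

mysql> select user_host, count(*) as queries
    ->   from mysql.general_log
    ->  where command_type = 'Query'
    ->    and argument like '%broker%'
    ->  group by user_host;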
Admittedly, the MySQL general query log doesn’t contain a wealth of diagnostic information to use:

mysql> desc mysql.general_log;
+--------------+-------------+------+-----+-------------------+
| Field        | Type        | Null | Key | Default           |
+--------------+-------------+------+-----+-------------------+
| event_time   | timestamp   | NO   |     | CURRENT_TIMESTAMP |
| user_host    | mediumtext  | NO   |     | NULL              |
| thread_id    | int(11)     | NO   |     | NULL              |
| server_id    | int(11)     | NO   |     | NULL              |
| command_type | varchar(64) | NO   |     | NULL              |
| argument     | mediumtext  | NO   |     | NULL              |
+--------------+-------------+------+-----+-------------------+
This isn't to say there's no value there – you can certainly ascertain workload patterns of queries and their originating source, along with other diagnostics, by parsing the argument column (e.g. how often a given table is accessed). But the slow query log/table contains better statistics for troubleshooting SQL:

mysql> desc mysql.slow_log;
+----------------+--------------+------+-----+-------------------+
| Field          | Type         | Null | Key | Default           |
+----------------+--------------+------+-----+-------------------+
| start_time     | timestamp    | NO   |     | CURRENT_TIMESTAMP |
| user_host      | mediumtext   | NO   |     | NULL              |
| query_time     | time         | NO   |     | NULL              |
| lock_time      | time         | NO   |     | NULL              |
| rows_sent      | int(11)      | NO   |     | NULL              |
| rows_examined  | int(11)      | NO   |     | NULL              |
| db             | varchar(512) | NO   |     | NULL              |
| last_insert_id | int(11)      | NO   |     | NULL              |
| insert_id      | int(11)      | NO   |     | NULL              |
| server_id      | int(11)      | NO   |     | NULL              |
| sql_text       | mediumtext   | NO   |     | NULL              |
+----------------+--------------+------+-----+-------------------+
Tracing ‘slow’ running SQL is similar to collecting all SQL but there are one or two extra steps depending on the criteria you want to use. First, you need to tell MySQL how ‘slow’ a query needs to be in order to be collected. This is currently indicated in seconds if you want to trace to a table, but it can also be expressed in microseconds if you want to send the output to a file. We’ll just trace to a table for now and set the criteria to one second:

mysql> set global long_query_time=1;
Query OK, 0 rows affected (0.00 sec)
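If you would rather capture slow queries to a file with sub-second resolution, the related variables can be set the same way. A hedged sketch for a 5.1-or-later server (the file path is only an example):

mysql> set global log_output='FILE';
Query OK, 0 rows affected (0.00 sec)

mysql> set global slow_query_log_file='/var/log/mysql/mysql-slow.log';
Query OK, 0 rows affected (0.00 sec)

mysql> set global long_query_time=0.5;
Query OK, 0 rows affected (0.00 sec)

For the rest of this walk-through we'll stick with the table destination and the one-second threshold.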
Once you do this, you can then start tracing slow running SQL by issuing this command:

mysql> set global slow_query_log=1;
Query OK, 0 rows affected (0.00 sec)
Now you’ll be capturing all SQL that takes longer than one second to execute – such as this one:

mysql> select b.client_id,
-> b.client_first_name,
-> b.client_last_name,
-> (select sum(number_of_units * price)
-> from client_transaction c
-> where c.action = 'buy' and
-> c.client_id = b.client_id) -
-> (select sum(number_of_units * price)
-> from client_transaction d
-> where d.action = 'sell' and
-> d.client_id = b.client_id) portfolio_value
-> from client_transaction a,
-> client b
-> where a.client_id = b.client_id
-> group by b.client_id,
-> b.client_first_name,
-> b.client_last_name
-> having portfolio_value > 1000000;
+-----------+-------------------+------------------+-----------------+
| client_id | client_first_name | client_last_name | portfolio_value |
+-----------+-------------------+------------------+-----------------+
|         5 | ABNER             | ROSSELLETT       |      1252115.50 |
|       500 | CANDICE           | BARTLETT         |      1384877.50 |
+-----------+-------------------+------------------+-----------------+
2 rows in set (3.70 sec)
mysql> select * from mysql.slow_log\G
***************** 1. row *****************
start_time: 2009-01-21 10:10:50
user_host: root[root] @ [154.231.0.3]
query_time: 00:00:03
lock_time: 00:00:00
rows_sent: 2
rows_examined: 1323751
db: gim2
last_insert_id: 0
insert_id: 0
server_id: 0
sql_text: select b.client_id,
b.client_first_name,
b.client_last_name,
(select sum(number_of_units * price)
from client_transaction c
where c.action = 'buy' and
c.client_id = b.client_id) -
(select sum(number_of_units * price)
from client_transaction d
where d.action = 'sell' and
d.client_id = b.client_id) portfolio_value
from client_transaction a,
client b
where a.client_id = b.client_id
group by b.client_id,
b.client_first_name,
b.client_last_name
having portfolio_value > 1000000
1 row in set (0.02 sec)
Notice again that you’ve got better information to work with in the slow_log table in terms of understanding what’s going on underneath the covers – you can see how many rows were examined, the database being accessed, elapsed time, and more.
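And because it is just a table, you can rank the captured statements directly. A small sketch that pulls the five longest-running entries (the 60-character truncation is only an example):

mysql> select start_time, query_time, rows_examined, db,
    ->        left(sql_text, 60) as sql_snippet
    ->   from mysql.slow_log
    ->  order by query_time desc
    ->  limit 5;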

Source: http://dev.mysql.com/tech-resources/articles/mysql_51_diagnostic_tools.html

Method 1: find slow queries via the slow query log:
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 2  (SQL statements taking longer than 2 seconds get recorded; setting this lower to capture more statements while debugging is also an option.)

Method 2: use the binlog that MySQL Replication relies on:
log_bin = /var/log/mysql/mysql-bin.log  (this file has to be decoded with mysqlbinlog to be readable, e.g.
mysqlbinlog mysql-bin.000042 | grep "T_ABC" | grep "column value"
)
MySQL records every INSERT/UPDATE/DELETE statement here (though the statements may not look exactly the way you issued them); this is the log file written for the SLAVE.
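If you only care about a particular time window, mysqlbinlog can also narrow things down before the grep. A hedged sketch (the datetimes are only examples):

mysqlbinlog --start-datetime="2009-01-21 09:00:00" --stop-datetime="2009-01-21 10:00:00" mysql-bin.000042 | grep "T_ABC"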

Method 3 (recommended): record every single statement MySQL executes:
log = /tmp/mysql.log
After restarting mysql, run tail -f /tmp/mysql.log to watch the statements as they come in.
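For reference, all three methods are normally configured in the [mysqld] section of my.cnf. A hedged sketch (the paths are only examples; on MySQL 5.1.29 and later the old log / log_slow_queries options are superseded by general_log_file / slow_query_log_file plus their on/off switches):

[mysqld]
# Method 1: slow query log
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 2

# Method 2: binary log (also used for replication)
log_bin             = /var/log/mysql/mysql-bin.log

# Method 3: general query log (every statement)
general_log         = 1
general_log_file    = /tmp/mysql.log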
Source: http://blog.csdn.net/kenera/article/details/5624981
Another related post: http://blog.csdn.net/mchdba/article/details/38236833
