Cleaning Up MySQL Table Fragmentation
Source: Internet · Editor: 程序博客网 · 2024/05/17 04:59
Recently, MySQL performance in our production environment degraded and some SQL statements ran slowly. First, check whether the slow query log is enabled:
show variables like 'slow_query_log%';
Then check the configured slow-query time threshold:
show variables like 'long_query_time%';
The slow query log turned out to be enabled, so I located the file with find / -name slow-query.log.
The statements in the log were slow even though they all used indexes, and they pointed at just one or two tables. That suggested the deletes run against those tables during each backup had left behind fragmentation that was never cleaned up.
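If the slow query log had been disabled, it could have been switched on at runtime. A minimal sketch; the 2-second threshold and the log file path below are assumptions, adjust them for your environment:

```sql
-- Enable the slow query log dynamically (lost on restart unless the
-- same settings are also persisted in my.cnf).
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow-query.log';
SET GLOBAL long_query_time = 2;  -- log statements slower than 2 seconds
```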
So let's defragment, handling each table according to its storage engine.
Defragmenting MyISAM
OPTIMIZE TABLE table_name;
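To see whether the OPTIMIZE was worthwhile, you can compare the table's Data_free counter before and after the rebuild; a sketch, with table_name as a placeholder:

```sql
-- Data_free reports reclaimable bytes; a large value means fragmentation.
SHOW TABLE STATUS LIKE 'table_name'\G
-- Rebuild the MyISAM table, reclaim unused space, and sort index pages.
OPTIMIZE TABLE table_name;
-- Data_free should now be at or near 0.
SHOW TABLE STATUS LIKE 'table_name'\G
```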
Defragmenting InnoDB
I came across this passage:
if you frequently delete rows (or update rows with variable-length data types), you can end up with a lot of wasted space in your data file(s), similar to filesystem fragmentation.
If you’re not using the innodb_file_per_table option, the only thing you can do about it is export and import the database, a time-and-disk-intensive procedure.
But if you are using innodb_file_per_table, you can identify and reclaim this space!
Prior to 5.1.21, the free space counter is available from the table_comment column of information_schema.tables. Here is some SQL to identify tables with at least 100M (actually 97.65M) of free space:
SELECT table_schema, table_name, table_comment FROM information_schema.tables WHERE engine LIKE 'InnoDB' AND table_comment RLIKE 'InnoDB free: ([0-9]{6,}).*';
Starting with 5.1.21, this was moved to the data_free column (a much more appropriate place):
SELECT table_schema, table_name, data_free/1024/1024 AS data_free_MB FROM information_schema.tables WHERE engine LIKE 'InnoDB' AND data_free > 100*1024*1024;
You can reclaim the lost space by rebuilding the table. The best way to do this is using ‘alter table’ without actually changing anything:
ALTER TABLE foo ENGINE=InnoDB;
This is what MySQL does behind the scenes if you run ‘optimize table’ on an InnoDB table. It will result in a read lock, but not a full table lock. How long it takes is completely dependent on the amount of data in the table (but not the size of the data file). If you have a table with a high volume of deletes or updates, you may want to run this monthly, or even weekly.
So analyze first, with the following statement:
SELECT table_schema, table_name, data_free/1024/1024 AS data_free_MB FROM information_schema.tables WHERE engine LIKE 'InnoDB' AND data_free > 100*1024*1024;
It returned two rows, and they were exactly those two tables with data_free over 100 MB. So I ran ALTER TABLE tablename ENGINE=InnoDB;
which effectively rebuilds the table. After that, the queries ran at normal speed again.
Note that if a table's reported data and index sizes do not match the actual amount of data it holds, its fragmentation should also be cleaned up, as discussed in this blog post: http://blog.csdn.net/u011575570/article/details/48092469
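One way to spot such tables is to put the on-disk data size, index size, and free space side by side from information_schema; a sketch, with 'your_db' standing in for the real schema name:

```sql
-- Per-table footprint vs. reclaimable space; a high free_MB relative
-- to data_MB suggests the table is worth rebuilding.
SELECT table_name,
       ROUND(data_length  / 1024 / 1024, 2) AS data_MB,
       ROUND(index_length / 1024 / 1024, 2) AS index_MB,
       ROUND(data_free    / 1024 / 1024, 2) AS free_MB
FROM information_schema.tables
WHERE table_schema = 'your_db'   -- placeholder schema name
ORDER BY data_free DESC;
```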
The end. Corrections and suggestions are welcome, thanks.
Reference: http://pengbotao.cn/mysql-suipian-youhua.html