An In-Depth Look at the MySQL-MMM Architecture



 

Hostname          IP              VIP

mysql_monitor     172.18.3.183

mysql_master01    172.18.3.185    172.18.3.189 (write)

mysql_master02    172.18.3.186    172.18.3.190 (read)

mysql_slave01     172.18.3.187    172.18.3.191 (read)

 

 

(1) On every host, add the following to /etc/hosts:

 

172.18.3.185    mysql_master01

172.18.3.186    mysql_master02

172.18.3.187    mysql_slave01

172.18.3.183    mysql_monitor

 

(2) Install MySQL on the three database hosts and set up replication (185 and 186 as master-master, 187 as a slave of 185).

Edit /etc/my.cnf (vi /etc/my.cnf) and change two settings:

log-bin=mysql-bin     # uncomment to enable binary logging
server-id = 2         # must be unique on each host

Make 186 a slave of 185:

Log in to MySQL on 172.18.3.185 and create the replication user:
mysql> GRANT REPLICATION SLAVE ON *.* TO 'epel'@'%' IDENTIFIED BY '123456';
mysql> FLUSH PRIVILEGES;
mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000003 |      374 |              |                  |
+------------------+----------+--------------+------------------+
mysql-bin.000003 is the binlog file replication will start from, and 374 is the starting position.
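The file and position have to be carried over into the CHANGE MASTER statement on the slave. A small sketch of how that hand-off could be scripted; the sample line is a stand-in for the whitespace-separated output of `mysql -N -e 'SHOW MASTER STATUS'`, and the variable names are illustrative:

```shell
# Stand-in for: mysql -N -e 'SHOW MASTER STATUS'
# (columns: File  Position  Binlog_Do_DB  Binlog_Ignore_DB)
sample="mysql-bin.000003  374"

# Pick out the binlog file name and the position.
file=$(echo "$sample" | awk '{print $1}')
pos=$(echo "$sample" | awk '{print $2}')

# Emit the statement to run on the slave.
echo "CHANGE MASTER TO master_log_file='$file', master_log_pos=$pos;"
```

Running this prints the exact CHANGE MASTER statement for the snapshot taken above.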


Log in to 172.18.3.186 and point it at 172.18.3.185 as its master (the binlog file name is case-sensitive and must match the SHOW MASTER STATUS output exactly; the user and password are the ones granted above):
mysql> CHANGE MASTER TO master_host='172.18.3.185', master_user='epel', master_password='123456', master_log_file='mysql-bin.000003', master_log_pos=374;
mysql> START SLAVE;

mysql> SHOW SLAVE STATUS\G
Check the replication state: both Slave_IO_Running and Slave_SQL_Running must show Yes.
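Checking those two fields by hand gets tedious; a hypothetical health-check sketch that parses the status output with awk (the here-variable stands in for a live `mysql -e 'SHOW SLAVE STATUS\G'` call):

```shell
# Stand-in for: mysql -e 'SHOW SLAVE STATUS\G'
status="Slave_IO_Running: Yes
Slave_SQL_Running: Yes"

# Extract the value after the colon for each thread.
io=$(echo "$status"  | awk -F': *' '/Slave_IO_Running/  {print $2}')
sql=$(echo "$status" | awk -F': *' '/Slave_SQL_Running/ {print $2}')

# Replication is healthy only when both threads run.
if [ "$io" = "Yes" ] && [ "$sql" = "Yes" ]; then
    result="replication OK"
else
    result="replication BROKEN"
fi
echo "$result"
```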

For a simple one-master, one-slave setup, this is all that is needed. To make the two servers masters of each other, repeat the same steps in the other direction on 172.18.3.185.

If the master already holds data, lock it against writes first, so the recorded binlog position stays consistent while the data is copied out:

mysql> FLUSH TABLES WITH READ LOCK;

Record SHOW MASTER STATUS, copy the data to the slave (for example with mysqldump), then release the lock with UNLOCK TABLES.

With that, the master-master pair is fully configured.

(3) Configure my.cnf

On 172.18.3.185:

server-id                = 1
log_slave_updates        = 1
auto-increment-increment = 2
auto-increment-offset    = 1

On 172.18.3.186:

server-id                = 2
log_slave_updates        = 1
auto-increment-increment = 2
auto-increment-offset    = 2

On 172.18.3.187:

server-id         = 3
log_slave_updates = 1
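The point of the two auto-increment settings is to keep the key spaces of the two masters disjoint: with an increment of 2, master01 (offset 1) hands out odd IDs and master02 (offset 2) even ones, so concurrent inserts on both masters never collide. A quick arithmetic sketch of the resulting sequences:

```shell
# With auto-increment-increment=2 and offsets 1 and 2, each master's
# n-th AUTO_INCREMENT value is offset + n*increment.
increment=2
for n in 0 1 2; do
    echo "master01: $((1 + n * increment))  master02: $((2 + n * increment))"
done
# master01 generates 1, 3, 5, ... and master02 generates 2, 4, 6, ...
```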

 

(4) Install mysql-mmm and its dependencies

 

Run the following script, install.sh:

#!/bin/bash
yum install -y gcc          # compiler, needed to build Perl modules
yum install -y perl*        # Perl and its dependency packages

# install the cpanm client
wget http://xrl.us/cpanm --no-check-certificate
mv cpanm /usr/bin
chmod 755 /usr/bin/cpanm

# Perl modules required by mysql-mmm, one bare name per line so each
# can be passed straight to cpanm
cat > /root/list << EOF
Algorithm::Diff
Class::Singleton
DBI
DBD::mysql
File::Basename
File::stat
File::Temp
Log::Dispatch
Log::Log4perl
Mail::Send
Net::ARP
Net::Ping
Proc::Daemon
Thread::Queue
Time::HiRes
EOF

for package in `cat /root/list`
do
    cpanm $package
done

# install mysql-mmm itself
wget http://mysql-mmm.org/_media/:mmm2:mysql-mmm-2.2.1.tar.gz
mv :mmm2:mysql-mmm-2.2.1.tar.gz mysql-mmm-2.2.1.tar.gz
tar xf mysql-mmm-2.2.1.tar.gz
cd mysql-mmm-2.2.1
make install

 

This completes the basic MMM installation. Next come the concrete configuration files: mmm_common.conf and mmm_agent.conf are used on the agent side, and mmm_mon.conf on the monitor side.

 

(5) Configure the MMM files

 

[root@mysql_master01 ~]# cat /etc/mysql-mmm/mmm_common.conf
active_master_role      writer

<host default>
    cluster_interface       br0

    pid_path                /var/run/mmm_agentd.pid
    bin_path                /usr/lib/mysql-mmm/

    replication_user        epel
    replication_password    123456

    agent_user              mmm_agent
    agent_password          mmm_agent
</host>

<host mysql_master01>
    ip          172.18.3.185
    mode        master
    peer        mysql_master02
</host>

<host mysql_master02>
    ip          172.18.3.186
    mode        master
    peer        mysql_master01
</host>

<host mysql_slave01>
    ip          172.18.3.187
    mode        slave
</host>

<role writer>
    hosts       mysql_master01,mysql_master02
    ips         172.18.3.189
    mode        exclusive
</role>

<role reader>
    hosts       mysql_master02,mysql_slave01
    ips         172.18.3.190,172.18.3.191
    mode        balanced
</role>
[root@mysql_master01 ~]#

 

Here replication_user is the account used to check replication and agent_user is the account the agent runs as. mode marks each host as a master, a standby master, or a slave. In the <role writer> section, mode exclusive means the role is held by exactly one host at a time; hosts lists the real host names of the current master and the standby master, and ips is the virtual IP exposed for writes. In the <role reader> section, hosts lists the real hosts that serve reads and ips their virtual IPs.

 

Since the other hosts also need the agent configuration, copy mmm_common.conf from 185 into /etc/mysql-mmm/ on 186, 187, and the monitor (183):

scp /etc/mysql-mmm/mmm_common.conf root@172.18.3.186:/etc/mysql-mmm/

scp /etc/mysql-mmm/mmm_common.conf root@172.18.3.187:/etc/mysql-mmm/

scp /etc/mysql-mmm/mmm_common.conf root@172.18.3.183:/etc/mysql-mmm/
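The three copies can also be done in one loop. A sketch; the echo makes it a dry run that prints each command, so it is safe to try before copying for real:

```shell
conf=/etc/mysql-mmm/mmm_common.conf
for host in 172.18.3.186 172.18.3.187 172.18.3.183; do
    # drop the leading echo to actually copy the file
    echo scp "$conf" "root@$host:/etc/mysql-mmm/"
done
```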

On 172.18.3.185:

[root@mysql_master01 ~]# cat /etc/mysql-mmm/mmm_agent.conf

include mmm_common.conf

this mysql_master01

 

On 172.18.3.186:

[root@mysql_master02 tmp]# cat /etc/mysql-mmm/mmm_agent.conf

include mmm_common.conf

this mysql_master02

 

On 172.18.3.187:

[root@mysql_slave01 tmp]# cat /etc/mysql-mmm/mmm_agent.conf

include mmm_common.conf

this mysql_slave01

 

On 172.18.3.183 (the monitor):

[root@mysql_monitor mysql-mmm]# cat /etc/mysql-mmm/mmm_mon.conf
include mmm_common.conf

<monitor>
    ip                  127.0.0.1
    pid_path            /var/run/mmm_mond.pid
    bin_path            /usr/lib/mysql-mmm/
    status_path         /var/lib/misc/mmm_mond.status
    ping_ips            172.18.3.185,172.18.3.186,172.18.3.187
    auto_set_online     10
</monitor>

<host default>
    monitor_user        mmm_monitor
    monitor_password    mmm_monitor
</host>

debug 0
[root@mysql_monitor mysql-mmm]#

 

 

 

(6) Create the monitoring users. Three accounts are needed in total:

 

 

User          Description                                                     Privileges

mmm_monitor   used by the MMM monitor to check the state of all MySQL hosts   REPLICATION CLIENT

mmm_agent     used by the MMM agent, mainly to toggle read_only on a master   SUPER, REPLICATION CLIENT, PROCESS

epel          used for replication                                            REPLICATION SLAVE

 

These grants are needed on all three database servers, but since master-master and master-slave replication are already working, running them on one server is enough: they replicate to the others. The replication account already exists, so only the two MMM accounts remain to be granted.


 

mysql> GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'%' IDENTIFIED BY 'mmm_agent';

Query OK, 0 rows affected (0.08 sec)

 

mysql> GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'%' IDENTIFIED BY 'mmm_monitor';

Query OK, 0 rows affected (0.00 sec)

 

mysql> flush privileges;

Query OK, 0 rows affected (0.03 sec)

 

mysql>

 

 

This completes the installation.

Start the agents by running /etc/init.d/mysql-mmm-agent start on 185, 186, and 187, then start the monitor with /etc/init.d/mysql-mmm-monitor start on 183.

If everything is working, the monitoring data can be inspected on the monitor:
 
Check the overall status:
[root@mysql_monitor monitor]# mmm_control checks all
mysql_slave01   ping         [last change: 2016/10/16 20:31:54]  OK
mysql_slave01   mysql        [last change: 2016/10/17 16:01:01]  OK
mysql_slave01   rep_threads  [last change: 2016/10/17 15:14:49]  OK
mysql_slave01   rep_backlog  [last change: 2016/10/16 20:31:54]  OK: Backlog is null
mysql_master02  ping         [last change: 2016/10/16 20:31:54]  OK
mysql_master02  mysql        [last change: 2016/10/17 16:34:33]  OK
mysql_master02  rep_threads  [last change: 2016/10/17 15:13:25]  OK
mysql_master02  rep_backlog  [last change: 2016/10/16 20:31:54]  OK: Backlog is null
mysql_master01  ping         [last change: 2016/10/17 16:26:56]  OK
mysql_master01  mysql        [last change: 2016/10/17 16:28:20]  OK
mysql_master01  rep_threads  [last change: 2016/10/17 15:16:34]  OK
mysql_master01  rep_backlog  [last change: 2016/10/16 20:31:54]  OK: Backlog is null
 
Show the server states:
[root@mysql_monitor monitor]# mmm_control show
  mysql_master01(172.18.3.185) master/ONLINE. Roles: writer(172.18.3.189)
  mysql_master02(172.18.3.186) master/ONLINE. Roles: reader(172.18.3.190)
  mysql_slave01(172.18.3.187) slave/ONLINE. Roles: reader(172.18.3.191)
 
Follow the detailed log:
[root@mysql_monitor monitor]# tail -f /var/log/mysql-mmm/mmm_mond.log
 
 
 
Expected result:

mysql_master01(172.18.3.185) master/ONLINE. Roles: writer(172.18.3.189)
mysql_master02(172.18.3.186) master/ONLINE. Roles: reader(172.18.3.190)
mysql_slave01(172.18.3.187) slave/ONLINE. Roles: reader(172.18.3.191)

This should be the default state once configuration is complete. If a host shows up like this instead:
db2(192.168.0.32) master/AWAITING_RECOVERY. Roles:
bring it online with mmm_control set_online db2.
 
When the master01 (185) database goes down, the monitor log shows the failover:
[root@mysql_monitor monitor]# tail -7f /var/log/mysql-mmm/mmm_mond.log 
2016/10/17 17:06:18  WARN Check 'rep_backlog' on 'mysql_master01' is in unknown state! Message: UNKNOWN: Connect error (host = 172.18.3.185:3306, user = mmm_monitor)! Can't connect to MySQL server on '172.18.3.185' (111)
2016/10/17 17:06:20  WARN Check 'rep_threads' on 'mysql_master01' is in unknown state! Message: UNKNOWN: Connect error (host = 172.18.3.185:3306, user = mmm_monitor)! Can't connect to MySQL server on '172.18.3.185' (111)
2016/10/17 17:06:30 ERROR Check 'mysql' on 'mysql_master01' has failed for 10 seconds! Message: ERROR: Connect error (host = 172.18.3.185:3306, user = mmm_monitor)! Can't connect to MySQL server on '172.18.3.185' (111)
2016/10/17 17:06:31 FATAL State of host 'mysql_master01' changed from ONLINE to HARD_OFFLINE (ping: OK, mysql: not OK)
2016/10/17 17:06:31  INFO Removing all roles from host 'mysql_master01':
2016/10/17 17:06:31  INFO     Removed role 'writer(172.18.3.189)' from host 'mysql_master01'
2016/10/17 17:06:31  INFO Orphaned role 'writer(172.18.3.189)' has been assigned to 'mysql_master02'
 
[root@mysql_monitor monitor]# mmm_control show
  mysql_master01(172.18.3.185) master/HARD_OFFLINE. Roles: 
  mysql_master02(172.18.3.186) master/ONLINE. Roles: reader(172.18.3.190), writer(172.18.3.189)
  mysql_slave01(172.18.3.187) slave/ONLINE. Roles: reader(172.18.3.191)
 
At this point the write VIP has moved to master02, and slave01 has been repointed to replicate from master02. After master01 is repaired and brought back up, the write VIP does not move back to it, and slave01 keeps replicating from master02, because the VIP stays on master02. master01 simply resumes replicating from master02; only if master02 then fails does the write VIP move back to master01 (it can also be moved back by hand from the monitor with mmm_control move_role writer mysql_master01).
 
 