MooseFS High-Availability Deployment


Metadata server (master server) installation

Source packages and dependencies to download:
moosefs-3.0.97.tar.gz
libpcap-1.4.0-1.el6.rft.x86_64.rpm
libpcap-devel-1.4.0-1.el6.rft.x86_64.rpm

yum install -y gcc make rpm-build fuse-devel zlib-devel
yum install -y libpcap-1.4.0-1.el6.rft.x86_64.rpm libpcap-devel-1.4.0-1.el6.rft.x86_64.rpm
rpmbuild -tb moosefs-3.0.97.tar.gz   # build RPM packages from the source tarball
cd /root/rpmbuild/RPMS/x86_64        # directory where the built RPMs are placed
rpm -ivh moosefs-cgi-3.0.97-1.x86_64.rpm moosefs-cgiserv-3.0.97-1.x86_64.rpm moosefs-cli-3.0.97-1.x86_64.rpm moosefs-master-3.0.97-1.x86_64.rpm
vim /etc/hosts    # edit the /etc/hosts name-resolution file
172.25.21.1 mfsmaster server1
mfsmaster start   # start the master server
mfscgiserv        # start the CGI monitoring service
netstat -anlpt    # ports 9419, 9420, 9421 and 9425 are now listening
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
tcp        0      0 0.0.0.0:9419                0.0.0.0:*                   LISTEN      1186/mfsmaster
tcp        0      0 0.0.0.0:9420                0.0.0.0:*                   LISTEN      1186/mfsmaster
tcp        0      0 0.0.0.0:9421                0.0.0.0:*                   LISTEN      1186/mfsmaster
tcp        0      0 0.0.0.0:9425                0.0.0.0:*                   LISTEN      1189/python
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      921/sshd
tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN      999/master
tcp        0      0 172.25.21.1:22              172.25.21.250:33250         ESTABLISHED 1051/sshd
tcp        0      0 :::22                       :::*                        LISTEN      921/sshd
tcp        0      0 ::1:25                      :::*                        LISTEN      999/master
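As a quick sanity check, the master's listeners can be picked out of a captured netstat listing. A minimal sketch (the helper name and the inline sample are illustrative, not part of MooseFS):

```shell
mfsmaster_ports() {
    # reads netstat output on stdin; prints the local port of every
    # LISTEN socket owned by mfsmaster
    awk '/LISTEN/ && /mfsmaster/ { split($4, a, ":"); print a[2] }'
}

sample='tcp 0 0 0.0.0.0:9419 0.0.0.0:* LISTEN 1186/mfsmaster
tcp 0 0 0.0.0.0:9420 0.0.0.0:* LISTEN 1186/mfsmaster
tcp 0 0 0.0.0.0:9421 0.0.0.0:* LISTEN 1186/mfsmaster
tcp 0 0 0.0.0.0:9425 0.0.0.0:* LISTEN 1189/python'

printf '%s\n' "$sample" | mfsmaster_ports    # prints 9419, 9420 and 9421, one per line
```

Port 9425 belongs to the Python CGI monitor, not mfsmaster itself, so it is deliberately excluded here.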

Open http://172.25.21.1:9425 in a browser to view the master's status.

Likewise, server2's port 9425 can be accessed.

Storage server (chunk server) installation

yum install -y moosefs-chunkserver-3.0.97-1.x86_64.rpm
vim /etc/hosts    # edit the /etc/hosts name-resolution file
172.25.21.1 mfsmaster server1
mkdir /mnt/chunk1
chown mfs.mfs /mnt/chunk1/
vim /etc/mfs/mfshdd.cfg
/mnt/chunk1

Each chunk server uses its own storage directory, so adjust the path accordingly on each host.

mfschunkserver start

Now browsing to http://172.25.21.1:9425/ shows information about the MFS system.

Client installation

yum install -y moosefs-client-3.0.97-1.x86_64.rpm
mkdir /mnt/mfs   # default mount point
mfsmount /mnt/mfs/

Check the mount:

df -h

Each host has 20 GB of disk; df shows that the storage of both chunk servers has been mounted.

MFS high-availability deployment

Install Keepalived

yum install -y openssl-devel popt-devel libnl-devel libnfnetlink-devel
tar zxf keepalived-1.3.5.tar.gz
cd keepalived-1.3.5
./configure --prefix=/usr/local/keepalived --with-init=SYSV
make
make install
cp keepalived/etc/init.d/keepalived /etc/init.d/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
cp /root/keepalived-1.3.5/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
chmod +x /etc/init.d/keepalived      # make the init script executable
chkconfig keepalived on              # enable start on boot

Keepalived service control commands

service keepalived start     # start
service keepalived stop      # stop
service keepalived restart   # restart
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id MFS_HA_MASTER
}
vrrp_script chk_mfs {
    script "/usr/local/mfs/keepalived_check_mfsmaster.sh"
    interval 2
    weight 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 21
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_mfs
    }
    virtual_ipaddress {
        172.25.21.100
    }
    notify_master "/etc/keepalived/clean_arp.sh 172.25.21.100"
}
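Which node holds the VIP follows from VRRP priorities: this node uses priority 100 versus 50 on the backup, and chk_mfs adds its weight of 2 while the check passes. A sketch of that arithmetic (the function name is made up for illustration; the semantics follow keepalived's positive-weight rule):

```shell
effective_priority() {
    # $1: base priority from keepalived.conf; $2: 1 if chk_mfs succeeds, else 0
    # with a positive weight (2), the weight is added only while the tracked
    # script succeeds; on failure the priority falls back to the base value
    local base=$1 check_ok=$2 weight=2
    if [ "$check_ok" -eq 1 ]; then
        echo $((base + weight))
    else
        echo "$base"
    fi
}

effective_priority 100 1   # healthy MASTER: 102
effective_priority 50 1    # healthy BACKUP: 52, so the MASTER keeps the VIP
effective_priority 100 0   # failed check: 100, still above 52
```

Note the last case: a failed check alone never drops the MASTER below the backup, which is why the check script below kills keepalived outright to force the VIP over.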

Write the monitoring script

vim /usr/local/mfs/keepalived_check_mfsmaster.sh
#!/bin/bash
A=`ps -C mfsmaster --no-header | wc -l`
if [ $A -eq 0 ];then
    /etc/init.d/mfsmaster start
    sleep 3
    if [ `ps -C mfsmaster --no-header | wc -l` -eq 0 ];then
        /usr/bin/killall -9 mfscgiserv
        /usr/bin/killall -9 keepalived
    fi
fi
chmod 755 /usr/local/mfs/keepalived_check_mfsmaster.sh
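The script's decision logic can be factored into a testable form by injecting the mfsmaster process count instead of calling ps (the function and label names here are illustrative):

```shell
check_action() {
    # $1: number of running mfsmaster processes
    #     (what `ps -C mfsmaster --no-header | wc -l` would report)
    # mirrors the check script: attempt a restart when none is running; if the
    # restart also fails, the real script kills mfscgiserv and keepalived so
    # that the VIP fails over to the backup
    if [ "$1" -eq 0 ]; then
        echo "restart-mfsmaster"
    else
        echo "ok"
    fi
}

check_action 0   # prints restart-mfsmaster
check_action 1   # prints ok
```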

Script that refreshes the VIP's ARP record on the gateway

vim /etc/keepalived/clean_arp.sh
#!/bin/sh
VIP=$1
GATEWAY=172.25.21.250
/sbin/arping -I eth0 -c 5 -s $VIP $GATEWAY &>/dev/null
chmod 755 /etc/keepalived/clean_arp.sh
/etc/init.d/keepalived start    # start keepalived

Check the VIP:

ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:43:ff:9d brd ff:ff:ff:ff:ff:ff
    inet 172.25.21.1/24 brd 172.25.21.255 scope global eth0
    inet 172.25.21.100/32 scope global eth0
    inet6 fe80::5054:ff:fe43:ff9d/64 scope link
       valid_lft forever preferred_lft forever

Configuration on the Keepalived_BACKUP node (the MFS standby master)

Installation is the same as above; only the configuration file differs.

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id MFS_HA_BACKUP
}
vrrp_script chk_mfs {
    script "/usr/local/mfs/keepalived_check_mfsmaster.sh"
    interval 2
    weight 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 21
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_mfs
    }
    virtual_ipaddress {
        172.25.21.100
    }
    notify_master "/etc/keepalived/clean_arp.sh 172.25.21.100"
}

Write the monitoring script

vim /usr/local/mfs/keepalived_check_mfsmaster.sh    # must match the script path in keepalived.conf
#!/bin/bash
A=`ps -C mfsmaster --no-header | wc -l`
if [ $A -eq 0 ];then
    /etc/init.d/mfsmaster start
    sleep 3
    if [ `ps -C mfsmaster --no-header | wc -l` -eq 0 ];then
        /usr/bin/killall -9 mfscgiserv
        /usr/bin/killall -9 keepalived
    fi
fi
chmod 755 /usr/local/mfs/keepalived_check_mfsmaster.sh

Script that refreshes the VIP's ARP record on the gateway

vim /etc/keepalived/clean_arp.sh
#!/bin/sh
VIP=$1
GATEWAY=172.25.21.250
/sbin/arping -I eth0 -c 5 -s $VIP $GATEWAY &>/dev/null
chmod 755 /etc/keepalived/clean_arp.sh
/etc/init.d/keepalived start    # start keepalived
ps -ef | grep keepalived
root     14304     1  0 23:23 ?        00:00:00 keepalived -D
root     14306 14304  0 23:23 ?        00:00:00 keepalived -D
root     14307 14304  0 23:23 ?        00:00:00 keepalived -D
root     14312  1363  0 23:23 pts/0    00:00:00 grep keepalived

Chunk server configuration

Set the MASTER_HOST parameter in mfschunkserver.cfg to the VIP, 172.25.21.100:

vim /etc/mfs/mfschunkserver.cfg
MASTER_HOST = 172.25.21.100
mfschunkserver restart
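To confirm the change took effect, the uncommented MASTER_HOST can be read back out of the config file. A sketch against an inline sample (the helper name is illustrative):

```shell
master_host() {
    # reads an mfschunkserver.cfg on stdin and prints the value of the
    # uncommented MASTER_HOST line, with surrounding whitespace stripped
    awk -F'=' '/^[[:space:]]*MASTER_HOST[[:space:]]*=/ { gsub(/[[:space:]]/, "", $2); print $2 }'
}

cfg='# MASTER_HOST = mfsmaster
MASTER_HOST = 172.25.21.100
MASTER_PORT = 9420'

printf '%s\n' "$cfg" | master_host    # prints 172.25.21.100
```

On a real chunk server, pipe the actual file in: `master_host < /etc/mfs/mfschunkserver.cfg`.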

Client configuration

mkdir /mnt/mfs
mfsmount /mnt/mfs -H 172.25.21.100

这里写图片描述

Test that data reads and writes work on the client after mounting the MFS file system

[root@server5 mfs]# cd /mnt/mfs
[root@server5 mfs]# cp /etc/passwd /mnt/mfs
[root@server5 mfs]# cat passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
uucp:x:10:14:uucp:/var/spool/uucp:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
gopher:x:13:30:gopher:/var/gopher:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:99:99:Nobody:/:/sbin/nologin
vcsa:x:69:69:virtual console memory owner:/dev:/sbin/nologin
saslauth:x:499:76:"Saslauthd user":/var/empty/saslauth:/sbin/nologin
postfix:x:89:89::/var/spool/postfix:/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
[root@server5 mfs]# rm -rf passwd

This shows that data sharing on the mounted MFS file system works.
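The manual check above can be wrapped in a small script. A sketch (here it runs against a temporary directory as a stand-in; on a real client point it at /mnt/mfs):

```shell
smoke_test() {
    # $1: mount point to exercise; writes a probe file, reads it back,
    # then removes it, echoing a verdict
    local mnt="$1" f
    f="$mnt/.mfs_smoke_$$"
    echo "hello mfs" > "$f" || { echo "write failed"; return 1; }
    [ "$(cat "$f")" = "hello mfs" ] || { echo "read mismatch"; return 1; }
    rm -f "$f"
    echo "read/write OK"
}

smoke_test "$(mktemp -d)"    # prints read/write OK
```

Usage on the client would be `smoke_test /mnt/mfs`.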

Data synchronization script for after failover

With the configuration above, when the Keepalived_MASTER machine fails (its keepalived service stops), the VIP moves to Keepalived_BACKUP; when Keepalived_MASTER recovers (keepalived starts again), it preempts the VIP back. But this only moves the VIP; the MFS file system's metadata is not synchronized.
Below, a data synchronization script is written on each of the two machines (set up passwordless SSH between them first).
On the Keepalived_MASTER side:

vim /usr/local/mfs/MFS_DATA_Sync.sh
#!/bin/bash
A=`ip addr | grep 172.25.21.100 | awk -F" " '{print $2}' | cut -d"/" -f1`
if [ "$A" == "172.25.21.100" ];then
    /etc/init.d/mfsmaster stop
    /bin/rm -f /usr/local/mfs/var/mfs/*
    # pull the metadata from the peer (172.25.21.2)
    /usr/bin/rsync -e "ssh -p22" -avpgolr 172.25.21.2:/usr/local/mfs/var/mfs/* /usr/local/mfs/var/mfs/
    /usr/local/mfs/sbin/mfsmetarestore -m
    /etc/init.d/mfsmaster -a
    sleep 3
    echo "this server has become the master of MFS"
else
    echo "this server is still MFS's slave"
fi

On the Keepalived_BACKUP side:

vim /usr/local/mfs/MFS_DATA_Sync.sh
#!/bin/bash
A=`ip addr | grep 172.25.21.100 | awk -F" " '{print $2}' | cut -d"/" -f1`
if [ "$A" == "172.25.21.100" ];then
    /etc/init.d/mfsmaster stop
    /bin/rm -f /usr/local/mfs/var/mfs/*
    # on the backup node, pull the metadata from the peer (172.25.21.1)
    /usr/bin/rsync -e "ssh -p22" -avpgolr 172.25.21.1:/usr/local/mfs/var/mfs/* /usr/local/mfs/var/mfs/
    /usr/local/mfs/sbin/mfsmetarestore -m
    /etc/init.d/mfsmaster -a
    sleep 3
    echo "this server has become the master of MFS"
else
    echo "this server is still MFS's slave"
fi

When the VIP moves to a node, running this synchronization script on it pulls the peer's data over.
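The VIP test at the top of the sync script can be isolated and exercised against canned `ip addr` output (the helper name and sample strings are illustrative):

```shell
VIP=172.25.21.100
holds_vip() {
    # reads `ip addr` output on stdin; succeeds when the VIP is configured
    # locally; matching the fixed string "inet VIP/" avoids false hits on
    # longer addresses that merely start with the same digits
    grep -qF "inet ${VIP}/"
}

with_vip='    inet 172.25.21.1/24 brd 172.25.21.255 scope global eth0
    inet 172.25.21.100/32 scope global eth0'
without_vip='    inet 172.25.21.2/24 brd 172.25.21.255 scope global eth0'

printf '%s\n' "$with_vip"    | holds_vip && echo "holds the VIP"
printf '%s\n' "$without_vip" | holds_vip || echo "VIP not here"
```

On a real node the check is simply `ip addr | holds_vip`.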

Testing

On the client:

mfsmount /mnt/mfs -H 172.25.21.100
cd /mnt/mfs
cp /etc/passwd .

On the Keepalived_MASTER side:

mfsmaster stop
ps -ef | grep mfs
root      1189     1  0 Oct21 ?        00:00:00 /usr/bin/python /usr/sbin/mfscgiserv
root     15643  1189  0 Oct21 ?        00:00:00 [mfs.cgi] <defunct>
root     15644  1189  0 Oct21 ?        00:00:00 [mfs.cgi] <defunct>
root     15675  1053  0 00:23 pts/0    00:00:00 grep mfs
ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:43:ff:9d brd ff:ff:ff:ff:ff:ff
    inet 172.25.21.1/24 brd 172.25.21.255 scope global eth0
    inet 172.25.21.100/32 scope global eth0
    inet6 fe80::5054:ff:fe43:ff9d/64 scope link
       valid_lft forever preferred_lft forever

The VIP is still on this node.

/etc/init.d/keepalived stop   # stop keepalived
ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:43:ff:9d brd ff:ff:ff:ff:ff:ff
    inet 172.25.21.1/24 brd 172.25.21.255 scope global eth0
    inet6 fe80::5054:ff:fe43:ff9d/64 scope link
       valid_lft forever preferred_lft forever

The VIP is no longer on this node.

On the Keepalived_BACKUP side:

ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:3f:d7:d3 brd ff:ff:ff:ff:ff:ff
    inet 172.25.21.2/24 brd 172.25.21.255 scope global eth0
    inet 172.25.21.100/32 scope global eth0
    inet6 fe80::5054:ff:fe3f:d7d3/64 scope link
       valid_lft forever preferred_lft forever

The VIP has moved over to this node.
Run the synchronization script:
sh -x /usr/local/mfs/MFS_DATA_Sync.sh

On the client:

cd /mnt/mfs
ls    # check whether the earlier passwd file is still there
passwd

This completes the MFS high-availability deployment.
