Implementing a High-Availability Cluster Architecture with DRBD + MFS + Pacemaker + Corosync


I. Software Overview

DRBD is a software-based, shared-nothing storage replication solution that mirrors the contents of block devices between servers.

MooseFS is a fault-tolerant, network-based distributed file system. It spreads data across multiple physical servers while presenting it to the user as a single resource.

MooseFS (MFS) is well suited to storing huge numbers of small files and supports FUSE. (It is widely used by companies in China.)

Pacemaker is a cluster resource manager. It uses the messaging and membership capabilities provided by the cluster infrastructure (corosync) to detect and recover from node- and resource-level failures, maximizing the availability of cluster services.

Importantly, Pacemaker is not a fork of Heartbeat, although many people seem to believe it is. Pacemaker is the continuation of the CRM project (also known as the "v2 resource manager"), which was originally developed for Heartbeat but has since become an independent project.

Corosync is part of the cluster management suite; the way messages are passed, the protocols used, and so on are defined through a simple configuration file.

crm (crmsh) is a command-line cluster configuration and management tool. Its goal is to assist as much as possible with configuring and maintaining Pacemaker-based high-availability clusters. Most importantly, crm provides an interactive shell, which makes troubleshooting much easier.

II. Requirements

1. Eliminate the mfs-master single point of failure.

2. Provide distributed storage for things like image servers; it can also be used by other clusters.

3. Provide primary/secondary disaster-recovery replication with DRBD.

4. Provide heartbeat detection.

5. Provide service (resource) monitoring and failover.

6. Build a high-availability cluster.

III. Platform Environment

OS:CentOS Linux release 7.3.1611 (Core)

kernel:3.10.0-514.el7.x86_64


The network plan is shown in the table below (node8's address follows the hosts file, 192.168.40.135):

Name                    hostname   IP
VIP (floating)          -          192.168.40.200
mfs-master1 (drbd)      node4      192.168.40.131
mfs-master2 (drbd)      node5      192.168.40.132
mfs-metalogger server   node6      192.168.40.133
mfs-chunkserver1        node7      192.168.40.134
mfs-chunkserver2        node8      192.168.40.135
client                  node1      192.168.40.128


The network topology is as follows (diagram not reproduced here):

IV. Environment Preparation

1. Edit the hosts file so that all hosts can reach each other by name.

[root@node4 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.40.131 node4
192.168.40.132 node5
192.168.40.133 node6
192.168.40.134 node7
192.168.40.135 node8
2. Synchronize the time

ntpdate cn.pool.ntp.org
3. Set up passwordless SSH trust between all of the MFS servers (five servers in total). The commands are:

ssh-keygen
ssh-copy-id node#    # hostname or IP

V. Installing the Software

1. Install DRBD


On node4 and node5, first carve out a partition on /dev/sdb, but do not format it yet.
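If the disk has no partition yet, one way to create it is with interactive fdisk (a minimal sketch; the keystrokes below assume an empty /dev/sdb and accept the defaults so the whole disk is used):

fdisk /dev/sdb
# n  - create a new partition
# p  - primary
# 1  - partition number 1
#      press Enter twice to accept the default first and last sectors
# w  - write the partition table and exit

fdisk -l should then show the new partition on both nodes: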

[root@node4 ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xa56b82b0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    10485759     5241856   83  Linux

[root@node5 ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x066827f3

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    10485759     5241856   83  Linux
Install the ELRepo repository and then install DRBD:

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum install -y kmod-drbd84 drbd84-utils

Configure the DRBD global configuration file:

[root@node5 mfs]# cat /etc/drbd.d/global_common.conf
# DRBD is the result of over a decade of development by LINBIT.
# In case you need professional services for DRBD or have
# feature requests visit http://www.linbit.com
global {
    usage-count no;
    # Decide what kind of udev symlinks you want for "implicit" volumes
    # (those without explicit volume <vnr> {} block, implied vnr=0):
    # /dev/drbd/by-resource/<resource>/<vnr>   (explicit volumes)
    # /dev/drbd/by-resource/<resource>         (default for implict)
    udev-always-use-vnr; # treat implicit the same as explicit volumes
    # minor-count dialog-refresh disable-ip-verification
    # cmd-timeout-short 5; cmd-timeout-medium 121; cmd-timeout-long 600;
}
common {
    protocol C;
    handlers {
        # These are EXAMPLE handlers only.
        # They may have severe implications,
        # like hard resetting the node under certain circumstances.
        # Be careful when choosing your poison.
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
        # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
        # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        # quorum-lost "/usr/lib/drbd/notify-quorum-lost.sh root";
    }
    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
    }
    options {
        # cpu-mask on-no-data-accessible
        # RECOMMENDED for three or more storage nodes with DRBD 9:
        # quorum majority;
        # on-no-quorum suspend-io | io-error;
    }
    disk {
        on-io-error detach;
        # size on-io-error fencing disk-barrier disk-flushes
        # disk-drain md-flushes resync-rate resync-after al-extents
        # c-plan-ahead c-delay-target c-fill-target c-max-rate
        # c-min-rate disk-timeout
    }
    net {
        # protocol timeout max-epoch-size max-buffers
        # connect-int ping-int sndbuf-size rcvbuf-size ko-count
        # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
        # after-sb-1pri after-sb-2pri always-asbp rr-conflict
        # ping-timeout data-integrity-alg tcp-cork on-congestion
        # congestion-fill congestion-extents csums-alg verify-alg
        # use-rle
    }
    syncer {
        rate 1024M;
    }
}

/etc/drbd.conf                     # main configuration file
/etc/drbd.d/global_common.conf     # global configuration file
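The main /etc/drbd.conf usually just includes the files under /etc/drbd.d/; on a stock install it looks roughly like this (shown for reference, not part of the original walkthrough):

include "drbd.d/global_common.conf";
include "drbd.d/*.res";

Next, define the mfs resource: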
[root@node4 mfs]# cat /etc/drbd.d/mfs.res
resource mfs {
    protocol C;
    meta-disk internal;
    device /dev/drbd1;
    syncer {
        verify-alg sha1;
    }
    net {
        allow-two-primaries;
    }
    on node4 {
        disk /dev/sdb1;
        address 192.168.40.131:7789;
    }
    on node5 {
        disk /dev/sdb1;
        address 192.168.40.132:7789;
    }
}
Copy the configuration files to the peer node:

[root@node4 ~]# scp -rp /etc/drbd.d/* node5:/etc/drbd.d/
global_common.conf                                      100% 2618     2.6KB/s   00:00
mfs.res                                                 100%  248     0.2KB/s   00:00
Initialize on node4. (Note: I ran into an error at this step; if you hit an error here, or anywhere else in the configuration, please see the troubleshooting link at the end of this article.)

[root@node4 ~]# drbdadm create-md mfs
initializing activity log
initializing bitmap (160 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
Check whether the kernel module has been loaded:

[root@node4 ~]# modprobe drbd
[root@node4 ~]# lsmod | grep drbd
drbd                  396875  0
libcrc32c              12644  4 xfs,drbd,nf_nat,nf_conntrack
Bring up the mfs resource:

[root@node4 ~]# drbdadm up mfs
[root@node4 ~]# drbdadm --force primary mfs
[root@node4 ~]# cat /proc/drbd
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by mockbuild@, 2017-09-15 14:23:22
 1: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r----s
    ns:0 nr:0 dw:0 dr:912 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:5241660
On node5, run:
[root@node5 ~]# drbdadm create-md mfs
initializing activity log
initializing bitmap (160 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
[root@node5 ~]# modprobe drbd
[root@node5 ~]# drbdadm up mfs
Then check the data synchronization status (this takes a while):

[root@node5 ~]# cat /proc/drbd
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by mockbuild@, 2017-09-15 14:23:22
 1: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r-----
    ns:0 nr:151552 dw:151552 dr:0 al:8 bm:0 lo:1 pe:3 ua:0 ap:0 ep:1 wo:f oos:5090108
    [>....................] sync'ed:  3.0% (4968/5116)M
    finish: 0:03:54 speed: 21,648 (21,648) want: 36,640 K/sec

Format the device and mount it to verify that it works:

[root@node4 ~]# mkfs.xfs -f /dev/drbd1
meta-data=/dev/drbd1             isize=512    agcount=4, agsize=327604 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1310415, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@node4 ~]# mount /dev/drbd1 /mnt
[root@node4 ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-root   17G  2.1G   15G  13% /
devtmpfs             478M     0  478M   0% /dev
tmpfs                489M     0  489M   0% /dev/shm
tmpfs                489M  6.6M  482M   2% /run
tmpfs                489M     0  489M   0% /sys/fs/cgroup
/dev/sda1           1014M  167M  848M  17% /boot
tmpfs                 98M     0   98M   0% /run/user/0
/dev/drbd1           5.0G   33M  5.0G   1% /mnt
This proves DRBD is working; unmount it:

[root@node4 ~]# umount /mnt

2. Compile and install MFS

This deployment uses MooseFS, which is currently mainstream in China.

The version tested here is moosefs-3.0.96.

Install the dependencies:

yum install zlib-devel gcc -y

Upload moosefs-3.0.96 (i.e. v3.0.96.tar.gz), or download it from GitHub:

wget https://github.com/moosefs/moosefs/archive/v3.0.96.tar.gz

crmsh will also be needed later.

[root@node4 src]# ls
crmsh-2.3.2.tar  v3.0.96.tar.gz
[root@node4 src]# scp v3.0.96.tar.gz node5:/usr/local/src/
v3.0.96.tar.gz                                          100% 1092KB   1.1MB/s   00:00
[root@node4 src]# scp v3.0.96.tar.gz node6:/usr/local/src/
v3.0.96.tar.gz                                          100% 1092KB   1.1MB/s   00:00
[root@node4 src]# scp v3.0.96.tar.gz node7:/usr/local/src/
v3.0.96.tar.gz                                          100% 1092KB   1.1MB/s   00:00
[root@node4 src]# scp v3.0.96.tar.gz node8:/usr/local/src/
v3.0.96.tar.gz                                          100% 1092KB   1.1MB/s   00:00
[root@node4 src]# scp v3.0.96.tar.gz node1:/usr/local/src/
Create the mfs user. Remember that the mfs UID must be identical on every MFS server, otherwise the setup will fail.

[root@node4 src]# useradd -u 1000 mfs
[root@node5 src]# useradd -u 1000 mfs
[root@node6 src]# useradd -u 1000 mfs
[root@node7 src]# useradd -u 1000 mfs
[root@node8 src]# useradd -u 1000 mfs
Mount /dev/drbd1 at /usr/local/mfs (MFS only needs to be installed on one mfs-master):

[root@node4 src]# drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.

 1:mfs/0  Connected Primary/Secondary UpToDate/UpToDate

[root@node4 src]# mkdir /usr/local/mfs
[root@node4 src]# chown -R mfs:mfs /usr/local/mfs
[root@node4 src]# mount /dev/drbd1 /usr/local/mfs
Start compiling moosefs-3.0.96:

[root@node4 src]# tar -xf v3.0.96.tar.gz
[root@node4 src]# cd /usr/local/src/moosefs-3.0.96/
[root@node4 moosefs-3.0.96]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount
Since node4 is the mfs-master, the chunkserver and client components are not needed, hence --disable-mfschunkserver and --disable-mfsmount.
[root@node4 moosefs-3.0.96]# make && make install
Configure the master:

[root@node4 mfs]# pwd
/usr/local/mfs/etc/mfs
[root@node4 mfs]# ls
mfsexports.cfg.sample  mfsmaster.cfg.sample  mfsmetalogger.cfg.sample  mfstopology.cfg.sample
[root@node4 mfs]# cp mfsexports.cfg.sample mfsexports.cfg
[root@node4 mfs]# cp mfsmaster.cfg.sample mfsmaster.cfg
The mfsmaster.cfg file ships with the official defaults and can be used as-is.

The file that does need editing is mfsexports.cfg, to add the mount directory's permissions and password (just append the lines at the end):

vim mfsexports.cfg

*            /        rw,alldirs,mapall=mfs:mfs,password=aizhen
*            .        rw
The metadata file ships as an empty template (metadata.mfs.empty) and has to be activated manually:

[root@node4 mfs]# cp /usr/local/mfs/var/mfs/metadata.mfs.empty /usr/local/mfs/var/mfs/metadata.mfs
Start the master:

[root@node4 mfs]# /usr/local/mfs/sbin/mfsmaster start
open files limit has been set to: 16384
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmaster modules ...
exports file has been loaded
mfstopology configuration file (/usr/local/mfs/etc/mfstopology.cfg) not found - using defaults
loading metadata ...
metadata file has been loaded
no charts data file - initializing empty charts
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly
Now write a systemd unit for mfsmaster so that it can be managed with systemctl.

[root@node4 mfs]# cat /usr/lib/systemd/system/mfsmaster.service
[Unit]
Description=mfs
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/mfs/sbin/mfsmaster start
ExecStop=/usr/local/mfs/sbin/mfsmaster stop
PrivateTmp=true

[Install]
WantedBy=multi-user.target

[root@node4 mfs]# chmod 775 /usr/lib/systemd/system/mfsmaster.service

Test that the mfsmaster.service unit works:

[root@node4 mfs]# systemctl start mfsmaster.service
[root@node4 mfs]# systemctl status mfsmaster.service
● mfsmaster.service - mfs
   Loaded: loaded (/usr/lib/systemd/system/mfsmaster.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-10-28 13:04:40 EDT; 7s ago
  Process: 7555 ExecStart=/usr/local/mfs/sbin/mfsmaster start (code=exited, status=0/SUCCESS)
 Main PID: 7557 (mfsmaster)
   CGroup: /system.slice/mfsmaster.service
           └─7557 /usr/local/mfs/sbin/mfsmaster start

[root@node4 mfs]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9419            0.0.0.0:*               LISTEN      7557/mfsmaster
tcp        0      0 0.0.0.0:9420            0.0.0.0:*               LISTEN      7557/mfsmaster
tcp        0      0 0.0.0.0:9421            0.0.0.0:*               LISTEN      7557/mfsmaster

Enable mfsmaster to start at boot:

[root@node4 mfs]# systemctl enable mfsmaster
Created symlink from /etc/systemd/system/multi-user.target.wants/mfsmaster.service to /usr/lib/systemd/system/mfsmaster.service.
Copy mfsmaster.service to node5:

[root@node4 mfs]# scp /usr/lib/systemd/system/mfsmaster.service node5:/usr/lib/systemd/system/
mfsmaster.service                                       100%  217     0.2KB/s   00:00
Enable mfsmaster at boot on node5 as well:

[root@node5 ~]# systemctl enable mfsmaster
Created symlink from /etc/systemd/system/multi-user.target.wants/mfsmaster.service to /usr/lib/systemd/system/mfsmaster.service.
Create the required directory on node5:

[root@node5 ~]# mkdir /usr/local/mfs
[root@node5 ~]# chown -R mfs:mfs /usr/local/mfs

Now test whether DRBD can switch between primary and secondary correctly, and whether mfsmaster switches along with it (failing over to node5).

On node4:

[root@node4 mfs]# systemctl stop mfsmaster
[root@node4 mfs]# cd
[root@node4 ~]# umount /usr/local/mfs
[root@node4 ~]# drbdadm secondary mfs
On node5:

[root@node5 ~]# mkdir /usr/local/mfs
[root@node5 ~]# chown -R mfs:mfs /usr/local/mfs
[root@node5 ~]# drbdadm primary mfs
[root@node5 ~]# mount /dev/drbd1 /usr/local/mfs
[root@node5 ~]# systemctl start mfsmaster
[root@node5 ~]# systemctl status mfsmaster
● mfsmaster.service - mfs
   Loaded: loaded (/usr/lib/systemd/system/mfsmaster.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-10-28 13:12:01 EDT; 13s ago
  Process: 1646 ExecStart=/usr/local/mfs/sbin/mfsmaster start (code=exited, status=0/SUCCESS)
 Main PID: 1648 (mfsmaster)
   CGroup: /system.slice/mfsmaster.service
           └─1648 /usr/local/mfs/sbin/mfsmaster start
[root@node5 ~]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9419            0.0.0.0:*               LISTEN      1648/mfsmaster
tcp        0      0 0.0.0.0:9420            0.0.0.0:*               LISTEN      1648/mfsmaster
This shows the manual failover test passed.


Install the metalogger server on node6

A quick note: the Metalogger Server is the backup for the Master Server, so its installation steps are identical to the Master Server's. Ideally, use a machine with the same configuration as the Master Server. That way, if the master fails, we only need to import the backed-up changelogs into the metadata file and the backup server can take over from the failed master directly. Whether to add this server can be decided based on cost.
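The recovery path mentioned above is not shown in the original; roughly, it means copying the metalogger's backup files into the master's data directory and letting mfsmaster rebuild the metadata from the changelogs. A sketch under the assumption that both machines use /usr/local/mfs/var/mfs as the data directory ("newmaster" is a placeholder hostname; verify the exact file names and the -a restore option against the MooseFS documentation for your version):

# on the metalogger (node6): copy the latest backup and changelogs to the new master
scp /usr/local/mfs/var/mfs/metadata_ml.mfs.back newmaster:/usr/local/mfs/var/mfs/
scp /usr/local/mfs/var/mfs/changelog_ml.*.mfs   newmaster:/usr/local/mfs/var/mfs/

# on the new master: rename the files to the names the master expects (assumption),
# then start the master in automatic metadata-recovery mode
cd /usr/local/mfs/var/mfs
mv metadata_ml.mfs.back metadata.mfs.back
for f in changelog_ml.*.mfs; do mv "$f" "${f/_ml/}"; done
/usr/local/mfs/sbin/mfsmaster -a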

Compile and install the metalogger server:

[root@node6 src]# tar -xf v3.0.96.tar.gz
[root@node6 src]# cd moosefs-3.0.96/
[root@node6 moosefs-3.0.96]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount

[root@node6 moosefs-3.0.96]# make && make install

Configure the metalogger server:

[root@node6 moosefs-3.0.96]# cd /usr/local/mfs/etc/mfs/
[root@node6 mfs]# ls
mfsexports.cfg.sample  mfsmaster.cfg.sample  mfsmetalogger.cfg.sample  mfstopology.cfg.sample
[root@node6 mfs]# cp mfsmetalogger.cfg.sample mfsmetalogger.cfg
[root@node6 mfs]# vim mfsmetalogger.cfg
MASTER_HOST = 192.168.40.200
Point this at the VIP, or at a specific mfsmaster first for testing.

Start the metalogger server (once it starts cleanly, write a unit file and enable it at boot):

[root@node6 ~]# /usr/local/mfs/sbin/mfsmetalogger start
open files limit has been set to: 4096
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmetalogger modules ...
mfsmetalogger daemon initialized properly
[root@node6 ~]# mv /usr/lib/systemd/system/mfsmaster.service /usr/lib/systemd/system/mfsmetalog.service

[root@node6 ~]# cat /usr/lib/systemd/system/mfsmetalog.service
[Unit]
Description=mfs
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/mfs/sbin/mfsmetalogger start
ExecStop=/usr/local/mfs/sbin/mfsmetalogger stop
PrivateTmp=true

[Install]
WantedBy=multi-user.target
[root@node6 ~]# /usr/local/mfs/sbin/mfsmetalogger stop
sending SIGTERM to lock owner (pid:2093)
waiting for termination terminated
[root@node6 ~]# systemctl start mfsmetalog
[root@node6 ~]# systemctl status mfsmetalog
● mfsmetalog.service - mfs
   Loaded: loaded (/usr/lib/systemd/system/mfsmetalog.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-10-28 20:26:28 EDT; 12s ago
  Process: 2113 ExecStart=/usr/local/mfs/sbin/mfsmetalogger start (code=exited, status=0/SUCCESS)
 Main PID: 2115 (mfsmetalogger)
   CGroup: /system.slice/mfsmetalog.service
           └─2115 /usr/local/mfs/sbin/mfsmetalogger start
[root@node6 ~]# systemctl enable mfsmetalog
Created symlink from /etc/systemd/system/multi-user.target.wants/mfsmetalog.service to /usr/lib/systemd/system/mfsmetalog.service.
Compile and install the chunk server on node7 and node8.

node7 and node8 are configured identically; node7 is used as the example:

[root@node7 src]# tar -xf v3.0.96.tar.gz
[root@node7 src]# cd moosefs-3.0.96/
[root@node7 moosefs-3.0.96]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfsmount
[root@node7 moosefs-3.0.96]# make && make install 
Configure mfschunkserver.cfg:
[root@node7 moosefs-3.0.96]# cd /usr/local/mfs/etc/mfs/
[root@node7 mfs]# ls
mfschunkserver.cfg.sample  mfshdd.cfg.sample  mfsmetalogger.cfg.sample
[root@node7 mfs]# cp mfschunkserver.cfg.sample mfschunkserver.cfg
[root@node7 mfs]# cp mfshdd.cfg.sample mfshdd.cfg
[root@node7 mfs]# vim mfschunkserver.cfg
MASTER_HOST = 192.168.40.200
Point this at the VIP, or at a specific mfsmaster first for testing.

Configure mfshdd.cfg:

mfshdd.cfg defines which directory on the Chunk Server is exported to the Master Server for it to manage. Although what goes in here is just a directory, that directory is best backed by a dedicated partition (a sketch of doing so follows after the configuration steps below).

[root@node7 mfs]# mkdir /mfsdata
[root@node7 mfs]# chown -R mfs:mfs /mfsdata
[root@node7 mfs]# vim mfshdd.cfg
Simply add the following as the last line:

/mfsdata
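If you want /mfsdata backed by a dedicated partition as suggested above, a rough sketch (assuming a spare /dev/sdc1 on the chunk server, which is not part of the original setup):

mkfs.xfs /dev/sdc1                                          # format the spare partition
echo '/dev/sdc1 /mfsdata xfs defaults 0 0' >> /etc/fstab    # mount it at boot
mount /mfsdata
chown -R mfs:mfs /mfsdata                                   # the chunkserver runs as mfs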
Start the chunkserver (once it starts cleanly, write a unit file and enable it at boot):
[root@node7 mfs]# /usr/local/mfs/sbin/mfschunkserver start
[root@node7 mfs]# /usr/local/mfs/sbin/mfschunkserver stop
[root@node7 mfs]# cat /usr/lib/systemd/system/mfschunk.service
[Unit]
Description=mfs
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/mfs/sbin/mfschunkserver start
ExecStop=/usr/local/mfs/sbin/mfschunkserver stop
PrivateTmp=true

[Install]
WantedBy=multi-user.target
[root@node7 mfs]# systemctl enable mfschunk
The MFS services are now fully configured.

First stop the services on node4 and node5:

[root@node4 ~]# systemctl stop mfsmaster
[root@node4 ~]# systemctl stop drbd

[root@node5 ~]# systemctl stop mfsmaster
[root@node5 ~]# systemctl stop drbd
3. Install corosync and pacemaker

Resolve the dependencies on node4 and node5:

[root@node4 ~]# yum install -y pacemaker pcs psmisc policycoreutils-python
Cluster lifecycle management tools:

pcs: agent (pcsd)

crmsh: pssh

Start pcsd and enable it at boot:

[root@node4 ~]# systemctl start pcsd
[root@node4 ~]# systemctl enable pcsd
Set the password for the hacluster user:

[root@node4 ~]# echo 000000 | passwd --stdin hacluster
[root@node5 ~]# echo 000000 | passwd --stdin hacluster
Authenticate the pcs cluster hosts (using the default user hacluster and the password just set); this only needs to be run on one node:
[root@node4 ~]# pcs cluster auth node4 node5
Username: hacluster
Password:
node5: Authorized
node4: Authorized
Set up the cluster with the two nodes:

[root@node4 ~]# pcs cluster setup --name mfscluster node4 node5 --force
Destroying cluster on nodes: node4, node5...
node5: Stopping Cluster (pacemaker)...
node4: Stopping Cluster (pacemaker)...
node5: Successfully destroyed cluster
node4: Successfully destroyed cluster

Sending 'pacemaker_remote authkey' to 'node4', 'node5'
node5: successful distribution of the file 'pacemaker_remote authkey'
node4: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
node4: Succeeded
node5: Succeeded

Synchronizing pcsd certificates on nodes node4, node5...
node5: Success
node4: Success
Restarting pcsd on the nodes in order to reload the certificates...
node5: Success
node4: Success
We can see that the corosync.conf configuration file has been generated:

[root@node4 ~]# ll /etc/corosync/
total 16
-rw-r--r-- 1 root root  385 Oct 28 21:37 corosync.conf
-rw-r--r-- 1 root root 2881 Sep  6 12:53 corosync.conf.example
-rw-r--r-- 1 root root  767 Sep  6 12:53 corosync.conf.example.udpu
-rw-r--r-- 1 root root 3278 Sep  6 12:53 corosync.xml.example
drwxr-xr-x 2 root root    6 Sep  6 12:53 uidgid.d
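The original does not show its contents; a pcs-generated corosync.conf for this two-node cluster typically looks roughly like the following (a sketch based on the setup above, not copied from the actual file):

totem {
    version: 2
    secauth: off
    cluster_name: mfscluster
    transport: udpu
}

nodelist {
    node {
        ring0_addr: node4
        nodeid: 1
    }
    node {
        ring0_addr: node5
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}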
Start the cluster:

[root@node4 ~]# pcs cluster start --all
node4: Starting Cluster...
node5: Starting Cluster...
## This is equivalent to starting pacemaker and corosync.

Enable pacemaker and corosync at boot on node4 and node5:

[root@node4 ~]# systemctl enable pacemaker
[root@node4 ~]# systemctl enable corosync
## Since no STONITH device is configured, it has to be disabled:

[root@node4 ~]# pcs property set stonith-enabled=false
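To confirm the cluster configuration is valid after this change, crm_verify can be run (not in the original walkthrough; it prints nothing when the live configuration has no errors):

[root@node4 ~]# crm_verify -L -V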
The cluster can also be managed with crmsh, which we download from GitHub, extract, and install directly; it only needs to be installed on one node.

The main advantage is that crmsh provides an interactive interface, which makes troubleshooting easier.

Build and install crmsh-2.3.2:

[root@node4 src]# tar -xf crmsh-2.3.2.tar
[root@node4 src]# cd crmsh-2.3.2
[root@node4 crmsh-2.3.2]# python setup.py install
Haha, this is exactly the kind of friendly interface I like:

[root@node4 ~]# crm
crm(live)# status
Stack: corosync
Current DC: node4 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Sat Oct 28 21:50:11 2017
Last change: Sat Oct 28 21:45:22 2017 by root via cibadmin on node5

2 nodes configured
0 resources configured

Online: [ node4 node5 ]

No resources

crm(live)#
Now start configuring the resources:

crm(live)configure# primitive mfs_drbd ocf:linbit:drbd params drbd_resource=mfs op monitor role=Master interval=10 timeout=20 op monitor role=Slave interval=20 timeout=20 op start timeout=240 op stop timeout=100
crm(live)configure# verify
crm(live)configure#
crm(live)configure# ms ms_mfs_drbd mfs_drbd meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
crm(live)configure# verify
crm(live)configure#

Once verify reports no problems, commit (if there are problems, the configuration can be fixed directly with edit):

crm(live)configure# commit
Configure the filesystem (mount) resource:

crm(live)configure# primitive mfsstore ocf:heartbeat:Filesystem params device=/dev/drbd1 directory=/usr/local/mfs fstype=xfs op start timeout=60 op stop timeout=60
crm(live)configure# verify
crm(live)configure# colocation ms_mfs_drbd_with_mfsstore inf: mfsstore ms_mfs_drbd
crm(live)configure# order ms_mfs_drbd_before_mystore Mandatory:  ms_mfs_drbd:promote mfsstore:start
crm(live)configure# verify
crm(live)configure# commit
colocation binds resources together (affinity), while order determines which service starts first (very important).

Configure the mfs resource:
crm(live)configure# primitive mfs systemd:mfsmaster op monitor timeout=100 interval=30 op start timeout=100 interval=0 op stop timeout=100 interval=0
crm(live)configure# verify
crm(live)configure# colocation mfs_with_mystore inf: mfs mfsstore
crm(live)configure# order mfsstore_befor_mfs Mandatory: mfsstore mfs
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure#
Configure the VIP resource:

crm(live)configure# primitive vip ocf:heartbeat:IPaddr params ip=192.168.40.200
crm(live)configure# colocation vip_with_msf inf: vip mfs
crm(live)configure# verify
crm(live)configure# commit
Use show to review the configuration:

crm(live)configure# show
node 1: node4
node 2: node5
primitive mfs systemd:mfsmaster \
        op monitor timeout=100 interval=30 \
        op start timeout=100 interval=0 \
        op stop timeout=100 interval=0
primitive mfs_drbd ocf:linbit:drbd \
        params drbd_resource=mfs \
        op monitor role=Master interval=10 timeout=20 \
        op monitor role=Slave interval=20 timeout=20 \
        op start timeout=240 interval=0 \
        op stop timeout=100 interval=0
primitive mfsstore Filesystem \
        params device="/dev/drbd1" directory="/usr/local/mfs" fstype=xfs \
        op start timeout=60 interval=0 \
        op stop timeout=60 interval=0
primitive vip IPaddr \
        params ip=192.168.40.200
ms ms_mfs_drbd mfs_drbd \
        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
colocation mfs_with_mystore inf: mfs mfsstore
order mfsstore_befor_mfs Mandatory: mfsstore mfs
order ms_mfs_drbd_before_mystore Mandatory: ms_mfs_drbd:promote mfsstore:start
colocation ms_mfs_drbd_with_mfsstore inf: mfsstore ms_mfs_drbd
colocation vip_with_msf inf: vip mfs
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=1.1.16-12.el7_4.4-94ff4df \
        cluster-infrastructure=corosync \
        cluster-name=mfscluster \
        stonith-enabled=false
All of the server-side configuration is now complete!

Check that all resources are running normally:

crm(live)# status
Stack: corosync
Current DC: node4 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Sat Oct 28 22:21:47 2017
Last change: Sat Oct 28 22:17:18 2017 by root via cibadmin on node4

2 nodes configured
5 resources configured

Online: [ node4 node5 ]

Full list of resources:

 Master/Slave Set: ms_mfs_drbd [mfs_drbd]
     Masters: [ node5 ]
     Slaves: [ node4 ]
 mfsstore   (ocf::heartbeat:Filesystem):    Started node5
 mfs        (systemd:mfsmaster):            Started node5
 vip        (ocf::heartbeat:IPaddr):        Started node5

The MFS servers are now ready to serve mounts.
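Before moving on to the client, it is worth checking that Pacemaker itself fails the whole stack over. A minimal sketch using crmsh (not part of the original walkthrough; it assumes the resources are currently running on node5 as shown above):

crm node standby node5     # take node5 out of service; DRBD, the filesystem,
                           # mfsmaster and the VIP should all move to node4
crm status                 # confirm all resources are Started on node4
crm node online node5      # bring node5 back as the new standby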


Next, configure the client (i.e. the backend servers that need to mount the distributed storage, such as image servers).

Resolve the dependencies on node1 (this takes a while):

[root@node1 etc]# yum install fuse fuse-devel zlib-devel gcc -y
Compile and install moosefs-3.0.96:
[root@node1 src]# tar -xf v3.0.96.tar.gz
[root@node1 src]# cd moosefs-3.0.96/
[root@node1 moosefs-3.0.96]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfschunkserver --enable-mfsmount
[root@node1 moosefs-3.0.96]# make && make install 
[root@node1 moosefs-3.0.96]# mkdir /mfsdata
[root@node1 moosefs-3.0.96]# chown -R mfs:mfs /mfsdata
Test the mount:

[root@node1 moosefs-3.0.96]# /usr/local/mfs/bin/mfsmount /mfsdata -H 192.168.40.200 -p
MFS Password:
mfsmaster accepted connection with parameters: read-write,restricted_ip,map_all ; root mapped to nginx:nginx ; users mapped to nginx:nginx
[root@node1 moosefs-3.0.96]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-root   17G  9.0G  8.1G  53% /
devtmpfs             478M     0  478M   0% /dev
tmpfs                489M     0  489M   0% /dev/shm
tmpfs                489M  6.6M  482M   2% /run
tmpfs                489M     0  489M   0% /sys/fs/cgroup
/dev/sda1           1014M  138M  877M  14% /boot
tmpfs                 98M     0   98M   0% /run/user/0
192.168.40.200:9421   34G  4.5G   30G  14% /mfsdata
The password is the one set in mfsexports.cfg on the mfsmaster (I set it to aizhen).
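To have the client remount automatically at boot, one common option (an assumption on my part, not from the original; check the mfsmount and mount.fuse documentation for your version) is an /etc/fstab entry that passes the master address and password through to mfsmount:

/usr/local/mfs/bin/mfsmount  /mfsdata  fuse  defaults,mfsmaster=192.168.40.200,mfspassword=aizhen  0 0

Note that a password stored in /etc/fstab is readable by local users, so a root-only mount script may be preferable in practice.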

Test whether data can be written normally:

[root@node1 moosefs-3.0.96]# cd /mfsdata/
[root@node1 mfsdata]# ls
[root@node1 mfsdata]# touch a.txt
[root@node1 mfsdata]# echo 123 >> a.txt
[root@node1 mfsdata]# cat a.txt
123
At this point, all of the configuration is complete.

VI. Notes and Errors

All of the caveats and errors from this test are documented in a separate post of mine, mainly to make my own troubleshooting easier. Please bear with me.

Error link: click to open the link

This article is original content by the author; please credit the source when reposting. Thank you.
