[linux] two-node-ha-centos
Building a two-node Linux-HA lab with virtual machines
Operating system: CentOS 7
Hypervisor: KVM
1. System installation
1.1 Choose a minimal install
Name the two virtual machines node1 and node2
node1 192.168.122.100
node2 192.168.122.101
The shared virtual IP is node.cst 192.168.122.110
1.2 Use the installation ISO as a package repository
Mount the ISO file at /mnt
#/etc/yum.repos.d/CentOS-Media.repo
[c7-media]
name=CentOS-$releasever - Media
baseurl=file:///mnt/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
1.3 Configure the hosts file
Add the following to /etc/hosts:
192.168.122.100 node1
192.168.122.101 node2
192.168.122.110 node.cst
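The three entries above have to land on both nodes; a small sketch that appends them idempotently (it writes to a scratch file so it can be tried anywhere -- point HOSTS at /etc/hosts on a real node):

```shell
# Sketch: append each cluster entry only if it is not already present.
# HOSTS is a scratch file here; use HOSTS=/etc/hosts on the real nodes.
HOSTS=$(mktemp)
for entry in "192.168.122.100 node1" \
             "192.168.122.101 node2" \
             "192.168.122.110 node.cst"; do
    grep -qxF "$entry" "$HOSTS" || echo "$entry" >> "$HOSTS"
done
cat "$HOSTS"
```

Re-running the loop leaves the file unchanged, so it is safe to include in a provisioning script.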
1.4 Set up ssh mutual trust between the nodes
[node1:] ssh-keygen -t rsa -P ''
[node1:] ssh-copy-id -i .ssh/id_rsa.pub root@node2
[node2:] ssh-keygen -t rsa -P ''
[node2:] ssh-copy-id -i .ssh/id_rsa.pub root@node1
2. Software installation and configuration
2.1 Install pacemaker
[all:] yum -y install pcs policycoreutils-python fence-agents-all firewalld
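Many of the steps below are marked [all:] and must be repeated on both nodes. One way to avoid the repetition is a tiny ssh wrapper; this is a sketch, not part of the original procedure -- node names come from section 1, and DRY=1 only prints the commands instead of running them:

```shell
# Sketch: run one command on every cluster node over ssh.
# With DRY=1 the commands are only printed (safe to try anywhere).
NODES="node1 node2"
run_all() {
    for n in $NODES; do
        if [ "${DRY:-0}" = "1" ]; then
            echo "ssh root@$n -- $*"
        else
            ssh "root@$n" "$@"
        fi
    done
}
DRY=1 run_all yum -y install pcs policycoreutils-python fence-agents-all firewalld
```

This relies on the passwordless ssh configured in section 1.4.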
2.2 Configure the firewall
[all:] systemctl start firewalld.service
[all:] systemctl enable firewalld.service
[all:] firewall-cmd --permanent --add-service=high-availability
[all:] firewall-cmd --add-service=high-availability
[all:] firewall-cmd --reload
2.3 Configure the cluster
[all:] systemctl start pcsd
[all:] systemctl enable pcsd
[all:] passwd hacluster -- set the password for the hacluster account
[nodeX:] pcs cluster auth node1 node2 -- authenticate the cluster nodes; the user name is hacluster
[nodeX:] pcs cluster setup --name node.cst node1 node2
[nodeX:] pcs cluster enable --all
[nodeX:] pcs cluster status
2.4 Configure fence_xvm as the fence agent -- on the KVM host
[host:] yum -y install fence-virt*
[host:] mkdir -p /etc/cluster
[host:] dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4k count=1
[host:] scp /etc/cluster/fence_xvm.key root@node1:/etc/cluster/fence_xvm.key
[host:] scp /etc/cluster/fence_xvm.key root@node2:/etc/cluster/fence_xvm.key
[host:] fence_virtd -c
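The key is just 4 KiB of random bytes shared by host and guests. A sketch that generates it and verifies the size before distribution (a scratch path here; the real path is /etc/cluster/fence_xvm.key, and /etc/cluster must also exist on the guests before the scp):

```shell
# Sketch: create the 4 KiB pre-shared fence key and check its size.
# keydir is a scratch directory; on the host use /etc/cluster instead.
keydir=$(mktemp -d)
dd if=/dev/urandom of="$keydir/fence_xvm.key" bs=4k count=1 2>/dev/null
stat -c %s "$keydir/fence_xvm.key"
```

If the key sizes or contents differ between host and guests, fence_xvm requests are silently ignored, so checking the size on every machine is a cheap sanity test.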
#/etc/fence_virt.conf -- interface is the device that talks to the KVM guests
backends {
    libvirt {
        uri = "qemu:///system";
    }
}
listeners {
    multicast {
        interface = "virbr0";
        port = "1229";
        family = "ipv4";
        address = "225.0.0.12";
        key_file = "/etc/cluster/fence_xvm.key";
    }
}
fence_virtd {
    module_path = "/usr/lib64/fence-virt";
    backend = "libvirt";
    listener = "multicast";
}
2.5 Start the fence_xvm service on the host
[host:] systemctl enable fence_virtd
[host:] systemctl start fence_virtd
[host:] systemctl status fence_virtd
2.6 Configure fence_xvm on the guests
[all:] yum -y install fence-virt
[nodeX:] fence_xvm -o list #run on a guest to list all VM names; each name must have an entry in /etc/hosts
[node1:] fence_xvm -a 225.0.0.12 -k /etc/cluster/fence_xvm.key -H node2 -o status #get node2 status
[node1:] fence_xvm -a 225.0.0.12 -k /etc/cluster/fence_xvm.key -H node2 -o reboot #reboot node2
[node2:] fence_xvm -a 225.0.0.12 -k /etc/cluster/fence_xvm.key -H node1 -o status #get node1 status
[node2:] fence_xvm -a 225.0.0.12 -k /etc/cluster/fence_xvm.key -H node1 -o reboot #reboot node1
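The four checks above can be collapsed into a loop; a sketch (the multicast address and key path come from the fence_virt.conf above, and DRY=1 only prints the commands so the loop can be tried off-cluster):

```shell
# Sketch: query the status of every guest through fence_xvm.
# With DRY=1 the commands are only printed instead of executed.
ADDR=225.0.0.12
KEY=/etc/cluster/fence_xvm.key
check_fence() {
    for target in node1 node2; do
        cmd="fence_xvm -a $ADDR -k $KEY -H $target -o status"
        if [ "${DRY:-0}" = "1" ]; then
            echo "$cmd"
        else
            $cmd
        fi
    done
}
DRY=1 check_fence
```

On a real node, run `check_fence` without DRY to confirm every peer answers before enabling stonith in the next step.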
2.7 Configure stonith for the cluster
[nodeX:] pcs stonith create node.fence fence_xvm key_file=/etc/cluster/fence_xvm.key
[nodeX:] pcs stonith --full
[nodeX:] pcs property set stonith-enabled=true
[nodeX:] pcs stonith
[node2:] pcs stonith fence node1 #verify stonith; node1 should reboot
[node2:] pcs property --all | grep stonith-action
2.8 Configure the virtual IP address
Create the group node.grp; resources in the same group start on the same node
[nodeX:] pcs resource create node.ip IPaddr2 ip=192.168.122.110 cidr_netmask=24 op monitor interval=30s --group node.grp
[nodeX:] pcs status
[all:] ip a
2.9 Configure drbd
[all:] rpm -ivh drbd84-utils-8.9.6-1.el7.elrepo.x86_64.rpm
[all:] rpm -ivh kmod-drbd84-8.4.8-1_1.el7.elrepo.x86_64.rpm
[all:] echo "drbd" > /etc/modules-load.d/drbd.conf
[all:] firewall-cmd --permanent --zone=trusted --add-source=192.168.122.0/24
[all:] firewall-cmd --reload
[all:] firewall-cmd --zone=trusted --list-source
[all:] semanage permissive -a drbd_t #set SELinux to permissive for the drbd_t domain
#/etc/drbd.d/node.db.res
resource node.db {
    protocol C;
    meta-disk internal;
    device /dev/drbd0;
    disk /dev/vdb;
    syncer {
        verify-alg sha1;
    }
    net {
        allow-two-primaries;
    }
    on node1 {
        address 192.168.122.100:7789;
    }
    on node2 {
        address 192.168.122.101:7789;
    }
}
2.10 Start drbd
[all:] drbdadm create-md node.db
[all:] modprobe drbd
[all:] drbdadm up node.db
[all:] cat /proc/drbd
[node1:] drbdadm primary --force node.db
[nodeX:] cat /proc/drbd
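Before creating the filesystem, /proc/drbd should report both sides UpToDate. A sketch of that check (it parses a sample status line here so it runs anywhere; on a node, feed it /proc/drbd itself):

```shell
# Sketch: report whether a /proc/drbd status stream shows both replicas
# as UpToDate. With no argument it reads standard input; on a node run:
#   drbd_in_sync /proc/drbd
drbd_in_sync() {
    if grep -q 'ds:UpToDate/UpToDate' "${1:-/dev/stdin}"; then
        echo "in sync"
    else
        echo "NOT in sync"
    fi
}
echo ' 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----' | drbd_in_sync
```

Immediately after `drbdadm primary --force`, the disk state is typically `UpToDate/Inconsistent` until the initial sync finishes, so expect "NOT in sync" for a while on a fresh device.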
[node1:] mkfs.ext4 /dev/drbd0
[all:] mkdir /node.fs
[node1:] mount /dev/drbd0 /node.fs
[node1:] echo "from node1" > /node.fs/test
[node1:] drbdadm secondary node.db
[node2:] drbdadm primary node.db
[node2:] mount /dev/drbd0 /node.fs
[node2:] cat /node.fs/test
[node2:] echo "from node2" >> /node.fs/test
2.11 Configure the drbd resource
[nodeX:] pcs cluster cib drbd_cfg
[nodeX:] pcs -f drbd_cfg resource create node.db ocf:linbit:drbd drbd_resource=node.db op monitor interval=60s
[nodeX:] pcs -f drbd_cfg resource master node.db.clone node.db master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
[nodeX:] pcs -f drbd_cfg resource show
[nodeX:] pcs cluster cib-push drbd_cfg
2.12 Configure the filesystem resource
[nodeX:] pcs cluster cib fs_cfg
[nodeX:] pcs -f fs_cfg resource create node.fs Filesystem device="/dev/drbd0" directory="/node.fs" fstype="ext4" --group node.grp
[nodeX:] pcs -f fs_cfg constraint colocation add node.fs with node.db.clone INFINITY with-rsc-role=Master
[nodeX:] pcs -f fs_cfg constraint order promote node.db.clone then start node.fs
[nodeX:] pcs -f fs_cfg constraint colocation add node.ip with node.db.clone INFINITY with-rsc-role=Master
[nodeX:] pcs -f fs_cfg resource show
[nodeX:] pcs cluster cib-push fs_cfg
[nodeX:] pcs status