LVS + ldirectord + Heartbeat
Background
1. The LVS DR working mode
2. DR mode is direct routing: traffic flows client → LVS → RS → client
3. Which takes priority, LVS rules or iptables rules? Both hook into netfilter
4. The OSPF routing protocol works at layer 3
5. RAID0 | RAID1 | RAID5 | RAID10
6. DRBD shared storage
Load balancing with LVS
Configure the yum repository; the ipvsadm package lives in the LoadBalancer channel.
LVS node: server1
Install ipvsadm | add the rules | configure the VIP
ipvsadm options: -C clear all rules; -L list the cluster table; -A add a virtual service; -A ... -s set its scheduling algorithm; -a add a real server to a service; -t TCP service; -r real server address; -g direct-routing (DR) mode.

[root@server1 ~]# ipvsadm -A -t 172.25.66.100:80 -s rr
[root@server1 ~]# ipvsadm -a -t 172.25.66.100:80 -r 172.25.66.2:80 -g
[root@server1 ~]# ipvsadm -a -t 172.25.66.100:80 -r 172.25.66.3:80 -g
[root@server1 ~]# ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.66.100:http rr
  -> server2:http                 Route   1      0          0
  -> server3:http                 Route   1      0          0
[root@server1 ~]# /etc/init.d/ipvsadm save
ipvsadm: Saving IPVS table to /etc/sysconfig/ipvsadm:      [  OK  ]
[root@server1 ~]# ip addr add 172.25.66.100/24 dev eth1    # add the virtual IP
[root@server1 ~]#
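As a side note, the rr scheduler selected by `-s rr` simply cycles through the real servers in order. A minimal user-space sketch of that behaviour (the IPs are the two real servers above; `next_rs` is a hypothetical helper for illustration, not part of ipvsadm):

```shell
#!/bin/bash
# Sketch only: what round-robin scheduling does, outside the kernel.
RS_LIST=(172.25.66.2 172.25.66.3)
idx=0

next_rs() {
    # print the current real server, then advance the cursor
    echo "${RS_LIST[idx]}"
    idx=$(( (idx + 1) % ${#RS_LIST[@]} ))
}

for i in {1..4}; do
    next_rs    # prints 172.25.66.2, 172.25.66.3, 172.25.66.2, 172.25.66.3
done
```

This is exactly the alternation the curl test below demonstrates.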
Real-server side: server2 | server3
Install httpd and write a test page | configure the VIP | set up ARP suppression.
ARP suppression: in DR mode every real server also holds the VIP, so it would answer the client's ARP broadcast for it; those replies must be suppressed so that only the director answers.

[root@server2 ~]# yum install arptables_jf.x86_64 -y
[root@server2 ~]# arptables -A IN -d 172.25.66.100 -j DROP    # drop incoming ARP requests for the VIP
[root@server2 ~]# arptables -A OUT -s 172.25.66.100 -j mangle --mangle-ip-s 172.25.66.2    # rewrite the source on outgoing ARP
[root@server2 ~]# /etc/init.d/arptables_jf save    # save the rules
Saving current rules to /etc/sysconfig/arptables:          [  OK  ]
[root@server2 ~]#

Repeat the same on server3, using 172.25.66.3 as the mangled source address.
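For reference, the same suppression is often done with kernel ARP sysctls instead of arptables_jf, assuming the VIP is bound to the loopback interface on each real server (a common alternative, not what this article uses):

```
# /etc/sysctl.conf on each real server (sketch; VIP bound to lo, not eth0)
net.ipv4.conf.all.arp_ignore = 1      # answer ARP only if the target IP is on the receiving interface
net.ipv4.conf.all.arp_announce = 2    # always use the best local source IP in ARP announcements
```

Apply with `sysctl -p`; either mechanism keeps the real servers silent when the VIP is resolved.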
Test:
[root@foundation66 ~]# arp -d 172.25.66.100    # flush the cached ARP entry
[root@foundation66 ~]# arp -a 172.25.66.100    # see which MAC answers now
[root@foundation66 days3]# for i in {1..10}; do curl 172.25.66.100; done
<h1>server2<h1>
<h1>server3<h1>
<h1>server2<h1>
<h1>server3<h1>
<h1>server2<h1>
<h1>server3<h1>
<h1>server2<h1>
<h1>server3<h1>
<h1>server2<h1>
<h1>server3<h1>
[root@foundation66 days3]# arp -an | grep 100    # the host behind the virtual IP
? (172.25.66.100) at 52:54:00:0f:0f:8f [ether] on br0
[root@foundation66 days3]#
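If you want to script this check instead of reading the output by eye, the answering MAC can be pulled out of the `arp -an` line. A sketch using the sample line above (the field position is an assumption about this output format):

```shell
#!/bin/bash
# Extract the MAC that answers for the VIP, to confirm it belongs to the
# director (server1) and not to a real server.
line='? (172.25.66.100) at 52:54:00:0f:0f:8f [ether] on br0'
mac=$(echo "$line" | awk '{print $4}')    # 4th field is the hardware address
echo "$mac"
```

Comparing `$mac` against the director's `ip link` output confirms the ARP suppression is working.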
ldirectord: adding health checks
* LVS itself has no way to health-check the back ends.
* ldirectord supplements LVS: it generates the IPVS rules itself from its configuration file (no manual ipvsadm needed) and performs the back-end checks efficiently.
* Resources to start: the VIP and ldirectord (lighter-weight than keepalived).
Without health checks, a failed real server easily breaks the load balancing, hence the need for them. The ldirectord daemon monitors each real server by sending requests to the cluster resource on its real IP (RIP). Once a real server's service dies or its NIC fails, the director stops directing client requests to it.

[root@server1 ~]# ls
ldirectord-3.9.5-3.1.x86_64.rpm
[root@server1 ~]# yum install ldirectord-3.9.5-3.1.x86_64.rpm -y
[root@server1 ~]# rpm -q ldirectord-3.9.5-3.1.x86_64.rpm    # -q takes a package name, not a file name
package ldirectord-3.9.5-3.1.x86_64.rpm is not installed
[root@server1 ~]# rpm -ql ldirectord-3.9.5-3.1
/etc/ha.d
/etc/ha.d/resource.d
/etc/ha.d/resource.d/ldirectord
/etc/init.d/ldirectord
/etc/logrotate.d/ldirectord
/usr/lib/ocf/resource.d/heartbeat/ldirectord
/usr/sbin/ldirectord
/usr/share/doc/ldirectord-3.9.5
/usr/share/doc/ldirectord-3.9.5/COPYING
/usr/share/doc/ldirectord-3.9.5/ldirectord.cf
/usr/share/man/man8/ldirectord.8.gz
[root@server1 ~]# cp /usr/share/doc/ldirectord-3.9.5/ldirectord.cf /etc/ha.d/
[root@server1 ~]# cd /etc/ha.d/
[root@server1 ha.d]# ls
ldirectord.cf  resource.d  shellfuncs
[root@server1 ha.d]# vim ldirectord.cf
......
 25 virtual=172.25.66.100:80
 26         real=172.25.66.2:80 gate
 27         real=172.25.66.3:80 gate
 28         fallback=127.0.0.1:80 gate    # when all real servers are down, serve locally
 29         service=http
 30         scheduler=rr
 31         #persistent=600
 32         #netmask=255.255.255.255
 33         protocol=tcp
 34         checktype=negotiate
 35         checkport=80
 36         request="index.html"
 37         #receive="Test Page"
 38         #virtualhost=www.x.y.z
......
[root@server1 ha.d]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.66.100:http rr
  -> server2:http                 Route   1      0          0
  -> server3:http                 Route   1      0          0
[root@server1 ha.d]# ipvsadm -C    # clear the manual rules; ldirectord will recreate them
[root@server1 ha.d]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@server1 ha.d]# cat /var/www/html/index.html    # local maintenance page ("under maintenance")
正在维护中.....
[root@server1 ha.d]# /etc/init.d/ldirectord start
Starting ldirectord... success
[root@server1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.66.100:80 rr
  -> 172.25.66.2:80               Route   1      0          0
  -> 172.25.66.3:80               Route   1      0          0
[root@server1 ha.d]#
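The checktype=negotiate / fallback settings above amount to: probe each real server, keep the ones that answer, and route to 127.0.0.1 when none do. A rough sketch of that logic (check_rs here is a stub standing in for the real HTTP probe against index.html; this is not ldirectord's actual code):

```shell
#!/bin/bash
# Sketch of ldirectord's check-and-fallback behaviour.
REAL_SERVERS=(172.25.66.2 172.25.66.3)
FALLBACK=127.0.0.1

check_rs() {
    # stand-in for: curl -s http://$1/index.html
    # pretend only 172.25.66.2 still answers
    [ "$1" = "172.25.66.2" ]
}

alive=()
for rs in "${REAL_SERVERS[@]}"; do
    check_rs "$rs" && alive+=("$rs")
done
if [ "${#alive[@]}" -eq 0 ]; then
    alive=("$FALLBACK")    # everyone is down: serve the local maintenance page
fi
echo "${alive[@]}"
```

With one server stubbed dead, only the surviving one remains in the pool, which matches the curl behaviour seen in the test below.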
Test
[root@foundation66 Desktop]# for i in {1..10}; do curl 172.25.66.100; done
<h1>server3<h1>
<h1>server2<h1>
<h1>server3<h1>
<h1>server2<h1>
<h1>server3<h1>
<h1>server2<h1>
<h1>server3<h1>
<h1>server2<h1>
<h1>server3<h1>
<h1>server2<h1>
*** stop httpd on server2
[root@foundation66 Desktop]# for i in {1..5}; do curl 172.25.66.100; done
<h1>server3<h1>
<h1>server3<h1>
<h1>server3<h1>
<h1>server3<h1>
<h1>server3<h1>
*** stop httpd on both server2 and server3
[root@foundation66 Desktop]# for i in {1..5}; do curl 172.25.66.100; done
正在维护中.....
正在维护中.....
正在维护中.....
正在维护中.....
正在维护中.....
[root@foundation66 Desktop]#
High availability with Heartbeat
Primary node
# Configure the yum repository
[root@server1 ~]# ls
heartbeat-3.0.4-2.el6.x86_64.rpm        heartbeat-libs-3.0.4-2.el6.x86_64.rpm
heartbeat-devel-3.0.4-2.el6.x86_64.rpm  ldirectord-3.9.5-3.1.x86_64.rpm
[root@server1 ~]# yum install heartbeat-3.0.4-2.el6.x86_64.rpm heartbeat-libs-3.0.4-2.el6.x86_64.rpm heartbeat-devel-3.0.4-2.el6.x86_64.rpm -y
[root@server1 ~]# rpm -q heartbeat -d    # list its documentation files
[root@server1 ~]# cd /usr/share/doc/heartbeat-3.0.4/
[root@server1 heartbeat-3.0.4]# ls
apphbd.cf  AUTHORS    COPYING       ha.cf        README
authkeys   ChangeLog  COPYING.LGPL  haresources
[root@server1 heartbeat-3.0.4]# cp authkeys haresources ha.cf /etc/ha.d/
[root@server1 heartbeat-3.0.4]# cd /etc/ha.d/
[root@server1 ha.d]# chmod 600 authkeys
[root@server1 ha.d]# ls
authkeys  harc         ldirectord.cf  README.config  shellfuncs
ha.cf     haresources  rc.d           resource.d
[root@server1 ha.d]# vim ha.cf
.....
 34 logfacility     local0
 48 keepalive 2
 56 deadtime 30
 61 warntime 10
 71 initdead 60
 76 udpport 753
157 auto_failback on
211 node    server1    # the node listed first is the primary
212 node    server4
220 ping 172.25.66.250
253 respawn hacluster /usr/lib64/heartbeat/ipfail
259 apiauth ipfail gid=haclient uid=hacluster
[root@server1 ha.d]# vim authkeys
.....
 23 auth 1
 24 1 crc
.....
[root@server1 ha.d]# vim haresources
.....
*** Define the VIP | ldirectord | httpd (start the local httpd)
150 server1 IPaddr::172.25.66.100/24/eth1 ldirectord httpd
.....
[root@server1 ha.d]# /etc/init.d/heartbeat restart
[root@server1 ha.d]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:0f:0f:8f brd ff:ff:ff:ff:ff:ff
    inet 172.25.66.1/24 brd 172.25.66.255 scope global eth1
    inet 172.25.0.100/24 scope global eth1
    inet 172.25.66.100/24 brd 172.25.66.255 scope global secondary eth1
    inet6 fe80::5054:ff:fe0f:f8f/64 scope link
       valid_lft forever preferred_lft forever
[root@server1 ha.d]# /etc/init.d/httpd status
httpd (pid 5861) is running...
[root@server1 ha.d]#
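The keepalive/warntime/deadtime values in ha.cf mean: send heartbeats every 2 s, log a warning after 10 s of silence, and declare the peer dead (taking over its resources) after 30 s. A sketch of that decision rule (peer_state is a hypothetical helper for illustration, not part of heartbeat):

```shell
#!/bin/bash
# Sketch of how the ha.cf timers are interpreted.
DEADTIME=30
WARNTIME=10

peer_state() {
    # $1 = seconds since the last heartbeat arrived from the peer
    if   [ "$1" -ge "$DEADTIME" ]; then echo "dead: take over resources"
    elif [ "$1" -ge "$WARNTIME" ]; then echo "late: log a warning"
    else echo "alive"
    fi
}

peer_state 5     # alive
peer_state 15    # late: log a warning
peer_state 45    # dead: take over resources
```

initdead (60 s here) is simply a longer deadtime applied once at startup, to ride out boot-time delays.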
Backup node
[root@server1 ~]# scp heartbeat-3.0.4-2.el6.x86_64.rpm heartbeat-devel-3.0.4-2.el6.x86_64.rpm heartbeat-libs-3.0.4-2.el6.x86_64.rpm root@172.25.66.4:/root/
[root@server1 ~]# scp /etc/yum.repos.d/rhel-source.repo root@172.25.66.4:/etc/yum.repos.d/
[root@server4 ~]# ls
heartbeat-3.0.4-2.el6.x86_64.rpm        heartbeat-libs-3.0.4-2.el6.x86_64.rpm
heartbeat-devel-3.0.4-2.el6.x86_64.rpm  ldirectord-3.9.5-3.1.x86_64.rpm
[root@server4 ~]# yum install ldirectord-3.9.5-3.1.x86_64.rpm -y
Install the heartbeat RPMs and copy the /etc/ha.d configuration to server4 the same way as on server1.
Test:
[root@server1 ha.d]# /etc/init.d/heartbeat restart
Stopping High-Availability services: Done.
Waiting to allow resource takeover to complete: Done.
Starting High-Availability services: INFO:  Resource is stopped
Done.
[root@server1 ha.d]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.66.100:http rr
  -> server2:http                 Route   1      0          0
  -> server3:http                 Route   1      0          0
[root@server1 ~]# /etc/init.d/heartbeat stop    # stop the primary; the VIP migrates
Stopping High-Availability services: Done.
[root@server1 ~]#
[root@server4 ~]# /etc/init.d/heartbeat status
heartbeat OK [pid 5007 et al] is running on server4 [server4]...
[root@server4 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:8c:51:47 brd ff:ff:ff:ff:ff:ff
    inet 172.25.66.4/24 brd 172.25.66.255 scope global eth1
    inet 172.25.66.100/24 brd 172.25.66.255 scope global secondary eth1
    inet6 fe80::5054:ff:fe8c:5147/64 scope link
       valid_lft forever preferred_lft forever
[root@server4 ~]#
Reference: http://blog.csdn.net/zhoumengyang1993/article/details/78481259
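A quick way to script the "which node holds the VIP" check used in this test: grep the `ip addr` output for the VIP. The sample output below is pasted from server4 above, and vip_here is a hypothetical helper:

```shell
#!/bin/bash
# Decide from `ip addr` output whether this node currently holds the VIP.
VIP=172.25.66.100

vip_here() {
    # $1 = output of `ip addr`; -F matches the address as a fixed string
    echo "$1" | grep -qF "inet ${VIP}/"
}

sample='inet 172.25.66.4/24 brd 172.25.66.255 scope global eth1
inet 172.25.66.100/24 brd 172.25.66.255 scope global secondary eth1'

vip_here "$sample" && echo "VIP is on this node"
```

On the live systems you would call it as `vip_here "$(ip addr show eth1)"` on each node after a failover.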
Shared storage with DRBD
Reference: http://freeloda.blog.51cto.com/2033581/1275384 and the DRBD configuration PDF (drdb配置.pdf).
[root@server1 ~]# yum install gcc flex rpm-build kernel-devel -y
[root@server1 ~]# tar zxf drbd-8.4.3.tar.gz
[root@server1 ~]# cd drbd-8.4.3
[root@server1 drbd-8.4.3]# ./configure --with-km --enable-spec
[root@server1 ~]# cd rpmbuild/
[root@server1 rpmbuild]# ls
BUILD  BUILDROOT  RPMS  SOURCES  SPECS  SRPMS
[root@server1 rpmbuild]# cd
[root@server1 ~]# cp drbd-8.4.3.tar.gz rpmbuild/SOURCES/
[root@server1 ~]# cd drbd-8.4.3
[root@server1 drbd-8.4.3]# rpmbuild -bb drbd.spec
[root@server1 drbd-8.4.3]# rpmbuild -bb drbd-km.spec
[root@server1 drbd-8.4.3]# cd ~/rpmbuild/RPMS/x86_64/
[root@server1 x86_64]# ls
drbd-8.4.3-2.el6.x86_64.rpm
drbd-bash-completion-8.4.3-2.el6.x86_64.rpm
drbd-heartbeat-8.4.3-2.el6.x86_64.rpm
drbd-km-2.6.32_431.el6.x86_64-8.4.3-2.el6.x86_64.rpm
drbd-pacemaker-8.4.3-2.el6.x86_64.rpm
drbd-udev-8.4.3-2.el6.x86_64.rpm
drbd-utils-8.4.3-2.el6.x86_64.rpm
drbd-xen-8.4.3-2.el6.x86_64.rpm
[root@server1 x86_64]# rpm -ivh *
Preparing...                ########################################### [100%]
   1:drbd-utils             ########################################### [ 13%]
   2:drbd-bash-completion   ########################################### [ 25%]
   3:drbd-heartbeat         ########################################### [ 38%]
   4:drbd-pacemaker         ########################################### [ 50%]
   5:drbd-udev              ########################################### [ 63%]
   6:drbd-xen               ########################################### [ 75%]
   7:drbd                   ########################################### [ 88%]
   8:drbd-km-2.6.32_431.el6.########################################### [100%]
[root@server1 x86_64]#
Install on server4
[root@server1 x86_64]# scp * root@172.25.66.4:/root/
[root@server1 ~]# cd /etc/drbd.d/
[root@server1 drbd.d]# vim demo.res
.....
resource demo {
        meta-disk internal;
        device /dev/drbd1;
        syncer {
                verify-alg sha1;
        }
        net {
                allow-two-primaries;
        }
        on server1 {
                disk /dev/vdb;
                address 172.25.66.1:7789;
        }
        on server4 {
                disk /dev/vdb;
                address 172.25.66.4:7789;
        }
}
.....
[root@server1 drbd.d]# ls
demo.res  global_common.conf
[root@server1 drbd.d]# scp demo.res server4:/etc/drbd.d/
* Initialize and start DRBD on both nodes
[root@server1 drbd.d]# drbdadm create-md demo
[root@server1 drbd.d]# /etc/init.d/drbd start
Starting DRBD resources: [
     create res: demo
   prepare disk: demo
    adjust disk: demo
     adjust net: demo
]
.......
[root@server1 drbd.d]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@server1, 2017-09-24 16:32:01
 1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1048508
[root@server1 drbd.d]#
Set the primary/secondary roles and let the nodes synchronize
[root@server1 ~]# drbdsetup /dev/drbd1 primary --force    # promote this node to primary
[root@server1 ~]# cat /proc/drbd    # watch the sync; shows whether this node is primary or secondary
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@server1, 2017-09-24 16:32:01
 1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:1048508 nr:0 dw:0 dr:1049172 al:0 bm:64 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
[root@server1 ~]# mkfs.ext4 /dev/drbd1    # create the filesystem
[root@server1 ~]# mount /dev/drbd1 /var/www/html/
[root@server1 ~]# cp /etc/passwd /var/www/html/
[root@server1 ~]# ls /var/www/html/
lost+found  passwd
[root@server1 ~]# ...
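The `ro:` field of /proc/drbd is what tells you the local/peer roles (Primary/Secondary). A sketch that extracts the local role from the status line above, useful when scripting failover checks:

```shell
#!/bin/bash
# Pull the local DRBD role out of a /proc/drbd status line
# (sample line copied from the output above).
status='1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----'
role=$(echo "$status" | sed -n 's/.*ro:\([^/]*\)\/.*/\1/p')    # text between "ro:" and "/"
echo "$role"    # Primary
```

On a live node the same one-liner works against `cat /proc/drbd`; only the node whose role is Primary may mount /dev/drbd1.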
Shared storage notes:
NAS / SAN
NFS: file-level sharing. Block devices: only a block device can be formatted.
Cluster filesystems (needed when multiple nodes use the same filesystem): GFS2 | OCFS2 | cLVM (logical volumes allow dynamic extension of files, but are hard to recover).
The two nodes are first built into a high-availability cluster, and a distributed lock manager is defined, which lets multiple processes share files by providing distributed locking.
http://wuhf2015.blog.51cto.com/8213008/1654648