lvs2


High-availability cluster + director (heartbeat + ldirectord)

server1:
yum install -y ldirectord-3.9.5-3.1.x86_64.rpm    install ldirectord, which performs health checks on the backend real servers (RS)
ipvsadm -C    clear the existing LVS rules; scheduling will now be driven by the cluster resource script
cp /usr/share/doc/ldirectord-3.9.5/ldirectord.cf /etc/ha.d/    copy the sample ldirectord config into the heartbeat directory

vim /etc/ha.d/ldirectord.cf    edit the ldirectord configuration file

virtual=172.25.31.100:80            ## LVS virtual service: VIP and port
        real=172.25.31.2:80 gate    ## real server, also on port 80; "gate" = direct routing
        real=172.25.31.3:80 gate
        fallback=127.0.0.1:80 gate  ## if all real servers fail, fall back to the local host
        service=http                ## service type
        scheduler=rr                ## scheduling algorithm: round robin
        #persistent=600             ## persistent-connection timeout
        #netmask=255.255.255.255
        protocol=tcp                ## protocol used by the virtual service
        checktype=negotiate         ## check type: negotiate (full request/response check)
        checkport=80                ## port used for the health check
        request="index.html"        ## page requested when checking a real server
#       receive="Test Page"
#       virtualhost=www.x.y.z
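
To see roughly what the negotiate check does, you can run the equivalent request against each real server by hand. This is only an illustrative sketch; the RS addresses and page are the ones from the config above:

for rs in 172.25.31.2 172.25.31.3; do
    # fetch index.html the same way the health check does; a failed request marks the RS as down
    curl -s -o /dev/null http://$rs/index.html && echo "$rs OK" || echo "$rs FAIL"
done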

/etc/init.d/ldirectord start    start the ldirectord service
ipvsadm -l    list the IPVS table
/etc/init.d/httpd stop    stop the local httpd (on the director it only serves as the fallback)
ip addr del 172.25.31.100/24 dev eth0    remove the VIP added earlier; heartbeat will manage it from now on

vim haresources    edit the resource file and add ldirectord (LVS health checking) to the resource group

server1  IPaddr::172.25.31.100/24/eth0 httpd  ldirectord  ## resource group (VIP, httpd, ldirectord)

scp haresources ldirectord.cf 172.25.31.4:/etc/ha.d/    copy the resource file and ldirectord config to server4
/etc/init.d/heartbeat start    start heartbeat
ipvsadm -l    check whether this node has taken over the resources

server4:
yum install -y ldirectord-3.9.5-3.1.x86_64.rpm    install ldirectord here as well, for RS health checks
ll /etc/ha.d/    verify that the resource file and ldirectord config arrived
/etc/init.d/heartbeat start    start heartbeat
ipvsadm -l    list the IPVS table and check the resource takeover state

Test: scheduling between server2 and server3, and ldirectord's health checks on the real servers
server2: /etc/init.d/httpd stop    simulate a backend RS failure
server3: /etc/init.d/httpd start

curl 172.25.31.100
server3-www.westos.org
curl 172.25.31.100
server3-www.westos.org

server2: /etc/init.d/httpd start    simulate the backend RS recovering

curl 172.25.31.100
server2-www.westos.org
curl 172.25.31.100
server3-www.westos.org
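
A small loop makes the round-robin pattern easier to see from the client side; this is only an ad-hoc check, assuming the VIP is 172.25.31.100 as above:

for i in $(seq 1 6); do
    # once both real servers are healthy, the answers should alternate between server2 and server3
    curl -s 172.25.31.100
done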

Test: resource takeover
server1: /etc/init.d/heartbeat stop    stop heartbeat
[kiosk@foundation31 Desktop]$ arp -an|grep 100    filter the ARP cache entry for the .100 host
? (172.25.31.100) at 52:54:00:b6:e0:73 [ether] on br0
[kiosk@foundation31 Desktop]$ arp -an|grep 100
? (172.25.31.100) at 52:54:00:2f:b1:55 [ether] on br0
server4 has taken over the resources; compare the MAC address reported by ip addr on server4 to confirm
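
A quick way to confirm which node owns the VIP is to compare that ARP entry with the interface MAC on server4 (assuming the VIP is bound on eth0):

# on server4: this MAC should match what the client now sees for 172.25.31.100
ip addr show eth0 | grep link/ether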

keepalived

Keepalived – LVS management software
Use LVS + keepalived for high availability and load balancing

server1(master):
/etc/init.d/heartbeat stop    stop heartbeat
chkconfig heartbeat off    disable heartbeat at boot

tar zxf keepalived-1.3.5.tar.gz    unpack the source tarball
yum install gcc -y    install the build dependencies
yum install openssl-devel -y

cd keepalived-1.3.5
./configure --prefix=/usr/local/keepalived --with-init=SYSV    set the install prefix and the SysV init style
make && make install    compile and install
cd /usr/local/
scp -r keepalived/ server4:/usr/local/    copy the installed keepalived tree to server4
ln -s /usr/local/keepalived/sbin/keepalived /sbin/    link the keepalived binary into the PATH
ln -s /usr/local/keepalived/etc/keepalived/ /etc/    link the configuration directory
ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/    environment file referenced by the init script
ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/    init script; once under /etc/init.d/ it can be run as a service
chmod +x /usr/local/keepalived/etc/rc.d/init.d/keepalived    make the init script executable so the service commands work
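
A quick sanity check that the links are in place (only a suggested verification, not part of the original walkthrough):

keepalived --version                              # the binary linked into /sbin should report 1.3.5
ls -ld /etc/init.d/keepalived /etc/keepalived     # both symlinks should point into /usr/local/keepalived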

cd /etc/keepalived/
vim keepalived.conf    configuration on the master load balancer

global_defs {
   notification_email {        ## addresses keepalived notifies by mail when an event occurs
    root@localhost             ## local user
   }
   notification_email_from keepalived@server1    ## sender address
   smtp_server 127.0.0.1       ## SMTP server used to send the mail
   smtp_connect_timeout 30     ## SMTP connect timeout
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
vrrp_instance VI_1 {
    state MASTER                ## this node is the master
    interface eth0              ## interface the virtual IP is bound to
    virtual_router_id 82        ## VRRP group id; must be identical on both nodes so they join the same group
    priority 100                ## master priority (1-254); the backup must be lower than the master
    advert_int 1
    authentication {            ## authentication settings; must match on both nodes
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {         ## virtual IP; must be identical on both nodes
    172.25.31.100
    }
}
virtual_server 172.25.31.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR                  ## DR mode
#   persistence_timeout 50
    protocol TCP
    real_server 172.25.31.2 80 {    ## real server definition
        weight 1
        TCP_CHECK {             ## health-check settings
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.25.31.3 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

/etc/init.d/keepalived start    start the keepalived service
ip addr    check that the VIP is now on this node
tail -f /var/log/messages    follow the log to watch the VRRP state
scp keepalived.conf 172.25.31.4:/etc/keepalived/    copy the master config to the backup, server4
ipvsadm -l    list the IPVS table and check the resource state
iptables -L    list the firewall rules
iptables -F    flush the rules this keepalived version generates automatically
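
The automatically generated rules come from the vrrp_strict option in global_defs: keepalived 1.3.x then installs iptables rules that drop traffic to the VIP, which breaks the LVS service. Flushing them as above works, but removing vrrp_strict from keepalived.conf and restarting keepalived avoids repeating it after every start. A quick way to see whether such rules target the VIP:

iptables -nL | grep 172.25.31.100    # with vrrp_strict active, DROP rules referencing the VIP typically show up here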

server4(backup):
/etc/init.d/heartbeat stop    stop heartbeat
chkconfig heartbeat off    disable heartbeat at boot

ln -s /usr/local/keepalived/sbin/keepalived /sbin/    create the same symlinks as on server1
ln -s /usr/local/keepalived/etc/keepalived/ /etc/
ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
cd /usr/local/keepalived/etc/rc.d/init.d/
chmod +x keepalived    make the init script executable

vim /etc/keepalived/keepalived.conf    configuration on the BACKUP server

! Configuration File for keepalived
global_defs {
   notification_email {
    root@localhost
   }
   notification_email_from keepalived@server4    ## sender address for mail notifications from this node
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
vrrp_instance VI_1 {
    state BACKUP                ## changed to BACKUP
    interface eth0
    virtual_router_id 82
    priority 50                 ## must be lower than the master's priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
    172.25.31.100
    }
}
virtual_server 172.25.31.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
#   persistence_timeout 50
    protocol TCP
    real_server 172.25.31.2 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.25.31.3 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

/etc/init.d/keepalived start    start keepalived
ipvsadm -l    check the LVS state
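
While server1 is healthy, the backup should not hold the VIP; a quick check on server4 (only a suggested verification):

ip addr show eth0 | grep 172.25.31.100    # should print nothing on the backup while the master holds the VIP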

server2/3(RealServer):
/etc/init.d/httpd restart    make sure httpd is running on the real servers
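
For DR mode the real servers must also hold the VIP locally and must not answer ARP requests for it. That was presumably already configured in the first part of this series; the lines below are only a reminder sketch using the sysctl variant (the original setup may have used arptables instead):

# bind the VIP on loopback and keep the real server from answering ARP for it
ip addr add 172.25.31.100/32 dev lo
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2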

Test: load balancing

[root@foundation31 Desktop]# curl 172.25.31.100    ## send an http request
server2-www.westos.org
[root@foundation31 Desktop]# curl 172.25.31.100    ## send an http request
server3-www.westos.org
[root@foundation31 Desktop]# curl 172.25.31.100    ## send an http request
server2-www.westos.org

Stop the keepalived service on server1 (MASTER) and check whether server4 (BACKUP) takes over the service correctly.

[root@server1 keepalived]# ipvsadm -Ln    ## LVS state on the master: the table is now empty
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@server4 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:b6:e0:73 brd ff:ff:ff:ff:ff:ff
    inet 172.25.31.4/24 brd 172.25.31.255 scope global eth0
    inet 172.25.31.100/32 scope global eth0
    inet6 fe80::5054:ff:feb6:e073/64 scope link
       valid_lft forever preferred_lft forever
[root@server4 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.31.100:80 rr
  -> 172.25.31.2:80               Route   1      0          0
  -> 172.25.31.3:80               Route   1      0          0

Test: load balancing after the takeover

[root@foundation31 Desktop]# curl 172.25.31.100
server2-www.westos.org
[root@foundation31 Desktop]# curl 172.25.31.100
server3-www.westos.org
[root@foundation31 Desktop]# curl 172.25.31.100
server2-www.westos.org