RHCS Clustering


RHCS (Red Hat Cluster Suite)
Goal: use Luci/Ricci to build a web cluster
Setup: cluster node 1 -> 172.25.30.1 (server1)
       cluster node 2 -> 172.25.30.4 (server4)

I. Configuration

1. Configure the yum repositories

vim /etc/yum.repos.d/rhel-source.repo

[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.31.250/rhel6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[HighAvailability]        ## high availability
name=HighAvailability
baseurl=http://172.25.30.250/rhel6.5/HighAvailability
gpgcheck=0

[LoadBalancer]            ## load balancing
name=LoadBalancer
baseurl=http://172.25.30.250/rhel6.5/LoadBalancer
gpgcheck=0

[ResilientStorage]        ## resilient storage
name=ResilientStorage
baseurl=http://172.25.30.250/rhel6.5/ResilientStorage
gpgcheck=0

[ScalableFileSystem]      ## scalable file system
name=ScalableFileSystem
baseurl=http://172.25.31.250/rhel6.5/ScalableFileSystem
gpgcheck=0
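As a quick check (not an original step), the four extra repositories can be confirmed before installing anything; the repository names below simply match the sections defined above:

yum clean all
yum repolist    ## should list rhel-source, HighAvailability, LoadBalancer, ResilientStorage, ScalableFileSystem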

2. Install the suite

yum install -y ricci
passwd ricci                ## set an initial password for the ricci user
/etc/init.d/ricci start     ## start the service
chkconfig ricci on          ## enable at boot
yum install -y luci
/etc/init.d/luci start      ## start the service
chkconfig luci on           ## enable at boot
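A suggested sanity check (not in the original steps): ricci normally listens on TCP port 11111 and luci serves its HTTPS interface on port 8084, so both can be verified with netstat before moving on:

netstat -antlp | grep 11111    ## ricci agent
netstat -antlp | grep 8084     ## luci web interface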

3. Configure server4

scp rhel-source.repo 172.25.31.4:/etc/yum.repos.d/
yum install -y ricci
passwd ricci
/etc/init.d/ricci start
chkconfig ricci on

4. server1 and server4 must be able to resolve each other's hostnames (DNS or /etc/hosts).
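A minimal sketch, assuming the resolution is done with /etc/hosts on both nodes (the addresses follow the setup at the top of this article):

## /etc/hosts on server1 and server4
172.25.30.1   server1
172.25.30.4   server4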

5. Test

Visit https://172.25.31.1:8084 (the luci web interface) in a browser.
Click Create, enter the two host names, and add them as the two cluster nodes.
Once the nodes are created, a cluster.conf file is generated under /etc/cluster on node 1 and node 2.
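Before moving on, it is worth confirming (a suggested check, not in the original) that both nodes have joined and the cluster is quorate:

[root@server1 ~]# clustat    ## both server1 and server4 should show Online, and Member Status should be Quorate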

II. High-availability cluster

Preparation: fence-virtd-libvirt.x86_64 on the host, and a fence device in the cluster
1. Configuration on the physical host

[root@foundation30 ~]# yum install -y fence-virtd-libvirt.x86_64
[root@foundation30 ~]# rpm -qa | grep fence        ## the fence packages that need to be installed
fence-virtd-multicast-0.3.2-2.el7.x86_64
libxshmfence-1.2-1.el7.x86_64
fence-virtd-0.3.2-2.el7.x86_64
fence-virtd-libvirt-0.3.2-2.el7.x86_64
[root@foundation30 ~]# fence_virtd -c              ## interactive fence_virtd configuration
Module search path [/usr/lib64/fence-virt]:        ## module search path

Available backends:
    libvirt 0.1
Available listeners:
    multicast 1.2

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]:                       ## multicast listener mode

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:                 ## listening address, keep the default

Using ipv4 as family.

Multicast IP Port [1229]:                          ## multicast port

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.

Interface [virbr0]: br0                            ## use the br0 bridge

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]:             ## where the key file will be generated

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [libvirt]:                          ## libvirt backend module

Configuration complete.

=== Begin Configuration ===
backends {
    libvirt {
        uri = "qemu:///system";
    }
}

listeners {
    multicast {
        port = "1229";
        family = "ipv4";
        interface = "br0";
        address = "225.0.0.12";
        key_file = "/etc/cluster/fence_xvm.key";
    }
}

fence_virtd {
    module_path = "/usr/lib64/fence-virt";
    backend = "libvirt";
    listener = "multicast";
}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]?
y    ## confirm
[root@foundation30 ~]# mkdir /etc/cluster/
[root@foundation30 ~]# ll -d /etc/cluster/
drwxr-xr-x 2 root root 6 Jul 24 10:27 /etc/cluster/
[root@foundation30 ~]# ll /dev/urandom
crw-rw-rw- 1 root root 1, 9 Jul 24 09:00 /dev/urandom
[root@foundation30 ~]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1    ## generate the key file
[root@foundation30 ~]# cd /etc/cluster/
[root@foundation30 cluster]# ll
total 4
-rw-r--r-- 1 root root 128 Jul 24 10:28 fence_xvm.key
[root@foundation30 cluster]# systemctl restart fence_virtd
[root@foundation30 cluster]# systemctl status fence_virtd
[root@foundation30 cluster]# netstat -anulp | grep :1229
udp        0      0 0.0.0.0:1229            0.0.0.0:*                           8862/fence_virtd
[root@foundation30 cluster]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps
[root@foundation30 cluster]# scp fence_xvm.key root@172.25.30.1:/etc/cluster/
[root@foundation30 cluster]# scp fence_xvm.key root@172.25.31.4:/etc/cluster/

2. Verify on the nodes
On server1 and server4:
[root@server1 ~]# cd /etc/cluster/
[root@server1 cluster]# ls
cluster.conf cman-notify.d fence_xvm.key
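Before wiring fencing into the cluster configuration, the multicast path and the shared key can be tested from a node; as a suggested check (not in the original steps), fence_xvm -o list asks fence_virtd on the host for the domains it can fence:

[root@server1 cluster]# fence_xvm -o list    ## should print each virtual machine name with its UUID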

3. Create the fence device

[root@server1 cluster]# cat cluster.conf    ## check that the fence device was written to the configuration
<?xml version="1.0"?>
<cluster config_version="2" name="pucca">
    <clusternodes>
        <clusternode name="server1" nodeid="1"/>
        <clusternode name="server4" nodeid="2"/>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices>
        <fencedevice agent="fence_xvm" name="vmfence"/>
    </fencedevices>
</cluster>

In the luci web interface:
Nodes -> server1 -> Add Fence Method: fence1, then Add Fence Instance: vmfence, Domain: the VM's UUID (the same is done for server4).
Submit, then cat cluster.conf again to check that the fence configuration was written.
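After submitting, each clusternode entry in cluster.conf gains a fence block roughly like the sketch below (the domain value is a placeholder for the actual UUID entered in luci):

<clusternode name="server1" nodeid="1">
    <fence>
        <method name="fence1">
            <device domain="UUID-of-server1-vm" name="vmfence"/>
        </method>
    </fence>
</clusternode>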

Test:
[root@server1 cluster]# fence_node server4       ## fence server4: the server4 VM is powered off and restarted
[root@server4 ~]# ip link set eth0 down          ## take the NIC down; the node is immediately power-cycled by the fence device

III. Load balancing
server1:

yum install -y httpd
/etc/init.d/httpd start
vim /var/www/html/index.html

[root@server1 html]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:2f:b1:55 brd ff:ff:ff:ff:ff:ff
    inet 172.25.31.1/24 brd 172.25.31.255 scope global eth0
    inet 172.25.31.100/24 scope global secondary eth0
    inet6 fe80::5054:ff:fe2f:b155/64 scope link
       valid_lft forever preferred_lft forever
[root@server1 html]# clustat
Cluster Status for pucca @ Mon Jul 24 11:46:06 2017
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server1                                     1 Online, Local, rgmanager
 server4                                     2 Online, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:apache                 server1                        started
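The apache service shown by clustat is presumably defined through luci (Failover Domains, Resources, Service Groups); the resulting cluster.conf fragment looks roughly like the sketch below, assuming a VIP resource of 172.25.31.100 (the secondary address visible in the ip addr output above) and the stock httpd init script:

<rm>
    <resources>
        <ip address="172.25.31.100" monitor_link="on"/>
        <script file="/etc/init.d/httpd" name="httpd"/>
    </resources>
    <service name="apache" recovery="relocate">
        <ip ref="172.25.31.100"/>
        <script ref="httpd"/>
    </service>
</rm>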

Test:
[root@server1 html]# /etc/init.d/httpd stop    ## stop httpd on server1; rgmanager relocates the service to server4
[root@server1 html]# clustat
[root@server4 html]# clustat
Cluster Status for pucca @ Mon Jul 24 11:54:42 2017
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server1                                     1 Online, rgmanager
 server4                                     2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:apache                 server4                        started
[root@server4 html]# echo c > /proc/sysrq-trigger    ## crash the kernel on server4; the node is fenced and the service relocates
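To confirm the failover from a client's point of view, the VIP can be polled while the nodes are taken down (172.25.31.100 is the floating address from the ip addr output; the page returned depends on what was written to each node's index.html):

curl http://172.25.31.100    ## keeps answering, served by whichever node currently owns service:apache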

IV. iSCSI file system
server2: attach an 8G virtual disk
[root@server2 ~]# yum install -y scsi-*    ## installs the scsi target packages (scsi-target-utils provides tgtd)
[root@server2 ~]# vim /etc/tgt/targets.conf
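A minimal sketch of what /etc/tgt/targets.conf might contain, assuming the new 8G disk appears on server2 as /dev/vdb (the target IQN and device name are placeholders; the initiator addresses follow the node IPs from the setup at the top):

<target iqn.2017-07.com.example:server2.disk1>
    backing-store /dev/vdb          # the 8G virtual disk (placeholder device name)
    initiator-address 172.25.30.1   # server1
    initiator-address 172.25.30.4   # server4
</target>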
