Building and Using a Cluster


Initial environment setup

The virtual machines used as cluster nodes should start from a reasonably clean environment, and their clocks must be kept synchronized.

1. First configure the yum repositories used to install the required ricci and luci services.
The sections added to the yum configuration are as follows:

    [HighAvailability]          (high availability)
    name=HighAvailability
    baseurl=http://172.25.39.250/rhel6.5/HighAvailability
    gpgcheck=0

    [LoadBalancer]              (load balancing)
    name=LoadBalancer
    baseurl=http://172.25.39.250/rhel6.5/LoadBalancer
    gpgcheck=0

    [ResilientStorage]          (storage)
    name=ResilientStorage
    baseurl=http://172.25.39.250/rhel6.5/ResilientStorage
    gpgcheck=0

    [ScalableFileSystem]        (large file system support)
    name=ScalableFileSystem
    baseurl=http://172.25.39.250/rhel6.5/ScalableFileSystem
    gpgcheck=0
    yum repolist    # refresh the yum repositories

    repo id              repo name                   status
    HighAvailability     HighAvailability                56
    LoadBalancer         LoadBalancer                     4
    ResilientStorage     ResilientStorage                62
    ScalableFileSystem   ScalableFileSystem               7
    rhel6.5              Red Hat Enterprise Linux     3,690
    repolist: 3,819


2. The ricci service must be installed on every cluster node:
    yum install -y ricci
    passwd ricci    # set the ricci password; it is best to use the same password on every node

3.  /etc/init.d/ricci start    # start the service
    chkconfig ricci on         # start automatically at boot
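
Since steps 2 and 3 have to be repeated on every node, a small loop can save typing. This is only a sketch: the node list and the ricci password ("redhat") are placeholders, and it assumes root ssh access to the nodes.

    # Sketch only: NODES and the password are placeholders for this example.
    NODES="172.25.40.1 172.25.40.4"
    for n in $NODES; do
        ssh root@$n 'yum install -y ricci &&
                     echo redhat | passwd --stdin ricci &&
                     /etc/init.d/ricci start &&
                     chkconfig ricci on'
    done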

4.  clustat    # check the current status of the cluster

5. The host that provides the luci web management interface needs the luci service installed:

    yum install -y luci       # install luci, the web management interface
    /etc/init.d/luci start
    chkconfig luci on
    clustat

Then, in a browser, open:

    https://172.25.40.1:8084    # the host where luci is installed; the service listens on port 8084

After logging in to the web interface, click Create and add the cluster nodes to it; this completes the basic setup.
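
Once the cluster has been created in luci, clustat on either node should report both members as online. The output below is only an illustration, assuming the cluster is named clus and the nodes are server1 and server4 (the names that appear later in cluster.conf):

    clustat
    Cluster Status for clus
    Member Status: Quorate

     Member Name        ID   Status
     ------ ----        ---- ------
     server1             1   Online, Local
     server4             2   Online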

Configuring fencing

On the physical host, proceed as follows.

1. Using a suitable yum repository, install the required packages:

    yum install -y fence-virtd.x86_64 fence-virtd-multicast.x86_64 fence-virtd-libvirt.x86_64
    # the fence daemon itself, its multicast network listener, and the libvirt backend that
    # controls the virtual machines at the hypervisor level (power them off or reset them)

2. Modify the fence configuration:

  fence_virtd -c

The interactive prompts are shown below; for most of them you can simply press Enter to accept the default (the interface is changed to br0):

    Module search path [/usr/lib64/fence-virt]:

    Available backends:
        libvirt 0.1
    Available listeners:
        multicast 1.2

    Listener modules are responsible for accepting requests
    from fencing clients.

    Listener module [multicast]:

    The multicast listener module is designed for use environments
    where the guests and hosts may communicate over a network using
    multicast.

    The multicast address is the address that a client will use to
    send fencing requests to fence_virtd.

    Multicast IP Address [225.0.0.12]:

    Using ipv4 as family.

    Multicast IP Port [1229]:

    Setting a preferred interface causes fence_virtd to listen only
    on that interface.  Normally, it listens on all interfaces.
    In environments where the virtual machines are using the host
    machine as a gateway, this *must* be set (typically to virbr0).
    Set to 'none' for no interface.

    Interface [virbr0]: br0

    The key file is the shared key information which is used to
    authenticate fencing requests.  The contents of this file must
    be distributed to each physical host and virtual machine within
    a cluster.

    Key File [/etc/cluster/fence_xvm.key]:

    Backend modules are responsible for routing requests to
    the appropriate hypervisor or management layer.

    Backend module [libvirt]:

    Configuration complete.

    === Begin Configuration ===
    backends {
        libvirt {
            uri = "qemu:///system";
        }
    }

    listeners {
        multicast {
            port = "1229";
            family = "ipv4";
            interface = "br0";
            address = "225.0.0.12";
            key_file = "/etc/cluster/fence_xvm.key";
        }
    }

    fence_virtd {
        module_path = "/usr/lib64/fence-virt";
        backend = "libvirt";
        listener = "multicast";
    }
    === End Configuration ===

    Replace /etc/fence_virt.conf with the above [y/N]? y
3. Generate the shared key:

    cd /etc/cluster
    dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1

4.  systemctl restart fence_virtd.service    # make sure the firewall allows the service (iptables -L); see the sketch after this list

    netstat -anulp | grep :1229
    udp 0 0 0.0.0.0:1229 0.0.0.0:* 8163/fence_virtd

5. Distribute the key to every cluster node:

    scp fence_xvm.key root@172.25.39.1:/etc/cluster/
    scp fence_xvm.key root@172.25.39.4:/etc/cluster/
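
As a sanity check, make sure UDP port 1229 is allowed through the host firewall and that a node holding the key can actually fence a guest. This is only a sketch: the iptables rule is an example, and vm1 is the libvirt domain name assumed for one of the nodes (as in cluster.conf below).

    # On the physical host, if iptables is blocking the fencing port:
    iptables -I INPUT -p udp --dport 1229 -j ACCEPT

    # On a cluster node that has /etc/cluster/fence_xvm.key:
    fence_xvm -o list            # lists the guest domains known to fence_virtd
    fence_xvm -H vm1 -o reboot   # the guest "vm1" should be power-cycled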

Once configured, the result can be checked in the configuration file under the /etc/cluster directory:

    cd /etc/cluster/
    cat cluster.conf
    <?xml version="1.0"?>
    <cluster config_version="7" name="clus">
        <clusternodes>
            <clusternode name="server1" nodeid="1">
                <fence>
                    <method name="fence1">
                        <device domain="vm1" name="vmfence"/>
                    </method>
                </fence>
            </clusternode>
            <clusternode name="server4" nodeid="2">
                <fence>
                    <method name="fence4">
                        <device domain="vm4" name="vmfence"/>
                    </method>
                </fence>
            </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices>
            <fencedevice agent="fence_xvm" name="vmfence"/>
        </fencedevices>
    </cluster>
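
If cluster.conf is ever edited by hand instead of through luci, it is worth validating it and propagating the new version; a sketch using the standard RHEL 6 cluster tools:

    ccs_config_validate      # check cluster.conf against the schema
    cman_tool version -r     # push the bumped config_version to the other node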

Installing Apache on the two cluster nodes

    yum install -y httpd
    vim /var/www/html/index.html

Then test it with a browser.
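
To tell the two nodes apart when testing failover, it helps to put a distinct page on each node. A minimal sketch; the page contents are arbitrary and the addresses follow the 172.25.40.x ones used earlier:

    # On server1:
    echo "server1" > /var/www/html/index.html
    # On server4:
    echo "server4" > /var/www/html/index.html

    /etc/init.d/httpd start
    curl http://172.25.40.1    # from any machine that can reach the nodes
    curl http://172.25.40.4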

Configuring the iSCSI network storage

The goal is to add a storage device shared by the two cluster nodes.

On the host that exports the device, configure as follows:

    yum install -y scsi-target-utils    # install the iSCSI target service (provides tgtd and /etc/tgt/targets.conf)
    vim /etc/tgt/targets.conf           # edit the service configuration file as follows
    # around line 38 of targets.conf
    <target iqn.2007-07.com.example:server.target1>
        backing-store /dev/vdb              # the storage device to export
        initiator-address 172.25.40.1       # ACL: only these initiators may connect
        initiator-address 172.25.40.4
    </target>

When done, save and exit.

    /etc/init.d/tgtd start    # start the service
    tgt-admin -s              # check that the configuration changes have taken effect
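
In the tgt-admin -s output, the target defined above should appear with /dev/vdb as a LUN and the two initiator addresses under its ACL list. A small sketch of extra checks; enabling tgtd at boot is an addition not in the original steps:

    chkconfig tgtd on                                      # keep the export across reboots
    tgt-admin -s | grep -i -E 'target|backing store|acl'   # target, backing store and ACL entries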

On the cluster nodes, configure as follows:

    yum install -y iscsi-*    # install the iSCSI initiator (client) side

Then discover the network device:

iscsiadm -m discovery -t st -p 172.25.39.2
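
If the export and the ACL are correct, the discovery prints one record per target, roughly like the comment below (an illustration using the target name defined earlier):

    # 172.25.39.2:3260,1 iqn.2007-07.com.example:server.target1
    iscsiadm -m node    # the discovered records are stored and can be listed again like this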

Then log in to the discovered device:

iscsiadm -m node -l

After a successful login, an extra disk (sda) appears on the node.
You can see it with fdisk -l; a more reliable check is cat /proc/partitions, which shows what the kernel sees.
A quick verification is sketched below; after that, partition this sda and start building the clustered volume group.
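
A minimal check to run on each cluster node (the name sda assumes no other SCSI disks are attached):

    fdisk -l /dev/sda       # the new iSCSI disk; its size should match /dev/vdb on the storage host
    cat /proc/partitions    # sda must be visible at the kernel level on both nodes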

    fdisk -cu /dev/sda        # partition sda: n -> 1, then t -> 8e (Linux LVM), then wq to save
    vim /etc/lvm/lvm.conf     # the locking mode in here changes to the clustered setting
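
"Clustered" here refers to the LVM locking type. A sketch, assuming the lvm2-cluster package (clvmd) is used on the nodes; lvmconf is the helper that flips the setting instead of editing lvm.conf by hand:

    lvmconf --enable-cluster               # sets locking_type = 3 (cluster-wide locking)
    grep locking_type /etc/lvm/lvm.conf    # verify the change
    /etc/init.d/clvmd start                # the clustered LVM daemon must run on every node
    chkconfig clvmd on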

Then create the logical volume; while doing so, make sure the nodes stay in sync (each node should see the same LVM state).

    pvcreate /dev/sda1
    vgcreate clustervg /dev/sda1
    lvcreate -L +2G -n demo clustervg
    mkfs.ext4 /dev/clustervg/demo     # format it as an ext4 file system
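
A short usage sketch: ext4 is a single-node file system, so the volume should be mounted on only one node at a time, for example as the Apache document root (the mount point is just an example):

    # On whichever node currently serves the web content:
    mount /dev/clustervg/demo /var/www/html
    df -h /var/www/html
    # ...and unmount it before the other node takes over:
    umount /var/www/html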

Finally, check the result:

    clustat    # view the cluster information