RHCS + MySQL Installation and Configuration


I. Preliminary Planning

1. IP Allocation


Hostname   IP               Installed Software
Node1      192.168.52.10    luci, ricci, gfs2-utils, rgmanager, lvm2-cluster, mysql, httpd, iscsi-initiator-utils
Node2      192.168.52.11    luci, ricci, gfs2-utils, rgmanager, lvm2-cluster, mysql, httpd, iscsi-initiator-utils
Storage    192.168.52.110   Openfiler shared storage
Vip1       192.168.52.224
Vip2       192.168.52.250
2. System Topology Diagram


II. RHCS Installation Preparation: Base Environment

(Perform the following steps on both Node1 and Node2.)

1. Add hosts entries

# more /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.52.10 node1    node1.kbson.com

192.168.52.11 node2    node2.kbson.com

2. Set up SSH mutual trust between the two nodes

# mkdir ~/.ssh

# chmod 700 ~/.ssh

# ssh-keygen -t rsa

(Press Enter at each prompt to accept the defaults.)

# ssh-keygen -t dsa

(Press Enter at each prompt to accept the defaults.)

 

Run the following on Node1:

# cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

 

# ssh node2 cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

(Type "yes" to accept node2's host key, then enter Node2's root password.)

 

# scp ~/.ssh/authorized_keys node2:~/.ssh/authorized_keys
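A quick way to verify that the passwordless trust now works in both directions (hostnames are the ones defined in /etc/hosts above):

# ssh node2 hostname      # run on Node1; should return node2's hostname without asking for a password
# ssh node1 hostname      # run on Node2; should return node1's hostname without asking for a password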

3. Configure a local yum repository

# mount /dev/cdrom /media/

mount: block device /dev/sr0 is write-protected, mounting read-only

#

# more /etc/yum.repos.d/rhel-source.repo

[rhel_6_iso]

name=local iso

baseurl=file:///media

gpgcheck=1

gpgkey=file:///media/RPM-GPG-KEY-redhat-release

[HighAvailability]

name=HighAvailability

baseurl=file:///media/HighAvailability

gpgcheck=1

gpgkey=file:///media/RPM-GPG-KEY-redhat-release

[LoadBalancer]

name=LoadBalancer

baseurl=file:///media/LoadBalancer

gpgcheck=1

gpgkey=file:///media/RPM-GPG-KEY-redhat-release

[ResilientStorage]

name=ResilientStorage

baseurl=file:///media/ResilientStorage

gpgcheck=1

gpgkey=file:///media/RPM-GPG-KEY-redhat-release

[ScalableFileSystem]

name=ScalableFileSystem

baseurl=file:///media/ScalableFileSystem

gpgcheck=1

gpgkey=file:///media/RPM-GPG-KEY-redhat-release
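After saving the repo file on both nodes, the repositories can be verified with standard yum commands:

# yum clean all
# yum repolist      # the five local repositories defined above should be listed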

 

4. Openfiler iSCSI storage configuration

Configuration steps are omitted here; the disk space is planned as follows:

qdisk  256MB

data   30GB

 

5. Install the iSCSI initiator on Node1 and Node2 and attach the shared storage

# yum install iscsi-initiator-utils -y

# chkconfig iscsid on  

# service iscsid start

# iscsiadm -m discovery -t sendtargets -p 192.168.52.110

Starting iscsid:                                          [  OK  ]

192.168.52.110:3260,1 iqn.2006-01.com.openfiler:tsn.raw

# iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.raw -p 192.168.52.110 -l
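Once the login succeeds, the LUNs exported by Openfiler appear as ordinary local block devices; a quick check (the /dev/sdX device names depend on your environment):

# iscsiadm -m session      # shows the active session to 192.168.52.110
# fdisk -l                 # the 256 MB qdisk LUN and the 30 GB data LUN should show up as new disks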

 

6. Disable the iptables, SELinux, and NetworkManager services

# service iptables stop

# chkconfig iptables off

# service NetworkManager stop

# chkconfig NetworkManager off
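The heading above also covers SELinux; the usual way to disable it (a common-practice assumption, adjust to your own security policy) is:

# setenforce 0                                                     # switch to permissive mode immediately
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config     # takes effect permanently after a reboot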

 

III. RHCS Installation

1. Install the RHCS packages on Node1 and Node2

# yum -y install cman ricci gfs2-utils rgmanager lvm2-cluster

2. Set the ricci user's password on each node

# passwd ricci

 

3. Configure the RHCS services to start at boot

# chkconfig ricci on

# chkconfig rgmanager on

# chkconfig cman on

# service ricci start

# service rgmanager start  

# service cman start

After installing RHCS, starting cman may report the following error:

Starting cman... xmlconfig cannot find /etc/cluster/cluster.conf    [FAILED]

This happens because the cluster.conf file has not been created yet; it can be generated by installing the luci web interface and configuring the cluster there.

 

IV. RHCS Configuration

1. Install the luci web management interface (luci can be installed on a standalone management node; in this lab it is installed on Node1)

# yum -y install luci

# service luci start

Start luci...                                             [  OK  ]

Point your web browser to https://node1.kbson.com:8084 (or equivalent) to access luci

 

2. Access the web management interface and configure RHCS

https://192.168.52.10:8084


3. Configure the cluster

Manage Clusters -> Create (enter the ricci user's password in the Password field; it is also recommended to check "Reboot Nodes Before Joining Cluster")
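Once both nodes have joined, the cluster state can be checked from either node; the cluster and node names will match whatever was entered in luci:

# clustat                           # lists the cluster members and their status
# cat /etc/cluster/cluster.conf     # luci generates and distributes this file automatically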

 

4. Configure and test the fence devices

1) Fence Devices -> Add: add the fence device


2) Nodes -> Node1 and Node2 (both nodes must be configured) -> Add Fence Method -> Add Fence Instance


3) Test the fence devices

- Check the host status:

fence_vmware_soap -a 192.168.52.254 -z -l root -p kbsonlong -n node1 -o status

If you hit the error below, query the virtual machine's status by UUID instead; the fence device in the High Availability configuration also locates the machine by UUID.

Failed: Unable to obtain correct plug status or plug is not available


- List the UUIDs:

fence_vmware_soap -a 192.168.52.254 -z -l root -p kbsonlong -o list

The output shows three virtual machines with their UUIDs on the ESXi host: RHCS_node1, RHCS_node2, and openfiler.

- Check the host status by UUID:

fence_vmware_soap -a 192.168.52.254 -z -l root -p kbsonlong \
  -U 564d3735-f600-8365-68f4-5918090580fa -o status

If it returns Status: ON, the fence device is working correctly.
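To confirm that the cluster itself can trigger fencing (not just the standalone agent), fencing can be invoked from one node against the other. Note that this really power-cycles the target node, so only run it during a test window:

# fence_node node2      # run on node1; uses the fence device and method configured in cluster.conf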

5. Failover Domains -> Add: add a failover domain (an illustrative cluster.conf excerpt follows this list)

- Prioritized: on failover, the node with the higher priority is chosen first

- Restricted: the service is allowed to run only on the nodes in this domain

- No Failback: after a failed node recovers, the service does not switch back to it
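For reference, a failover domain configured this way ends up in /etc/cluster/cluster.conf roughly as follows; the domain name web_fd and the priorities are illustrative, and the node names must match those used when the cluster was created:

<failoverdomains>
    <failoverdomain name="web_fd" ordered="1" restricted="1" nofailback="1">
        <failoverdomainnode name="node1.kbson.com" priority="1"/>
        <failoverdomainnode name="node2.kbson.com" priority="2"/>
    </failoverdomain>
</failoverdomains>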

 

6. Resources -> Add

- Virtual IP: 192.168.52.50

- Apache script (you can add a custom Script resource, or use the dedicated Apache resource type)

 

7. Service Groups -> Add: add the service

- Use Add Resource to add the IP and Script resources in turn, and set the Recovery Policy attribute to Relocate (see the illustrative cluster.conf excerpt below)
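The corresponding service definition written to cluster.conf looks roughly like the following; the resource and service names (httpd, webservice) are illustrative:

<rm>
    <resources>
        <ip address="192.168.52.50" monitor_link="on"/>
        <script file="/etc/init.d/httpd" name="httpd"/>
    </resources>
    <service autostart="1" domain="web_fd" name="webservice" recovery="relocate">
        <ip ref="192.168.52.50"/>
        <script ref="httpd"/>
    </service>
</rm>

The service can then be managed from the command line, e.g. clusvcadm -e webservice to start it, clusvcadm -r webservice -m node2.kbson.com to relocate it, and clusvcadm -d webservice to stop it.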


8. Configure the GFS service

1) Enable CLVM's integrated cluster locking and start the clvmd service on node1 and node2:

lvmconf --enable-cluster 

chkconfig clvmd on

service clvmd start  

Activating VG(s):   No volume groups found                  [  OK  ]

2) On either node, partition the 30 GB LUN from the attached Openfiler shared storage and create the clustered LVM volumes:

On node1:

# pvcreate /dev/sdc1

# pvs

# vgcreate gfsvg /dev/sdc1

# lvcreate -l +100%FREE -n data gfsvg

On node2:

# /etc/init.d/clvmd start
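After clvmd has been (re)started on node2, both nodes should see the same clustered volume group and logical volume:

# vgs      # gfsvg should be listed, with the clustered ('c') attribute set
# lvs      # the 'data' logical volume in gfsvg should be visible on both nodes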

3) Create the GFS2 file system

On node1:

# mkfs.gfs2 -p lock_dlm -t Cluster:gfs2 -j 2 /dev/gfsvg/data

Notes:

In Cluster:gfs2, "Cluster" is the cluster name (it must match the cluster name defined in cluster.conf) and "gfs2" is the file system name, which works like a label.

-j specifies the number of journals, i.e. the number of hosts that will mount this file system; if omitted it defaults to 1.

This lab has two nodes, hence -j 2.
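If a third node were ever added to the cluster, an extra journal would have to be added to the existing file system; a sketch (the file system must be mounted when this is run):

# gfs2_jadd -j 1 /vmdata      # adds one more journal to the mounted GFS2 file system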

4) Mount the GFS2 file system

Create the GFS mount point on node1 and node2:

# mkdir /vmdata

(1) Mount the file system manually on node1 and node2 for testing; once the mount succeeds, create a file to verify that the cluster file system is working (see the check below).

# mount.gfs2 /dev/gfsvg/data /vmdata
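A simple cross-node check confirms that both nodes really share the same file system (the file name is arbitrary):

[root@node1 ~]# touch /vmdata/testfile
[root@node2 ~]# ls -l /vmdata/testfile      # the file created on node1 is immediately visible on node2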

(2) Configure automatic mounting at boot

# vi /etc/fstab  

/dev/gfsvg/data  /vmdata  gfs2  defaults  0 0
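On RHEL 6, GFS2 entries in /etc/fstab are normally mounted at boot by the gfs2 init script (after cman and clvmd are up), so it should also be enabled; this is the standard approach, adjust if your setup differs:

# chkconfig gfs2 on
# service gfs2 start      # mounts the GFS2 entries listed in /etc/fstab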


9. Configure the quorum disk (qdisk)

# The quorum disk is a shared disk and does not need to be large; this example uses the 256 MB /dev/sdg.

[root@node2 ~]# fdisk -l /dev/sdg

Disk /dev/sdg: 268 MB, 268435456 bytes

9 heads, 57 sectors/track, 1022 cylinders

Units = cylinders of 513 * 512 = 262656 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

1) Create the quorum disk

# mkqdisk -c /dev/sdg -l myqdisk

mkqdisk v3.0.12.1
Writing new quorum disk label 'myqdisk' to /dev/sdg.
WARNING: About to destroy all data on /dev/sdg; proceed [N/y] ? y

Initializing status block for node 1...
Initializing status block for node 2...
Initializing status block for node 3...
Initializing status block for node 4...
Initializing status block for node 5...
Initializing status block for node 6...
Initializing status block for node 7...
Initializing status block for node 8...
Initializing status block for node 9...
Initializing status block for node 10...
Initializing status block for node 11...
Initializing status block for node 12...
Initializing status block for node 13...
Initializing status block for node 14...
Initializing status block for node 15...
Initializing status block for node 16...

 

2) View the quorum disk information

[root@node2 ~]# mkqdisk -L

mkqdisk v3.0.12.1

 

/dev/block/8:96:

/dev/disk/by-id/scsi-14f504e46494c45524a504a4b6e432d72636b492d6b44457a:

/dev/disk/by-path/ip-192.168.52.110:3260-iscsi-iqn.2006-01.com.openfiler:tsn.raw-lun-5:

/dev/sdg:

        Magic:                eb7a62c2

        Label:                RHCS_qdisk

        Created:              Tue Nov 24 00:24:29 2015

        Host:                 node1

        Kernel Sector Size:   512

        Recorded Sector Size: 512

3) Configure the quorum disk (QDisk) in luci

In the management interface, go to Manage Clusters -> Cluster -> Configure -> QDisk.
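After the QDisk settings are applied (using the label created above), the quorum disk should show up in the cluster status; rough verification commands:

# clustat             # the quorum disk appears as an additional member entry
# cman_tool status    # the expected/total votes should now include the quorum disk's vote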

