A Summary of High-Availability Cluster Management Tools


  • Overview
  • Experiment
      • 1. Adding the servers
      • 2. Adding fencing
      • 3. Adding the service group
    • Adding storage

Overview

  • Today's topic is a convenient, quick tool for managing a cluster: the luci web interface together with the ricci agent. From one place it can schedule services across the servers, control their resources, and handle the related management tasks.
  • Preparation
  • Three virtual machines; configure the yum repository on each of them as follows
[root@server1 ~]# cat /etc/yum.repos.d/rhel-source.repo 
[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.60.250/rhel6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[HighAvailability]
name=Red Hat Enterprise Linux HighAvailability
baseurl=http://172.25.60.250/rhel6.5/HighAvailability
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[LoadBalancer]
name=Red Hat Enterprise Linux LoadBalancer
baseurl=http://172.25.60.250/rhel6.5/LoadBalancer
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[ResilientStorage]
name=Red Hat Enterprise Linux ResilientStorage
baseurl=http://172.25.60.250/rhel6.5/ResilientStorage
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[ScalableFileSystem]
name=Red Hat Enterprise Linux ScalableFileSystem
baseurl=http://172.25.60.250/rhel6.5/ScalableFileSystem
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
  • The experiment is done on Red Hat Enterprise Linux 6.5, so the yum repository points at the 6.5 image, which is served by Apache on the host under /var/www/html/rhel6.5 (a sketch of that setup follows this list).
  • With the preparation done we can get to today's topic. We will start with the two virtual machines server1 and server2; server3 will be used later as the storage node.
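For reference, a minimal sketch of how the 6.5 image can be published over HTTP on the host; the ISO path is hypothetical and Apache is assumed to be installed already:

mkdir -p /var/www/html/rhel6.5
mount -o loop /path/to/rhel-server-6.5-x86_64-dvd.iso /var/www/html/rhel6.5    # loop-mount the ISO under the web root
systemctl start httpd                                                          # the repo is then reachable at http://172.25.60.250/rhel6.5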

Experiment

  • server1 configuration
    - Install the packages
yum install -y luci    # luci: the web front end that manages ricci
[root@server1 ~]# /etc/init.d/luci start
Adding following auto-detected host IDs (IP addresses/domain names), corresponding to `server1' address, to the configuration of self-managed certificate `/var/lib/luci/etc/cacert.config' (you can change them by editing `/var/lib/luci/etc/cacert.config', removing the generated certificate `/var/lib/luci/certs/host.pem' and restarting luci):
    (none suitable found, you can still do it manually as mentioned above)
Generating a 2048 bit RSA private key
writing new private key to '/var/lib/luci/certs/host.pem'
Start luci...                                              [  OK  ]
Point your web browser to https://server1:8084 (or equivalent) to access luci

yum install -y ricci    # ricci: the high-availability management agent
[root@server1 ~]# cat /etc/redhat-release    # check the release
Red Hat Enterprise Linux Server release 6.5 (Santiago)
[root@server1 ~]# echo westos | passwd --stdin ricci    # set the ricci password
Changing password for user ricci.
passwd: all authentication tokens updated successfully.
[root@server1 ~]# /etc/init.d/ricci start    # start the service
Starting system message bus:                               [  OK  ]
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                                            [  OK  ]
[root@server1 ~]# chkconfig --list ricci
ricci           0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@server1 ~]# chkconfig ricci on    # enable ricci at boot
[root@server1 ~]# chkconfig --list ricci
ricci           0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@server1 ~]# date    # the nodes' clocks must be in sync
Sat Sep 23 22:16:50 CST 2017
[root@server1 ~]# cat /etc/hosts    # local name resolution
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.60.1 server1
172.25.60.2 server2
172.25.60.3 server3
172.25.60.4 server4
172.25.60.5 server5
  • server2 configuration
yum install -y ricci
[root@server2 yum.repos.d]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.5 (Santiago)
[root@server2 yum.repos.d]# echo westos | passwd --stdin ricci
Changing password for user ricci.
passwd: all authentication tokens updated successfully.
[root@server2 yum.repos.d]# /etc/init.d/ricci start
Starting system message bus:                               [  OK  ]
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                                            [  OK  ]
[root@server2 yum.repos.d]# chkconfig --list ricci
ricci           0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@server2 yum.repos.d]# chkconfig ricci on
[root@server2 yum.repos.d]# chkconfig --list ricci
ricci           0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@server2 yum.repos.d]# date
Sat Sep 23 22:16:45 CST 2017
[root@server2 yum.repos.d]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.60.1 server1
172.25.60.2 server2
172.25.60.3 server3
172.25.60.4 server4
172.25.60.5 server5
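Either node can be sanity-checked at this point. luci talks to the nodes through ricci, which listens on TCP port 11111; a quick check (not part of the original transcript):

/etc/init.d/ricci status       # should report that ricci is running
netstat -antlp | grep 11111    # ricci's listener should show up here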

1. Adding the servers

1. Accept the certificate for https://server1:8084 (screenshot)
2. Log in with the password (screenshot)
3. Click OK (screenshot)
4. Select the administrator (screenshot)
5. Create the cluster from the servers to be joined (screenshot)
Creating the cluster makes luci reboot both nodes; when they come back, luci itself has to be started again by hand on server1, because unlike ricci it was not enabled at boot.

  • server1

[root@server1 ~]# 
Broadcast message from root@server1
    (unknown) at 23:40 ...

The system is going down for reboot NOW!
Connection to 172.25.60.1 closed by remote host.
Connection to 172.25.60.1 closed.
[kiosk@foundation60 Desktop]$ ssh root@172.25.60.1
root@172.25.60.1's password: 
Last login: Sat Sep 23 23:25:01 2017 from 172.25.60.250
[root@server1 ~]# /etc/init.d/luci start    # started again by hand
Start luci...                                              [  OK  ]
Point your web browser to https://server1:8084 (or equivalent) to access luci
  • server2
[root@server2 ~]# 
Broadcast message from root@server2
    (unknown) at 23:39 ...

The system is going down for reboot NOW!
Connection to 172.25.60.2 closed by remote host.
Connection to 172.25.60.2 closed.
[root@server3 yum.repos.d]# ssh root@172.25.60.2
root@172.25.60.2's password: 
Last login: Sat Sep 23 23:25:18 2017 from server3

The nodes were added successfully (screenshot).
Check from the command line:

[root@server1 ~]# clustat
Cluster Status for westos_dou @ Sat Sep 23 23:50:27 2017
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server1                                     1 Online, Local
 server2                                     2 Online

[root@server1 ~]# cd /etc/cluster/
[root@server1 cluster]# ls
cluster.conf  cman-notify.d
[root@server1 cluster]# cat cluster.conf 
<?xml version="1.0"?>
<cluster config_version="1" name="westos_dou">
    <clusternodes>
        <clusternode name="server1" nodeid="1"/>
        <clusternode name="server2" nodeid="2"/>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices/>
    <rm/>
</cluster>
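The same membership and quorum information can be read from cman directly; a quick check on either node (not part of the original transcript):

cman_tool status    # quorum, expected votes and the two_node flag
cman_tool nodes     # node IDs and membership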

2. Adding fencing

Why?
If a node hangs, the standby assumes it has died and takes over; when the hung node later comes back to life, two owners exist at once and you get a split-brain. Fencing forcibly powers off the node that is presumed dead so that cannot happen.
(screenshot)

[root@server1 cluster]# cat cluster.conf 
<?xml version="1.0"?>
<cluster config_version="2" name="westos_dou">
    <clusternodes>
        <clusternode name="server1" nodeid="1"/>
        <clusternode name="server2" nodeid="2"/>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices>
        <fencedevice agent="fence_xvm" name="vmfence"/>
    </fencedevices>
</cluster>

Installation on the physical host

yum install -y fence-virtd-multicast.x86_64 fence-virtd-libvirt.x86_64 fence-virtd.x86_64    # install fence_virtd with its multicast listener and libvirt backend
[root@foundation60 ~]# rm -fr /etc/fence_virt.conf 
[root@foundation60 ~]# fence_virtd -c
Parsing of /etc/fence_virt.conf failed.
Start from scratch [y/N]? y
Module search path [/usr/lib64/fence-virt]: 

Available backends:
    libvirt 0.1
Available listeners:
    multicast 1.2

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]: 

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]: 

Using ipv4 as family.

Multicast IP Port [1229]: 

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.

Interface [none]: br0

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]: 

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [libvirt]: 

Configuration complete.

=== Begin Configuration ===
listeners {
    multicast {
        key_file = "/etc/cluster/fence_xvm.key";
        interface = "br0";
        port = "1229";
        address = "225.0.0.12";
        family = "ipv4";
    }
}

fence_virtd {
    backend = "libvirt";
    listener = "multicast";
    module_path = "/usr/lib64/fence-virt";
}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y

[root@foundation60 ~]# mkdir /etc/cluster/
[root@foundation60 cluster]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1    # generate a random 128-byte key
1+0 records in
1+0 records out
128 bytes (128 B) copied, 0.000152835 s, 838 kB/s
[root@foundation60 cluster]# file /etc/cluster/fence_xvm.key 
/etc/cluster/fence_xvm.key: data
[root@foundation60 cluster]# systemctl restart fence_virtd.service
[root@foundation60 cluster]# netstat -anulp | grep :1229
udp        0      0 0.0.0.0:1229            0.0.0.0:*                           11379/fence_virtd   
scp fence_xvm.key root@172.25.60.1:/etc/cluster/    # copy the key to both nodes
scp fence_xvm.key root@172.25.60.2:/etc/cluster/

server1 & server2
(screenshots: in luci the fence device "vmfence" is added and a fence method is bound to each node)

[root@server1 cluster]# cat cluster.conf 
<?xml version="1.0"?>
<cluster config_version="6" name="westos_dou">
    <clusternodes>
        <clusternode name="server1" nodeid="1">
            <fence>
                <method name="fence1">
                    <device domain="2d2e2e67-3040-4c42-936c-42dfb6baf85e" name="vmfence"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="server2" nodeid="2">
            <fence>
                <method name="fence2">
                    <device domain="5e78664f-7f5d-470b-913d-15b613a3f18c" name="vmfence"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices>
        <fencedevice agent="fence_xvm" name="vmfence"/>
    </fencedevices>
</cluster>
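Before actually fencing anything, the multicast path and the shared key can be verified from a cluster node; a minimal check, assuming the fence-virt client is installed on the nodes:

fence_xvm -o list    # run on server1/server2; should list the VMs known to fence_virtd on the host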

fence_node server1    # fence server1: it is forced down and does not come back up

[root@foundation60 cluster]# cat /etc/fence_virt.conf 
listeners {
    multicast {
        key_file = "/etc/cluster/fence_xvm.key";
        interface = "br0";
        port = "1229";
        address = "225.0.0.12";
        family = "ipv4";
    }
}

fence_virtd {
    backend = "libvirt";
    listener = "multicast";
    module_path = "/usr/lib64/fence-virt";
}
[root@foundation60 cluster]# brctl show
bridge name     bridge id           STP enabled     interfaces
br0             8000.54ee756e4c14   no              enp3s0
                                                    vnet0
                                                    vnet1
                                                    vnet2
virbr0          8000.5254001bdfcc   yes             virbr0-nic
virbr1          8000.5254006adda7   yes             virbr1-nic

3. Adding the service group

(screenshots: in luci the apache service group is built from a VIP resource at 172.25.60.100 and the httpd script resource)
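The result of those steps is an <rm> section in cluster.conf. The sketch below is an assumption about roughly what luci writes: the VIP 172.25.60.100 and the service name apache come from the test that follows, while the failover-domain name webfail, the priorities, and the recovery policy are made up for illustration.

<rm>
    <failoverdomains>
        <failoverdomain name="webfail" nofailback="0" ordered="1" restricted="1">
            <failoverdomainnode name="server1" priority="1"/>
            <failoverdomainnode name="server2" priority="2"/>
        </failoverdomain>
    </failoverdomains>
    <resources>
        <ip address="172.25.60.100" monitor_link="on"/>
        <script file="/etc/init.d/httpd" name="httpd"/>
    </resources>
    <service domain="webfail" name="apache" recovery="relocate">
        <ip ref="172.25.60.100"/>
        <script ref="httpd"/>
    </service>
</rm>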
test

#1: write the test pages (httpd is stopped by hand because rgmanager controls it)
[root@server1 html]# /etc/init.d/httpd stop
Stopping httpd:                                            [  OK  ]
[root@server1 html]# cat /var/www/html/index.html 
server1
[root@server2 html]# /etc/init.d/httpd stop
Stopping httpd:                                            [  OK  ]
[root@server2 html]# cat /var/www/html/index.html 
server2
[root@server1 html]# clustat
Cluster Status for westos_dou @ Sun Sep 24 00:50:12 2017
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server1                                     1 Online, Local, rgmanager
 server2                                     2 Online, rgmanager

 Service Name                   Owner (Last)                   State          
 ------- ----                   ----- ------                   -----          
 service:apache                 server1                        started

#2: shut down server1
[root@server2 html]# clustat
Cluster Status for westos_dou @ Sun Sep 24 00:50:39 2017
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server1                                     1 Online, rgmanager
 server2                                     2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State          
 ------- ----                   ----- ------                   -----          
 service:apache                 server1                        stopping

# test VIP failover from the physical host
[root@foundation60 cluster]# curl 172.25.60.100
server1
[root@foundation60 cluster]# curl 172.25.60.100
server2
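For completeness (not shown in the original transcript), rgmanager's clusvcadm can drive the same failover by hand:

clusvcadm -r apache -m server2    # relocate the apache service to server2
clusvcadm -d apache               # disable (stop) the service
clusvcadm -e apache               # enable (start) it again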

Adding storage

  • Start server3.
  • Create the shared disk: an extra virtual disk is attached to server3 and exported over iSCSI (it appears as /dev/vdb below). (screenshots)

yum install scsi-* -y
[root@server3 ~]# cd /etc/tgt
[root@server3 tgt]# ls
targets.conf
[root@server3 tgt]# vim targets.conf 
[root@server3 tgt]# /etc/init.d/tgtd start
Starting SCSI target daemon:                               [  OK  ]

(screenshot: the edited targets.conf)
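The exact edit is only visible in the screenshot, but judging from the tgt-admin -s output below, the target definition added to /etc/tgt/targets.conf was roughly the following (a sketch):

<target iqn.2017-09.com.example:server.target1>
    backing-store /dev/vdb          # export the extra virtual disk as LUN 1
    initiator-address 172.25.60.1   # only server1 ...
    initiator-address 172.25.60.2   # ... and server2 may log in
</target>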

[root@server3 tgt]# tgt-admin -s
Target 1: iqn.2017-09.com.example:server.target1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags: 
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 8590 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/vdb
            Backing store flags: 
    Account information:
    ACL information:
        172.25.60.1
        172.25.60.2

server1 & server2

[root@server1 ~]# yum install -y iscsi-*
[root@server1 ~]# iscsiadm -m discovery -t st -p 172.25.60.3
Starting iscsid:                                           [  OK  ]
172.25.60.3:3260,1 iqn.2017-09.com.example:server.target1
[root@server1 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2017-09.com.example:server.target1, portal: 172.25.60.3,3260] (multiple)
Login to [iface: default, target: iqn.2017-09.com.example:server.target1, portal: 172.25.60.3,3260] successful.
[root@server1 ~]# fdisk -l

Disk /dev/vda: 21.5 GB, 21474836480 bytes
16 heads, 63 sectors/track, 41610 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0008cb89

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *           3        1018      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/vda2            1018       41611    20458496   8e  Linux LVM
Partition 2 does not end on cylinder boundary.

Disk /dev/mapper/VolGroup-lv_root: 19.9 GB, 19906166784 bytes
255 heads, 63 sectors/track, 2420 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup-lv_swap: 1040 MB, 1040187392 bytes
255 heads, 63 sectors/track, 126 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sda: 8589 MB, 8589934592 bytes
64 heads, 32 sectors/track, 8192 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

[root@server1 ~]# fdisk -cu /dev/sda    # create one partition of type 8e (Linux LVM) so it can be extended later

Disk /dev/sda: 8589 MB, 8589934592 bytes
64 heads, 32 sectors/track, 8192 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x406efa26

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               2        8192     8387584   8e  Linux LVM

[root@server1 ~]# cat /proc/partitions 
major minor  #blocks  name

 252        0   20971520 vda
 252        1     512000 vda1
 252        2   20458496 vda2
 253        0   19439616 dm-0
 253        1    1015808 dm-1
   8        0    8388608 sda
   8        1    8387584 sda1
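The interactive fdisk dialogue is not in the transcript; a typical keystroke sequence (an assumption, not the author's exact input) to end up with the single 8e partition shown above would be:

fdisk -cu /dev/sda
#   n -> new partition, p -> primary, partition number 1, accept the default first/last sectors
#   t -> change the type to 8e (Linux LVM)
#   p -> print to verify, then w -> write the table and exit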

server2

[root@server2 html]# cat /proc/partitions 
major minor  #blocks  name

 252        0   20971520 vda
 252        1     512000 vda1
 252        2   20458496 vda2
 253        0   19439616 dm-0
 253        1    1015808 dm-1
   8        0    8388608 sda
[root@server2 html]# partprobe 
Warning: WARNING: the kernel failed to re-read the partition table on /dev/vda (Device or resource busy).  As a result, it may not reflect all of your changes until after reboot.
[root@server2 html]# cat /proc/partitions 
major minor  #blocks  name

 252        0   20971520 vda
 252        1     512000 vda1
 252        2   20458496 vda2
 253        0   19439616 dm-0
 253        1    1015808 dm-1
   8        0    8388608 sda
   8        1    8387584 sda1

Create the extensible (LVM-backed) shared volume. Every time a PV, VG, or LV is created or changed on server1, server2 has to be resynchronized so it sees the same state.

[root@server1 ~]# pvcreate /dev/sda1
  dev_is_mpath: failed to get device for 8:1
  Physical volume "/dev/sda1" successfully created
[root@server1 ~]# vgcreate clustervg /dev/sda1
  Clustered volume group "clustervg" successfully created
[root@server1 html]# lvcreate -L 2G -n demo clustervg
  Logical volume "demo" created
[root@server1 ~]# mkfs.ext4 /dev/clustervg/demo 
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 524288 blocks
26214 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912

Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 25 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@server1 ~]# lvextend -L +2G /dev/clustervg/demo 
  Extending logical volume demo to 4.00 GiB
  Logical volume demo successfully resized
[root@server1 ~]# lvs
  LV      VG        Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup  -wi-ao----  18.54g
  lv_swap VolGroup  -wi-ao---- 992.00m
  demo    clustervg -wi-a-----   4.00g
[root@server1 ~]# resize2fs /dev/clustervg/demo 
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/clustervg/demo to 1048576 (4k) blocks.
The filesystem on /dev/clustervg/demo is now 1048576 blocks long.
[root@server1 ~]# vim /etc/lvm/lvm.conf

(screenshot: the change made to /etc/lvm/lvm.conf)
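The screenshot's content is not recoverable; for clustered LVM on RHEL 6 the edit to /etc/lvm/lvm.conf is typically the locking type (an assumption about what the screenshot shows):

# /etc/lvm/lvm.conf
locking_type = 3    # use clustered locking via clvmd instead of local file-based locking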

[root@server2 html]# pvs
  PV         VG       Fmt  Attr PSize  PFree
  /dev/vda2  VolGroup lvm2 a--  19.51g    0 
[root@server2 html]# pvs
  PV         VG        Fmt  Attr PSize  PFree
  /dev/sda1            lvm2 a--   8.00g 8.00g
  /dev/vda2  VolGroup  lvm2 a--  19.51g    0 
[root@server2 html]# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  VolGroup    1   2   0 wz--n- 19.51g    0 
  clustervg   1   0   0 wz--nc  8.00g 8.00g
[root@server2 html]# lvs
  LV      VG        Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup  -wi-ao----  18.54g
  lv_swap VolGroup  -wi-ao---- 992.00m
  demo    clustervg -wi-a-----   2.00g
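The post does not show the resynchronization step itself; on server2 it would usually be a rescan of the LVM metadata (a sketch, commands only):

pvscan    # re-read physical volumes
vgscan    # re-read volume groups
lvscan    # the demo LV should now report its new size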