RHCS Cluster Suite
RHCS review notes:
1. du & df — du walks the filesystem at the user level (files, directories, links) and sums sizes; df reads the block counters of the mounted filesystem (see the sketch after this list).
2. Which is faster to read: one large file, or many small files of the same total size? (Generally the single large file, thanks to sequential I/O and fewer metadata lookups.)
3. OSPF | LVS (layer 4).
4. Differences between kvm, qemu, and libvirtd: KVM virtualizes CPU and memory; QEMU emulates I/O; libvirtd is the virtualization management daemon.
5. ls -S sorts by file size.
6. How to check memory usage: free -m (be able to explain the columns; buff is the write cache, cache is the read cache).
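A minimal illustration of the du/df distinction from item 1, using paths that exist on any Linux box:

du -sh /var/log    # walks the tree and sums file sizes (the user-level view)
df -h /            # reports used/free block counts straight from the filesystem

The two can legitimately disagree, e.g. when a deleted file is still held open by a process: df still counts its blocks, du no longer sees it.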
High Availability
Purpose: eliminate single points of failure.
Setting up RHCS | Pacemaker
1.RHCS
Reference: http://wzlinux.blog.51cto.com/8021085/1725373
Conga consists of luci (the manager, web UI at https://ip:8084) and ricci (the per-node agent, TCP port 11111).
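Assuming the default ports, a quick sanity check that the daemons are listening (luci only on the node where it is installed, ricci on every node):

netstat -tlnp | grep -E ':8084|:11111'    # luci web UI / ricci agent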
Configure the yum repositories
[root@server1 ~]# cat /etc/yum.repos.d/rhel-source.repo
[Server]
name=Red Hat Enterprise Linux Server
baseurl=http://172.25.66.250/rhel6.5
gpgcheck=0
[HighAvailability]
name=Red Hat Enterprise Linux HighAvailability
baseurl=http://172.25.66.250/rhel6.5/HighAvailability
gpgcheck=0
[LoadBalancer]
name=Red Hat Enterprise Linux LoadBalancer
baseurl=http://172.25.66.250/rhel6.5/LoadBalancer
gpgcheck=0
[ResilientStorage]
name=Red Hat Enterprise Linux ResilientStorage
baseurl=http://172.25.66.250/rhel6.5/ResilientStorage
gpgcheck=0
[ScalableFileSystem]
name=Red Hat Enterprise Linux ScalableFileSystem
baseurl=http://172.25.66.250/rhel6.5/ScalableFileSystem
gpgcheck=0
[root@server1 ~]#

*** scp the same file to server2.
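Copying the repo file to the second node and verifying it parses looks roughly like this (host addresses as used throughout this walkthrough):

scp /etc/yum.repos.d/rhel-source.repo root@172.25.66.2:/etc/yum.repos.d/
yum clean all
yum repolist    # Server, HighAvailability, LoadBalancer, ... should all appear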
Do the following on both server1 and server2
Install ricci, enable it at boot, and initialize its password:

[root@server1 ~]# yum install ricci -y
[root@server1 ~]# chkconfig ricci --list
ricci           0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@server1 ~]# chkconfig ricci on        # enable at boot
[root@server1 ~]# chkconfig ricci --list
ricci           0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@server1 ~]# echo westos | passwd --stdin ricci    # initialize the password
Changing password for user ricci.
passwd: all authentication tokens updated successfully.
[root@server1 ~]# /etc/init.d/ricci start
Starting system message bus:                               [  OK  ]
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                                            [  OK  ]
[root@server1 ~]#
Install luci on either node (server1 or server2)
[root@server1 ~]# yum install luci -y
[root@server1 ~]# /etc/init.d/luci start
Adding following auto-detected host IDs (IP addresses/domain names), corresponding to `server1' address, to the configuration of self-managed certificate `/var/lib/luci/etc/cacert.config' (you can change them by editing `/var/lib/luci/etc/cacert.config', removing the generated certificate `/var/lib/luci/certs/host.pem' and restarting luci):
	(none suitable found, you can still do it manually as mentioned above)
Generating a 2048 bit RSA private key
writing new private key to '/var/lib/luci/certs/host.pem'
Start luci...                                              [  OK  ]
Point your web browser to https://server1:8084 (or equivalent) to access luci
[root@server1 ~]# chkconfig --list luci
luci            0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@server1 ~]# chkconfig luci on
[root@server1 ~]# chkconfig --list luci
luci            0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@server1 ~]#

**** In the browser, create the cluster and add the nodes at https://server1:8084 (local host-name resolution for server1/server2 is required at this point); see the screenshots or the RHEL 6.5 documentation.
Check the cluster status
[root@server1 ~]# clustat    # this command only exists once the cluster packages are installed
Cluster Status for lucci @ Sat Sep 23 10:43:53 2017
Member Status: Quorate

 Member Name        ID   Status
 ------ ----        ---- ------
 server1            1    Online, Local
 server2            2    Online
[root@server1 ~]#
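Beyond clustat, cman_tool (from the cman package) exposes the quorum arithmetic; a possible follow-up check:

cman_tool status    # cluster name, node count, expected votes, quorum
cman_tool nodes     # membership as this node sees it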
Fence Device
Shared-storage options include EMC arrays, iSCSI, and NFS. Fencing mainly solves split-brain.

# Configure fencing on the physical host
[root@foundation66 ~]# rpm -qa | grep fence
fence-virtd-multicast-0.3.2-2.el7.x86_64
fence-virtd-libvirt-0.3.2-2.el7.x86_64
fence-virtd-0.3.2-2.el7.x86_64
[root@foundation66 ~]# ll -d /etc/cluster/
drwxr-xr-x. 2 root root 26 May 23 14:58 /etc/cluster/
[root@foundation66 ~]# fence_virtd -c
[root@foundation66 ~]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1
1+0 records in
1+0 records out
128 bytes (128 B) copied, 0.00142596 s, 89.8 kB/s
[root@foundation66 ~]# file /etc/cluster/fence_xvm.key
/etc/cluster/fence_xvm.key: data
[root@foundation66 ~]# systemctl restart fence_virtd.service
[root@foundation66 ~]# netstat -anulp | grep :1229
udp        0      0 0.0.0.0:1229    0.0.0.0:*    14721/fence_virtd
[root@foundation66 ~]# scp /etc/cluster/fence_xvm.key root@172.25.66.1:/etc/cluster/
[root@foundation66 ~]# scp /etc/cluster/fence_xvm.key root@172.25.66.2:/etc/cluster/

***** In the luci UI, first create the fence device, then attach it to server1 and server2.
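Before relying on fencing, it is worth confirming from a cluster node that the copied key and the multicast path actually work; one quick check, assuming the fence-virt defaults:

fence_xvm -o list    # should print the hypervisor's VM domains with their UUIDs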
On the virtual machines (cluster nodes)
[root@server1 cluster]# cat /etc/cluster/cluster.conf    # inspect the fence configuration

[root@server1 ~]# fence_node server2    # fence (evict) server2
fence server2 success
[root@server1 ~]#

***** Next, configure Failover Domains | Resources | Service groups in luci.
Testing
Stop httpd on both nodes; once the cluster service takes effect, it starts httpd by itself. ** The VIP is added and httpd started automatically (the resource order defined in the Apache service group decides what starts first).

[root@server1 ~]# clustat
Cluster Status for lucci @ Sat Sep 23 14:09:28 2017
Member Status: Quorate

 Member Name        ID   Status
 ------ ----        ---- ------
 server1            1    Online, Local, rgmanager
 server2            2    Online, rgmanager

 Service Name       Owner (Last)    State
 ------- ----       ----- ------    -----
 service:Apache     server1         started
[root@server1 ~]# /etc/init.d/httpd stop
Stopping httpd:                                            [  OK  ]
[root@server1 ~]# clustat
Cluster Status for lucci @ Sat Sep 23 14:11:18 2017
Member Status: Quorate

 Member Name        ID   Status
 ------ ----        ---- ------
 server1            1    Online, Local, rgmanager
 server2            2    Online, rgmanager

 Service Name       Owner (Last)    State
 ------- ----       ----- ------    -----
 service:Apache     server2         started

[root@server2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:61:92:98 brd ff:ff:ff:ff:ff:ff
    inet 172.25.66.2/24 brd 172.25.66.255 scope global eth1
    inet 172.25.66.100/24 scope global secondary eth1
    inet6 fe80::5054:ff:fe61:9298/64 scope link
       valid_lft forever preferred_lft forever
[root@server2 ~]#

*** When one node goes down, the VIP and the service migrate automatically, though the failover takes a short while.
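To watch the failover from the client side, a simple polling loop against the VIP (172.25.66.100, as configured above) is enough:

while true; do
    curl -s --connect-timeout 1 http://172.25.66.100/ || echo "VIP unreachable"
    sleep 1
done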
Adding shared storage
# Boot server3 and attach a virtual disk to it.

*** Target (server) side:
[root@server3 ~]# fdisk -l
[root@server3 ~]# yum install scsi-* -y
[root@server3 tgt]# vim /etc/tgt/targets.conf
...
<target iqn.2017-09.com.example:server.target1>
    backing-store /dev/vdb
    initiator-address 172.25.66.1
    initiator-address 172.25.66.2
</target>
...
[root@server3 tgt]# /etc/init.d/tgtd start
Starting SCSI target daemon:                               [  OK  ]
[root@server3 tgt]# tgt-admin -s
.....
    Readonly: No
    Backing store type: rdwr
    Backing store path: /dev/vdb
    Backing store flags:
    Account information:
    ACL information:
        172.25.66.1
        172.25.66.2
.....
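Two follow-ups worth doing on server3, sketched here assuming the stock scsi-target-utils tooling:

chkconfig tgtd on         # export the target again after a reboot
tgt-admin --update ALL    # re-read targets.conf after editing it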
Initiator (client) side
# On server1 and server2:
[root@server1 ~]# yum install iscsi-* -y
[root@server1 ~]# /etc/init.d/iscsi start
[root@server1 ~]# iscsiadm -m discovery -t st -p 172.25.66.3
Starting iscsid:                                           [  OK  ]
172.25.66.3:3260,1 iqn.2017-09.com.example:server.target1
[root@server1 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2017-09.com.example:server.target1, portal: 172.25.66.3,3260] (multiple)
Login to [iface: default, target: iqn.2017-09.com.example:server.target1, portal: 172.25.66.3,3260] successful.
[root@server1 ~]# fdisk -l

* If the target was imported incorrectly and then fixed, the client-side iSCSI session must be restarted.
ext4 is a local filesystem: it is not cluster-aware, so the nodes do not stay in sync.
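The clean logout-and-rediscover sequence on each initiator looks like this (the same commands reappear in the teardown section at the end):

iscsiadm -m node -u                           # log out of the session
iscsiadm -m node -o delete                    # remove the cached node record
iscsiadm -m discovery -t st -p 172.25.66.3    # rediscover the target
iscsiadm -m node -l                           # log back in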
*** Clustered LVM
[root@server2 ~]# pvcreate /dev/sda
  Physical volume "/dev/sda" successfully created
[root@server2 ~]# vgcreate md0 /dev/sda
  Clustered volume group "md0" successfully created
[root@server2 ~]# lvcreate -L 2G -n lv0 md0
  Logical volume "lv0" created
[root@server2 ~]# lvextend -L +2G /dev/md0/lv0
  Extending logical volume lv0 to 4.00 GiB
  Logical volume lv0 successfully resized
[root@server2 ~]#
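vgcreate only reports "Clustered volume group" because cluster locking is already enabled on these nodes. If a node lacks it, it can be switched on roughly like this (assuming the lvm2-cluster package is installed):

lvmconf --enable-cluster     # sets locking_type = 3 in /etc/lvm/lvm.conf
/etc/init.d/clvmd start      # the distributed LVM lock daemon
chkconfig clvmd on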
** Verify the metadata is in sync on the other node
[root@server1 ~]# pvs
  PV         VG       Fmt  Attr PSize PFree
  /dev/vda2  VolGroup lvm2 a--  8.51g    0
[root@server1 ~]# pvs
  PV         VG       Fmt  Attr PSize PFree
  /dev/sda   md0      lvm2 a--  8.00g 8.00g
  /dev/vda2  VolGroup lvm2 a--  8.51g    0
[root@server1 ~]# pvs
  PV         VG       Fmt  Attr PSize PFree
  /dev/sda   md0      lvm2 a--  8.00g 6.00g
  /dev/vda2  VolGroup lvm2 a--  8.51g    0
[root@server1 ~]# lvs
  LV      VG       Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup -wi-ao----   7.61g
  lv_swap VolGroup -wi-ao---- 920.00m
  lv0     md0      -wi-a-----   2.00g
[root@server1 ~]# lvs
  LV      VG       Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup -wi-ao----   7.61g
  lv_swap VolGroup -wi-ao---- 920.00m
  lv0     md0      -wi-a-----   4.00g
[root@server1 ~]#
* Local filesystem (ext4)
# Format the LV, then mount it on both nodes and compare what each sees.
[root@server2 ~]# mkfs.ext4 /dev/md0/lv0
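A quick way to see the incoherence in practice (a sketch, with the LV mounted at /mnt on both nodes):

# on server1
touch /mnt/from-server1
# on server2, which mounted before the touch: the file does not show up,
ls /mnt
# and writing from both nodes at once will corrupt ext4 metadata.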
Adding the shared storage as a resource
Disable the Apache service group first, then add the virtual IP, the shared storage, and httpd as resources, in that order:

[root@server1 ~]# clusvcadm -d Apache
Local machine disabling service:Apache...Success
[root@server1 ~]# clusvcadm -e Apache
Local machine trying to enable service:Apache...Success
service:Apache is now running on server1
[root@server1 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root   7853764 1092548   6362268  15% /
tmpfs                           510200   25656    484544   6% /dev/shm
/dev/vda1                       495844   33469    436775   8% /boot
/dev/mapper/md0-lv0            4128448  139256   3779480   4% /var/www/html
[root@server1 ~]# vim /var/www/html/index.html
[root@server1 ~]# clustat
Cluster Status for lucci @ Sat Sep 23 16:31:14 2017
Member Status: Quorate

 Member Name        ID   Status
 ------ ----        ---- ------
 server1            1    Online, Local, rgmanager
 server2            2    Online, rgmanager

 Service Name       Owner (Last)    State
 ------- ----       ----- ------    -----
 service:Apache     server1         started
[root@server1 ~]# ifdown eth0
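What luci writes for this service into /etc/cluster/cluster.conf looks roughly like the following (illustrative only; "webfail" is an assumed failover-domain name, and attribute values will differ):

<service autostart="1" domain="webfail" name="Apache" recovery="relocate">
    <ip address="172.25.66.100" monitor_link="on"/>
    <fs device="/dev/md0/lv0" fstype="ext4" mountpoint="/var/www/html" name="webdata"/>
    <script file="/etc/init.d/httpd" name="httpd"/>
</service>

rgmanager starts the children top to bottom, which is why the VIP and filesystem must come before the httpd script.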
** Failover with the shared storage in place
[root@server2 ~]# clustat
Cluster Status for lucci @ Sat Sep 23 16:31:43 2017
Member Status: Quorate

 Member Name        ID   Status
 ------ ----        ---- ------
 server1            1    Offline
 server2            2    Online, Local, rgmanager

 Service Name       Owner (Last)    State
 ------- ----       ----- ------    -----
 service:Apache     server2         starting
[root@server2 ~]#
*** GFS2 cluster filesystem
First remove the ext4 filesystem resource (remove it from the service group first, then delete the resource itself), then reformat the LV with GFS2:

[root@server1 ~]# mkfs.gfs2 -j 3 -p lock_dlm -t lucci:mygfs2 /dev/md0/lv0
This will destroy any data on /dev/md0/lv0.
It appears to contain: symbolic link to `../dm-2'

Are you sure you want to proceed? [y/n] y

Device:                    /dev/md0/lv0
Blocksize:                 4096
Device Size                4.00 GB (1048576 blocks)
Filesystem Size:           4.00 GB (1048575 blocks)
Journals:                  3
Resource Groups:           16
Locking Protocol:          "lock_dlm"
Lock Table:                "lucci:mygfs2"
UUID:                      69d19ccc-1022-1c13-6c10-852a0e46a1a7

[root@server1 ~]# mount /dev/md0/lv0 /mnt/
[root@server1 mnt]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root   7853764 1090504   6364312  15% /
tmpfs                           510200   31816    478384   7% /dev/shm
/dev/vda1                       495844   33469    436775   8% /boot
/dev/mapper/md0-lv0            4193856  397148   3796708  10% /mnt
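Since GFS2 can be mounted on several nodes at once, a persistent mount plus later growth look roughly like this (a sketch; the _netdev option delays the mount until networking/iSCSI is up, and /var/www/html is an assumed mountpoint):

# /etc/fstab entry on both nodes
/dev/md0/lv0  /var/www/html  gfs2  _netdev  0 0

# grow the filesystem online after an lvextend
gfs2_grow /var/www/html
# add journals when more nodes will mount it (one journal per concurrent mounter)
gfs2_jadd -j 2 /var/www/html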
* Shutting the cluster down cleanly
In luci, "Leave Cluster" takes a node out of the running cluster; "Delete" removes the cluster configuration files and disables the boot-time autostart entries. Afterwards, tear down the iSCSI sessions on each node:

iscsiadm -m node -u            # log out of the session
iscsiadm -m node -o delete     # delete the node configuration records
----- After working through all of this, you should have a basic picture of the RHCS cluster suite! -----