Quickly Deploying a Ceph Environment with ceph-deploy


Ceph install by ceph-deploy

Based on the official community installation procedure:
Ceph Quick Installation

Local environment information:

Architecture used in this guide:
    ceph-deploy x 1
    MON x 1
    OSD x 2
CentOS 7:
    ceph-deploy + monitor (ceph1)
        192.168.122.18
        172.16.34.253
    osd (ceph2)
        192.168.122.38
        172.16.34.184
    osd (ceph3)
        192.168.122.158
        172.16.34.116

Common installation and configuration on all nodes

Set up a network proxy (optional)

Access to sites outside the country is usually slow, so setting a proxy is recommended.
vim /etc/environment
export http_proxy=xxx
export https_proxy=xxx
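Note that yum does not read /etc/environment, so if package downloads also need to go through the proxy, one option is to set it in /etc/yum.conf as well (the URL below is a placeholder):
vim /etc/yum.conf
proxy=http://proxy.example.com:8080    # placeholder; substitute your own proxy address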

Set the hostname and /etc/hosts

Set the hostname on each node:
hostnamectl set-hostname ceph1
hostnamectl set-hostname ceph2
hostnamectl set-hostname ceph3
Edit the hosts file on each node:
vim /etc/hosts
192.168.122.18 ceph1
192.168.122.38 ceph2
192.168.122.158 ceph3
Verify connectivity from each node:
ping -c 3 ceph1
ping -c 3 ceph2
ping -c 3 ceph3

Enable the NIC at boot

grep ONBOOT /etc/sysconfig/network-scripts/ifcfg-xxx
ONBOOT=yes
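If ONBOOT is still set to no, one way to flip it (ifcfg-eth0 below is a placeholder for your actual interface file):
sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-eth0
systemctl restart network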

Add EPEL and update packages

Add the EPEL repository:
sudo yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*
Add the Ceph repository (this step can be skipped):
sudo vim /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
Update packages:
sudo yum update -y

Disable the firewall and SELinux

Stop the firewall (the simplest option), or define your own iptables rules:
systemctl status firewalld.service
systemctl stop firewalld.service
systemctl disable firewalld.service
For iptables rules, see: http://docs.ceph.org.cn/start/quick-start-preflight/#id7
Disable SELinux: edit the config file (takes effect after reboot) plus a manual setting (takes effect immediately):
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
Verify the changes:
grep SELINUX= /etc/selinux/config
getenforce
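If you would rather leave firewalld running, a minimal sketch that opens Ceph's default ports (6789/tcp for monitors, 6800-7300/tcp for OSD daemons) instead of disabling the firewall:
firewall-cmd --zone=public --add-port=6789/tcp --permanent        # monitor
firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent   # OSD daemons
firewall-cmd --reload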

After the steps above, rebooting all nodes is recommended

Install and configure NTP

sudo yum install ntp ntpdate ntp-doc -y
systemctl restart ntpd
systemctl status ntpd
TODO: this test setup has only one MON node, so no NTP server was configured here. The official recommendation is to install and configure NTP on all nodes of the cluster.
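If you do want the nodes kept in sync, a minimal sketch for the OSD nodes, assuming ceph1 is used as the in-cluster time source (this guide did not actually set that up):
vim /etc/ntp.conf
server ceph1 iburst    # assumption: ceph1 serves time for the cluster
systemctl restart ntpd
ntpq -p                # verify the peer list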

Install and configure SSH

sudo yum install openssh-server -y
Most Linux distributions ship with SSH installed and running, so check first:
systemctl status sshd

ceph-deploy node installation

Install ceph-deploy

sudo yum install ceph-deploy -y

Configure SSH (generate a key pair for passwordless access)

The official docs recommend creating a dedicated deployment user (not named "ceph"); for simplicity, the root user is used directly here.
See the official docs: Create a Ceph Deploy User

Generate the key pair and copy the public key to each node:
ssh-keygen
ssh-copy-id root@ceph1
ssh-copy-id root@ceph2
ssh-copy-id root@ceph3
Verify passwordless access:
ssh ceph1
exit
ssh ceph2
exit
ssh ceph3
exit
Edit the SSH client config:
vim /root/.ssh/config
Host ceph1
   Hostname ceph1
   User root
Host ceph2
   Hostname ceph2
   User root
Host ceph3
   Hostname ceph3
   User root
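ssh is strict about the permissions on this config file; if it later complains about "Bad owner or permissions", tighten them:
chmod 600 /root/.ssh/config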

Install the storage cluster

Create a working directory:
mkdir -p /root/my_cluster
cd /root/my_cluster
Create the cluster:
ceph-deploy new ceph1
Edit the configuration file (a sketch of the resulting file follows below):
vim ceph.conf
osd pool default size = 2
public network = 192.168.122.18/24
★ Note: mon_host must be inside the public network subnet!
Install Ceph:
    If the network is slow, it is better to install manually on each node first and run this command last; otherwise it keeps failing, which is quite painful. See [ceph-deploy install process analysis] for the per-node steps.
ceph-deploy install ceph1 ceph2 ceph3
Initialize the monitor:
ceph-deploy mon create-initial
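For reference, after ceph-deploy new ceph1 plus the edits above, ceph.conf should look roughly like this (the fsid is generated per cluster; the one below is taken from the monitor log later in this guide):
[global]
fsid = 92da5066-e973-4e7e-8524-8dcbc948c93b
mon_initial_members = ceph1
mon_host = 192.168.122.18
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
public network = 192.168.122.18/24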

At this point the Ceph cluster has been created and the monitor is up and running; the next step is configuring the OSDs.

Verify the monitor is running

Process check:
    [root@ceph1 my_cluster]# ps -ef | grep ceph
    root     21688 16109  0 08:50 pts/0    00:00:00 grep --color=auto ceph
    ceph     29366     1  0 May17 ?        00:00:13 /usr/bin/ceph-mon -f --cluster ceph --id ceph1 --setuser ceph --setgroup ceph
systemctl check:
    [root@ceph1 my_cluster]# systemctl status | grep ceph
    ● ceph1
               │     └─21690 grep --color=auto ceph
                 ├─system-ceph\x2dmon.slice
                 │ └─ceph-mon@ceph1.service
                 │   └─29366 /usr/bin/ceph-mon -f --cluster ceph --id ceph1 --setuser ceph --setgroup ceph
    [root@ceph1 my_cluster]# systemctl status ceph-mon@ceph1.service
    ● ceph-mon@ceph1.service - Ceph cluster monitor daemon
       Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
       Active: active (running) since Wed 2017-05-17 16:50:48 CST; 16h ago
     Main PID: 29366 (ceph-mon)
       CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@ceph1.service
               └─29366 /usr/bin/ceph-mon -f --cluster ceph --id ceph1 --setuser ceph --setgroup ceph
    May 17 16:50:48 ceph1 systemd[1]: Started Ceph cluster monitor daemon.
    May 17 16:50:48 ceph1 systemd[1]: Starting Ceph cluster monitor daemon...
    May 17 16:50:48 ceph1 ceph-mon[29366]: starting mon.ceph1 rank 0 at 192.168.122.18:6789/0 mon_data /var/lib/ceph/mon/ceph-ceph1 fsid 92da5066-e973-4e7e-8524-8dcbc948c93b
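Besides the process checks, the cluster side can be queried as well once the admin keyring is in place (ceph-deploy admin is run in the OSD sections below); both are standard Ceph commands:
ceph -s                                    # overall status; should show 1 mon in quorum
ceph quorum_status --format json-pretty    # monitor quorum detail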

ceph-deploy install process analysis

connected to host
installing Ceph
    yum clean all
    yum -y install epel-release
    yum -y install yum-plugin-priorities
    rpm --import https://download.ceph.com/keys/release.asc
    rpm -Uvh --replacepkgs https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
    yum -y install ceph ceph-radosgw
To install manually on each node, run everything from yum -y install epel-release through the end.

Adding OSDs, option 1 (using two raw disks)

ceph-deploy disk zap ceph2:/dev/vdb ceph3:/dev/vdb
ceph-deploy osd prepare ceph2:/dev/vdb ceph3:/dev/vdb
★ Note: [The prepare command only prepares the OSD. On most operating systems, the activate phase runs automatically once the partition is created (via Ceph's udev rules), without the activate command.] From: http://docs.ceph.org.cn/rados/deployment/ceph-deploy-osd/
So the activate below will report an error even though the OSD is actually already active; the recommended approach is to check on the OSD node whether the service is running. See [Verify the OSDs are running (similar to the monitor check)].
ceph-deploy osd activate ceph2:/dev/vdb ceph3:/dev/vdb
Push the configuration file:
ceph-deploy admin ceph1 ceph2 ceph3
ceph health
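Beyond ceph health, two standard commands confirm that both OSDs actually joined the cluster:
ceph osd tree    # both OSDs should be listed and 'up'
ceph -s          # health should reach HEALTH_OK once all PGs are active+clean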

Adding OSDs, option 2 (using two directories)

ssh ceph2
sudo mkdir /var/local/osd0
exit
ssh ceph3
sudo mkdir /var/local/osd1
exit
ceph-deploy osd prepare ceph2:/var/local/osd0 ceph3:/var/local/osd1
ceph-deploy osd activate ceph2:/var/local/osd0 ceph3:/var/local/osd1
ceph-deploy admin ceph1 ceph2 ceph3
ceph health
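A known pitfall with directory-backed OSDs on these releases: the daemon runs as the ceph user, so if activate fails with "Permission denied", change the ownership of the directories first (only needed if that error actually appears):
ssh ceph2 sudo chown ceph:ceph /var/local/osd0
ssh ceph3 sudo chown ceph:ceph /var/local/osd1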

Verify the OSDs are running (similar to the monitor check)

[root@ceph2 ~]# ps -ef | grep ceph
root     15818 15802  0 09:17 pts/0    00:00:00 grep --color=auto ceph
ceph     24426     1  0 May17 ?        00:00:33 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
[root@ceph2 ~]# systemctl status | grep ceph
● ceph2
           │     └─15822 grep --color=auto ceph
             ├─system-ceph\x2dosd.slice
             │ └─ceph-osd@0.service
             │   └─24426 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
[root@ceph2 ~]# systemctl status ceph-osd@0.service
● ceph-osd@0.service - Ceph object storage daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: disabled)
   Active: active (running) since Wed 2017-05-17 16:56:54 CST; 16h ago
 Main PID: 24426 (ceph-osd)
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@0.service
           └─24426 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
May 17 16:56:53 ceph2 systemd[1]: Starting Ceph object storage daemon...
May 17 16:56:53 ceph2 ceph-osd-prestart.sh[24375]: create-or-move updating item name 'osd.0' weight 0.0146 at location {host=ceph2,root=default} to crush map
May 17 16:56:54 ceph2 systemd[1]: Started Ceph object storage daemon.
May 17 16:56:54 ceph2 ceph-osd[24426]: starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
May 17 16:56:54 ceph2 ceph-osd[24426]: 2017-05-17 16:56:54.080778 7f0727d3a800 -1 osd.0 0 log_to_monitors {default=true}

Cleaning up

Purge the packages:
ceph-deploy purge ceph1 ceph2 ceph3
Purge the configuration data:
ceph-deploy purgedata ceph1 ceph2 ceph3
ceph-deploy forgetkeys
Delete leftover configuration files on each node:
rm -rf /var/lib/ceph/osd/*
rm -rf /var/lib/ceph/mon/*
rm -rf /var/lib/ceph/mds/*
rm -rf /var/lib/ceph/bootstrap-mds/*
rm -rf /var/lib/ceph/bootstrap-osd/*
rm -rf /var/lib/ceph/bootstrap-mon/*
rm -rf /var/lib/ceph/tmp/*
rm -rf /etc/ceph/*
rm -rf /var/run/ceph/*
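If the raw disks will be reused for a fresh install, the old partition tables and Ceph signatures should be wiped as well; one way (destructive, double-check the device name):
ceph-deploy disk zap ceph2:/dev/vdb ceph3:/dev/vdb
Or directly on each OSD node:
wipefs -a /dev/vdb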

  • Ceph install by ceph-deploy
    • Common installation and configuration on all nodes
      • Set up a network proxy (optional)
      • Set the hostname and /etc/hosts
      • Enable the NIC at boot
      • Add EPEL and update packages
      • Disable the firewall and SELinux
      • After the steps above, rebooting all nodes is recommended
      • Install and configure NTP
      • Install and configure SSH
    • ceph-deploy node installation
      • Install ceph-deploy
      • Configure SSH (generate a key pair for passwordless access)
      • Install the storage cluster
        • Verify the monitor is running
        • ceph-deploy install process analysis
      • Adding OSDs, option 1 (using two raw disks)
      • Adding OSDs, option 2 (using two directories)
        • Verify the OSDs are running (similar to the monitor check)
    • Cleaning up