Ceph: adding/removing OSDs (ceph.conf)


      A previous blog post covered adding and removing OSDs with ceph-deploy. This post records how, on a cluster installed from the Ceph sources, OSDs are added and removed by editing the ceph.conf configuration file.

Environment

Ceph 0.80.7
CentOS 6.5 x86_64
OpenStack Icehouse

The existing ceph.conf is as follows:
[global]
    pid file                   = /var/run/ceph/$name.pid
    auth cluster required      = cephx
    auth service required      = cephx
    auth client required       = cephx
    keyring                    = /etc/ceph/keyring.admin

[mon]
    mon data                   = /mon
    mon clock drift allowed    = .25

[mon.alpha]
    host                       = mon1
    mon addr                   = 10.10.200.3:6789

[mon.beta]
    host                       = mon2
    mon addr                   = 10.10.200.4:6789

[mon.gamma]
    host                       = mon3
    mon addr                   = 10.10.200.10:6789

[mds]

[osd]
    osd data                   = /osd/$name
    osd mkfs type              = xfs
    osd journal                = /osd/$name/journal
    keyring                    = /etc/ceph/keyring.$name
    osd crush update on start  = false

[osd.0]
    host                       = osd1
    devs                       = /dev/vdb

[osd.1]
    host                       = osd2
    devs                       = /dev/vdb

[osd.2]
    host                       = osd3
    devs                       = /dev/vdb

[osd.3]
    host                       = osd4
    devs                       = /dev/vdb

[osd.4]
    host                       = osd5
    devs                       = /dev/vdb

Procedure

Adding an OSD

Check the existing OSD nodes with ceph osd tree:
# id    weight  type name       up/down reweight
-1      5       root default
-3      4               rack unknownrack
-2      1                       host osd1
0       1                               osd.0   up      1
-4      1                       host osd2
1       1                               osd.1   up      1
-5      1                       host osd3
2       1                               osd.2   up      1
-6      1                       host osd4
3       1                               osd.3   up      1
-7      1               host osd5
4       1                       osd.4   up      1

For example, to add a new OSD node, osd.5, perform the following steps.
1. Add an osd.5 section to the ceph.conf configuration file
[osd.5]
    host                         = osd6
    devs                         = /dev/vdb

2. Create osd.5
[root@osd6 osd.5]# ceph osd create
5
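The id returned here (5) is the one the cluster allocated. As an optional sanity check that is not part of the original post, you can confirm the new id is present in the osdmap before continuing, for example:

# Optional check (not in the original post): the newly created id should appear in the osdmap
[root@osd6 osd.5]# ceph osd dump | grep '^osd\.5 '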

3. Create the data directory needed by osd.5
# mkdir /osd/osd.5
# mkfs.xfs /dev/vdb
# mount /dev/vdb /osd/osd.5
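The original post does not make this mount persistent. As a sketch, assuming the data disk is /dev/vdb and the mount point is /osd/osd.5 as above, an /etc/fstab entry along these lines would restore the mount after a reboot:

# Illustrative /etc/fstab entry so the OSD data directory is remounted at boot
/dev/vdb    /osd/osd.5    xfs    defaults,noatime    0 0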

4. Initialize the OSD data directory
[root@osd6 osd]# ceph-osd -i 5 --mkfs --mkkey
2014-10-27 15:44:46.529590 7f5e4997e7a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-10-27 15:44:46.688990 7f5e4997e7a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-10-27 15:44:46.691501 7f5e4997e7a0 -1 filestore(/osd/osd.5) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2014-10-27 15:44:46.915155 7f5e4997e7a0 -1 created object store /osd/osd.5 journal /osd/osd.5/journal for osd.5 fsid 1cf40c50-7283-4653-9fe1-56a633df5d24
2014-10-27 15:44:46.915853 7f5e4997e7a0 -1 already have key in keyring /etc/ceph/keyring.osd.5

5. Add the OSD's auth key
[root@osd6 osd]# ceph auth add osd.5 osd 'allow *' mon 'allow rwx' -i /etc/ceph/keyring.osd.5
added key for osd.5
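If you want to verify that the key and capabilities were registered correctly (not shown in the original post), the entry can be read back from the monitors:

# Optional check: print the key and caps now stored for osd.5
[root@osd6 osd]# ceph auth get osd.5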

6. Add osd.5 to the CRUSH map
[root@osd6 osd]# ceph osd crush add 5 1.0 root=default host=osd6
add item id 5 name 'osd.5' weight 1 at location {host=osd6,root=default} to crush map
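This manual step is needed because the [osd] section sets osd crush update on start = false, so the daemon will not insert itself into the CRUSH map when it starts; the weight of 1.0 simply matches the existing OSDs. As an illustrative variant, not what the original post did, the new host could be placed under the existing rack bucket instead of directly under the root:

# Hypothetical alternative: hang host osd6 off the unknownrack bucket rather than the root
[root@osd6 osd]# ceph osd crush add 5 1.0 root=default rack=unknownrack host=osd6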

7. Start osd.5
[root@mon1 ~]# /etc/init.d/ceph -a start osd.5
=== osd.5 ===
Mounting xfs on osd6:/osd/osd.5
Starting Ceph osd.5 on osd6...
starting osd.5 at :/0 osd_data /osd/osd.5 /osd/osd.5/journal
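Once osd.5 is up, the cluster begins backfilling placement groups onto it. A general check, not part of the original post, is to watch the cluster work its way back to a healthy state:

# Watch health and recovery progress while data rebalances onto the new OSD
[root@mon1 ~]# ceph health
[root@mon1 ~]# ceph -w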

Check the OSD tree at this point:
[root@mon1 ~]# ceph osd tree
# id    weight  type name       up/down reweight
-1      6       root default
-3      4               rack unknownrack
-2      1                       host osd1
0       1                               osd.0   up      1
-4      1                       host osd2
1       1                               osd.1   up      1
-5      1                       host osd3
2       1                               osd.2   up      1
-6      1                       host osd4
3       1                               osd.3   up      1
-7      1               host osd5
4       1                       osd.4   up      1
-8      1               host osd6
5       1                       osd.5   up      1

Removing an OSD

Check the existing OSD nodes:
[root@mon1 ~]# ceph osd tree
# id    weight  type name       up/down reweight
-1      6       root default
-3      4               rack unknownrack
-2      1                       host osd1
0       1                               osd.0   up      1
-4      1                       host osd2
1       1                               osd.1   up      1
-5      1                       host osd3
2       1                               osd.2   up      1
-6      1                       host osd4
3       1                               osd.3   up      1
-7      1               host osd5
4       1                       osd.4   up      1
-8      1               host osd6
5       1                       osd.5   up      1

For example, to remove the osd.5 node, proceed with the following steps.
1. Remove osd.5 from the CRUSH map
[root@mon1 ~]# ceph osd crush remove osd.5
removed item id 5 name 'osd.5' from crush map
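Removing the item from the CRUSH map means no new data is mapped to osd.5, and the placement groups it held are rebalanced onto the remaining OSDs. A reasonable precaution, not mentioned in the original post, is to wait for the cluster to reach HEALTH_OK before deleting the OSD itself:

# Wait for the rebalance triggered by the CRUSH removal to complete
[root@mon1 ~]# ceph -w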

2. Delete osd.5's auth key
[root@mon1 ~]# ceph auth del osd.5
updated

3. Remove the OSD
[root@mon1 ~]# ceph osd rm 5
removed osd.5
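ceph osd rm only removes an OSD that is marked down; if the osd.5 daemon were still running, the command would refuse. As a sketch, the daemon can be stopped with the same init script used in the add procedure before retrying:

# Stop the daemon on osd6 if it is still running, then re-run "ceph osd rm 5"
[root@mon1 ~]# /etc/init.d/ceph -a stop osd.5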

4. Remove the osd.5 section from the ceph.conf configuration file
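Concretely, this means deleting the section added in step 1 of the add procedure; optionally, the now unused data directory on osd6 can also be unmounted (the unmount is not mentioned in the original post):

# Section to delete from ceph.conf
[osd.5]
    host                         = osd6
    devs                         = /dev/vdb

# Optionally, on osd6, unmount the old data directory
[root@osd6 ~]# umount /osd/osd.5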

Check the OSD tree again:
# id    weight  type name       up/down reweight
-1      5       root default
-3      4               rack unknownrack
-2      1                       host osd1
0       1                               osd.0   up      1
-4      1                       host osd2
1       1                               osd.1   up      1
-5      1                       host osd3
2       1                               osd.2   up      1
-6      1                       host osd4
3       1                               osd.3   up      1
-7      1               host osd5
4       1                       osd.4   up      1


