ceph - crush map and pool


Reference

openstack and ceph (pool management)

Common crush map management commands

Get the current crushmap (compiled binary format)
    ceph osd getcrushmap -o crushmap.dump
Decompile the crushmap (binary -> plain text)
    crushtool -d crushmap.dump -o crushmap.txt
Compile the crushmap (plain text -> binary)
    crushtool -c crushmap.txt -o crushmap.done
Apply the new crushmap
    ceph osd setcrushmap -i crushmap.done
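Before applying a recompiled map, the placement it would produce can be simulated offline with crushtool. A minimal sketch; the rule id and replica count below are illustrative, adjust them to your own map:

    # simulate mappings for rule 0 with 3 replicas, without touching the running cluster
    crushtool -i crushmap.done --test --rule 0 --num-rep 3 --show-statistics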

Dividing storage into separate physical regions is defined in the crush map, in two steps:

1. Divide the hosts into physical groups (buckets)
2. Define placement rules for those host groups

1. Physical host division

root default {
    id -1           # do not change unnecessarily
    # weight 264.000
    alg straw
    hash 0  # rjenkins1
    item 240.30.128.33 weight 12.000
    item 240.30.128.32 weight 12.000
    item 240.30.128.215 weight 12.000
    item 240.30.128.209 weight 12.000
    item 240.30.128.213 weight 12.000
    item 240.30.128.214 weight 12.000
    item 240.30.128.212 weight 12.000
    item 240.30.128.211 weight 12.000
    item 240.30.128.210 weight 12.000
    item 240.30.128.208 weight 12.000
    item 240.30.128.207 weight 12.000
    item 240.30.128.63 weight 12.000
    item 240.30.128.34 weight 12.000
    item 240.30.128.35 weight 12.000
    item 240.30.128.36 weight 12.000
    item 240.30.128.37 weight 12.000
    item 240.30.128.39 weight 12.000
    item 240.30.128.38 weight 12.000
    item 240.30.128.58 weight 12.000
    item 240.30.128.59 weight 12.000
    item 240.30.128.60 weight 12.000
    item 240.30.128.29 weight 12.000
}
root registry {
    id -26
    # weight 36.000
    alg straw
    item 240.30.128.206 weight 12.000
    item 240.30.128.40 weight 12.000
    item 240.30.128.30 weight 12.000
}

Explanation

The map above defines two physical regions:
1. The default root, with 264 TB of space in total
2. The registry root, with 36 TB of space in total
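The same layout can also be built from the command line instead of hand-editing the decompiled map; a sketch using one of the host bucket names from the map above:

    # create a new root bucket, then move a host bucket under it
    ceph osd crush add-bucket registry root
    ceph osd crush move 240.30.128.206 root=registry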

Points to note:

It is recommended to plan the physical pools before any data is stored; otherwise large amounts of data migration will occur, or OSDs may fill up (osd full).

2. Rule definition

rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
rule registry_ruleset {
    ruleset 1
    type replicated
    min_size 2
    max_size 3
    step take registry
    step chooseleaf firstn 0 type host
    step emit
}
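Once the map is installed, the rules can be inspected directly on the live cluster, for example:

    # list rule names, then dump one rule in detail
    ceph osd crush rule ls
    ceph osd crush rule dump registry_ruleset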

Pool creation and deletion

Create

ceph osd pool create volumes 10240 10240
ceph osd pool create paas 2048 2048
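A common rule of thumb for pg_num is (number of OSDs * 100) / replica count, rounded up to a power of two. A pool can also be bound to a crush ruleset at creation time; a sketch using the hammer/jewel-era argument form, with illustrative pool name and pg counts:

    # create a replicated pool that uses registry_ruleset from the start
    ceph osd pool create registry 512 512 replicated registry_ruleset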

Delete

ceph osd pool delete paas paas --yes-i-really-really-mean-it
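On Luminous and newer releases the monitors additionally refuse pool deletion unless mon_allow_pool_delete is enabled; a sketch, not needed on the hammer-era cluster shown here:

    # temporarily allow pool deletion on all monitors
    ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'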

Query

[root@hh-ceph-128215 ~]# ceph osd dump | grep replica
pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 6655 flags hashpspool stripe_width 0
pool 1 'volumes' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 10240 pgp_num 10240 last_change 634 flags hashpspool stripe_width 0
pool 4 'pppoe' replicated size 3 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 512 pgp_num 512 last_change 7323 flags hashpspool stripe_width 0

Note:
replicated size = number of replicas
crush_ruleset = the crush map rule that binds the pool to a physical region

Assign a ruleset to a pool

ceph osd pool set paas crush_ruleset 1
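Changing the ruleset of an existing pool moves its data into the new physical region, so expect rebalancing traffic. The change can be verified and the recovery watched, for example:

    ceph osd dump | grep paas     # the pool should now show crush_ruleset 1
    ceph -s                       # watch recovery / rebalancing progress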