A Ceph CRUSH tuning example: two types of disks in the same host


Note: This document describes a test of using CRUSH to tier OSDs when a single host carries two kinds of disks: SSDs and SATA drives.
All tests below were run on Ceph 0.94.x.

Test environment:

ceph-mon     1 node
ceph-osd1    10G*2  20G*2
ceph-osd2    10G*2  20G*2
ceph-osd3    10G*1  20G*1

Assume the 10G disks are SSDs and the 20G disks are SATA.

ceph-osd1:
/dev/sdb1                5.0G   34M  5.0G   1% /var/lib/ceph/osd/ceph-0
/dev/sdc1                5.0G   34M  5.0G   1% /var/lib/ceph/osd/ceph-1
/dev/sdd1                 15G   35M   15G   1% /var/lib/ceph/osd/ceph-2
/dev/sde1                 15G   34M   15G   1% /var/lib/ceph/osd/ceph-3

ceph-osd2:
/dev/sdb1                5.0G   34M  5.0G   1% /var/lib/ceph/osd/ceph-4
/dev/sdc1                 15G   34M   15G   1% /var/lib/ceph/osd/ceph-5
/dev/sdd1                 15G   34M   15G   1% /var/lib/ceph/osd/ceph-6
/dev/sde1                5.0G   34M  5.0G   1% /var/lib/ceph/osd/ceph-7

ceph-osd3:
/dev/sdb1                5.0G   34M  5.0G   1% /var/lib/ceph/osd/ceph-8
/dev/sdc1                 15G   34M   15G   1% /var/lib/ceph/osd/ceph-9

$ ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.04997 root default
-2 0.01999     host ceph-osd1
 0       0         osd.0           up  1.00000          1.00000
 1       0         osd.1           up  1.00000          1.00000
 2 0.00999         osd.2           up  1.00000          1.00000
 3 0.00999         osd.3           up  1.00000          1.00000
-3 0.01999     host ceph-osd2
 4       0         osd.4           up  1.00000          1.00000
 5 0.00999         osd.5           up  1.00000          1.00000
 6 0.00999         osd.6           up  1.00000          1.00000
 7       0         osd.7           up  1.00000          1.00000
-4 0.00999     host ceph-osd3
 8       0         osd.8           up  1.00000          1.00000
 9 0.00999         osd.9           up  1.00000          1.00000

Procedure:

Export the current CRUSH map:

$ ceph osd getcrushmap -o crushmap.map
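
It can help to keep an untouched copy of the exported binary map, so the original layout can be restored with a single setcrushmap if the edited map misbehaves (a suggested safeguard, not part of the original procedure; the backup file name is arbitrary):

$ cp crushmap.map crushmap.orig.map
$ # to roll back later: ceph osd setcrushmap -i crushmap.orig.map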

Decompile the map into a readable text file:

$ crushtool -d crushmap.map -o crushmap.txt

Original CRUSH map:

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable straw_calc_version 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 osd.8
device 9 osd.9

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host ceph-osd1 {
    id -2        # do not change unnecessarily
    # weight 0.020
    alg straw
    hash 0    # rjenkins1
    item osd.0 weight 0.000
    item osd.1 weight 0.000
    item osd.2 weight 0.010
    item osd.3 weight 0.010
}
host ceph-osd2 {
    id -3        # do not change unnecessarily
    # weight 0.020
    alg straw
    hash 0    # rjenkins1
    item osd.4 weight 0.000
    item osd.5 weight 0.010
    item osd.6 weight 0.010
    item osd.7 weight 0.000
}
host ceph-osd3 {
    id -4        # do not change unnecessarily
    # weight 0.010
    alg straw
    hash 0    # rjenkins1
    item osd.8 weight 0.000
    item osd.9 weight 0.010
}
root default {
    id -1        # do not change unnecessarily
    # weight 0.050
    alg straw
    hash 0    # rjenkins1
    item ceph-osd1 weight 0.020
    item ceph-osd2 weight 0.020
    item ceph-osd3 weight 0.010
}

# rules
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}

# end crush map

After editing:
Note: a new bucket type is added between osd and host, which splits each host's devices into two groups, one per disk type.

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable straw_calc_version 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 osd.8
device 9 osd.9

# types
type 0 osd
type 1 diskarray
type 2 host
type 3 chassis
type 4 rack
type 5 row
type 6 pdu
type 7 pod
type 8 room
type 9 datacenter
type 10 region
type 11 root

# buckets
diskarray ceph-osd1-ssd {
    id -1
    alg straw
    hash 0
    item osd.0 weight 0.005
    item osd.1 weight 0.005
}
diskarray ceph-osd1-sata {
    id -2
    alg straw
    hash 0
    item osd.2 weight 0.015
    item osd.3 weight 0.015
}
diskarray ceph-osd2-ssd {
    id -3
    alg straw
    hash 0
    item osd.4 weight 0.005
    item osd.7 weight 0.005
}
diskarray ceph-osd2-sata {
    id -4
    alg straw
    hash 0
    item osd.5 weight 0.015
    item osd.6 weight 0.015
}
diskarray ceph-osd3-ssd {
    id -5
    alg straw
    hash 0
    item osd.8 weight 0.005
}
diskarray ceph-osd3-sata {
    id -6
    alg straw
    hash 0
    item osd.9 weight 0.015
}
root ssd {
    id -7
    alg straw
    hash 0
    item ceph-osd1-ssd weight 0.010
    item ceph-osd2-ssd weight 0.010
    item ceph-osd3-ssd weight 0.005
}
root sata {
    id -8
    alg straw
    hash 0
    item ceph-osd1-sata weight 0.030
    item ceph-osd2-sata weight 0.030
    item ceph-osd3-sata weight 0.015
}

# rules
rule ssd_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 4
    step take ssd
    step chooseleaf firstn 0 type diskarray
    step emit
}
rule sata_ruleset {
    ruleset 1
    type replicated
    min_size 1
    max_size 5
    step take sata
    step chooseleaf firstn 0 type diskarray
    step emit
}

# end crush map

Recompile the edited map into binary form:

$ crushtool -c crushmapnew.txt -o crushmapnew.map
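
Before injecting the new map, it can be sanity-checked offline with crushtool's built-in tester, which simulates placements for a given rule (a sketch; rulesets 0 and 1 correspond to ssd_ruleset and sata_ruleset defined above, and --num-rep is the replica count to simulate):

$ crushtool -i crushmapnew.map --test --show-statistics --rule 0 --num-rep 2
$ crushtool -i crushmapnew.map --test --show-statistics --rule 1 --num-rep 2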

Inject it back into Ceph:

$ ceph osd setcrushmap -i crushmapnew.map
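
After the new map is in place, the new hierarchy and rules can be inspected (commands only; the exact output depends on the cluster):

$ ceph osd tree
$ ceph osd crush rule ls
$ ceph osd crush rule dump ssd_ruleset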

Create a pool of each type:

$ ceph osd pool create ssdpool 128 ssd_ruleset
$ ceph osd pool create satapool 128 sata_ruleset
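
To confirm that each pool is bound to the intended ruleset, and to spot-check where objects land, something like the following can be used (the object name testobj and the local file path are only examples):

$ ceph osd pool get ssdpool crush_ruleset
$ ceph osd pool get satapool crush_ruleset
$ rados -p ssdpool put testobj /etc/hosts
$ ceph osd map ssdpool testobj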

Note:
On version 0.94, each OSD must be configured with:

osd_crush_update_on_start = false

Otherwise, each OSD will automatically move itself back under its host bucket when it starts.
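
A minimal sketch of where this goes in ceph.conf, assuming the option should apply to all OSDs (it must be in place before the OSDs are restarted):

[osd]
osd_crush_update_on_start = false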

Version 10.2.x has not been tested.
