ceph - changing the ceph journal location


Purpose

Separate the ceph OSD data and journal locations.

Environment

The ceph cluster layout is as follows:

[root@ceph-gw-209214 ~]# ceph osd tree
# id    weight  type name       up/down reweight
-1      12      root default
-2      3               host ceph-gw-209214
0       1                       osd.0   up      1
1       1                       osd.1   up      1
2       1                       osd.2   up      1
-4      3               host ceph-gw-209216
6       1                       osd.6   up      1
7       1                       osd.7   up      1
8       1                       osd.8   up      1
-5      3               host ceph-gw-209217
9       1                       osd.9   up      1
10      1                       osd.10  up      1
11      1                       osd.11  up      1
-6      3               host ceph-gw-209219
3       1                       osd.3   up      1
4       1                       osd.4   up      1
5       1                       osd.5   up      1

The corresponding disk partitions:

192.168.209.214
/dev/sdb1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-0
/dev/sdc1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-1
/dev/sdd1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-2
192.168.209.219
/dev/sdb1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-3
/dev/sdc1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-4
/dev/sdd1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-5
192.168.209.216
/dev/sdc1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-7
/dev/sdd1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-8
/dev/sdb1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-6
192.168.209.217
/dev/sdc1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-10
/dev/sdb1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-9
/dev/sdd1                 50G  1.1G   49G   3% /var/lib/ceph/osd/ceph-11

Query the current journal location

[root@ceph-gw-209214 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep osd_journal
  "osd_journal": "\/var\/lib\/ceph\/osd\/ceph-0\/journal",
[root@ceph-gw-209214 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config show | grep osd_journal
  "osd_journal": "\/var\/lib\/ceph\/osd\/ceph-1\/journal",

By default, each ceph journal is stored at /var/lib/ceph/osd/ceph-$id/journal, i.e. on the same partition as that OSD's data.
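To check the journal path of every OSD running on a host in one pass, a small loop over the admin sockets works. This is a sketch; it assumes the default admin socket naming seen above (/var/run/ceph/ceph-osd.$id.asok).

# Sketch: print the journal path of every OSD daemon on this host.
for sock in /var/run/ceph/ceph-osd.*.asok
do
        echo "$sock:"
        ceph --admin-daemon "$sock" config show | grep osd_journal
done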

How to change the journal location:

Create the directories

Script:

#!/bin/bash
# Create one journal directory per OSD partition on every host.
LANG=en_US
# Hosts in the order their OSD ids were assigned (0-2, 3-5, 6-8, 9-11); taken from the cluster layout above.
ips="192.168.209.214 192.168.209.219 192.168.209.216 192.168.209.217"
num=0
for ip in $ips
do
        # List the non-system data partitions (everything except sda) on the remote host.
        diskpart=`ssh $ip "fdisk -l | grep Linux | grep -v sda" | awk '{print $1}' | sort`
        for partition in $diskpart
        do
                ssh $ip "mkdir /var/log/ceph-$num"
                let num++
        done
done

The result:

192.168.209.214
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-0
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-1
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-2
192.168.209.219
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-3
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-4
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-5
192.168.209.216
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-6
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-7
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-8
192.168.209.217
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-10
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-11
drwxr-xr-x 2 root root 6 Dec 24 11:04 /var/log/ceph-9

Modify the configuration file

Add the following to /etc/ceph/ceph.conf. Ceph expands $cluster to the cluster name (here, ceph) and $id to the OSD id, so for osd.1 the journal path resolves to /var/log/ceph-1/journal, matching the directories created above.

[osd]
osd journal = /var/log/$cluster-$id/journal
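The change has to be present on every OSD host before its daemons are restarted. A minimal way to distribute it, assuming passwordless root ssh to the four hosts above and no config-management tool in use:

# Sketch: push the updated ceph.conf to every OSD host.
for ip in 192.168.209.214 192.168.209.219 192.168.209.216 192.168.209.217
do
        scp /etc/ceph/ceph.conf root@$ip:/etc/ceph/ceph.conf
done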

Steps to migrate each OSD

Verify the current journal location

[root@ceph-gw-209214 ceph]# ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config show | grep osd_journal
  "osd_journal": "\/var\/lib\/ceph\/osd\/ceph-1\/journal",

Set noout and stop the osd

[root@ceph-gw-209214 ceph]# ceph osd set noout
set noout
[root@ceph-gw-209214 ceph]# /etc/init.d/ceph stop osd.1
=== osd.1 ===
Stopping Ceph osd.1 on ceph-gw-209214...kill 2744...kill 2744...done

Manually move the journal

[root@ceph-gw-209214 ceph]# mv /var/lib/ceph/osd/ceph-1/journal  /var/log/ceph-1/

Start the osd and clear noout

[root@ceph-gw-209214 ceph]# /etc/init.d/ceph start osd.1
=== osd.1 ===
create-or-move updated item name 'osd.1' weight 0.05 at location {host=ceph-gw-209214,root=default} to crush map
Starting Ceph osd.1 on ceph-gw-209214...
Running as unit run-13260.service.
[root@ceph-gw-209214 ceph]# ceph osd unset noout
unset noout

Verify

[root@ceph-gw-209214 ceph]# ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config show | grep osd_journal
  "osd_journal": "\/var\/log\/ceph-1\/journal",
  "osd_journal_size": "1024",
[root@ceph-gw-209214 ceph]# ceph -s
    cluster 1237dd6a-a4f6-43e0-8fed-d9bcc8084bf1
     health HEALTH_OK
     monmap e1: 2 mons at {ceph-gw-209214=192.168.209.214:6789/0,ceph-gw-209216=192.168.209.216:6789/0}, election epoch 8, quorum 0,1 ceph-gw-209214,ceph-gw-209216
     osdmap e189: 12 osds: 12 up, 12 in
      pgmap v6474: 4560 pgs, 10 pools, 1755 bytes data, 51 objects
            10725 MB used, 589 GB / 599 GB avail
                4560 active+clean
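The same sequence is repeated for the remaining OSDs. A sketch that wraps the manual steps above for the OSDs local to one host; the script name and its argument list of OSD ids are hypothetical, and it simply replays the commands shown earlier:

#!/bin/bash
# Hypothetical move-journal.sh: run on each host, passing its local OSD ids,
# e.g. ./move-journal.sh 0 1 2 on ceph-gw-209214.
ceph osd set noout
for id in "$@"
do
        /etc/init.d/ceph stop osd.$id
        mv /var/lib/ceph/osd/ceph-$id/journal /var/log/ceph-$id/
        /etc/init.d/ceph start osd.$id
        # Optionally check "ceph -s" here before moving on to the next OSD.
done
ceph osd unset noout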