Setting up Ceph on CentOS 7


1: Environment
Host: CentOS 7
Image: official Docker centos:latest
2: Start the container that will become the init image

[root@sjc_dev ~]# docker run -d --name init_7 --hostname init_7 60e65a8e4030 /bin/bash -c 'for((i=0;1;i++));do echo $i;sleep 1;done;'
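The endless echo loop only keeps the container alive. To run the initialization commands in the next step, attach a shell to the container (this step is implicit in the transcript; note the prompt changes to init_7):

[root@sjc_dev ~]# docker exec -it init_7 /bin/bash
[root@init_7 /]#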

3: Initialization

[root@init_7 /]# yum install telnet openssh-clients openssh vim openssh-server openssl openssl-devel telnet lsof nc wget curl net-tools -y
[root@init_7 /]# wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
[root@init_7 /]# wget http://download.ceph.com/rpm/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[root@init_7 /]# yum groupinstall "Development tools" -y

Set TERM so that clearing the screen works:

[root@init_7 /]# cat ~/.bashrc
# .bashrc

# User specific aliases and functions
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
export TERM=linux

(other TERM values also work)

[root@init_7 /]# rpm -ivh ceph-release-1-0.el7.noarch.rpm
[root@init_7 /]# rpm -ivh epel-release-latest-7.noarch.rpm
[root@init_7 /]# yum install iptables-services
[root@init_7 /]# systemctl start sshd
[root@init_7 /]# ssh-keygen
[root@init_7 /]# ssh-copy-id -i root@172.17.0.2

4: Point the Ceph package and GPG key URLs at the 163 mirror. Save the following as a repo file under /etc/yum.repos.d/ (e.g. ceph.repo):

[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-hammer/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-hammer/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-hammer/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
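To check that yum actually resolves packages from the mirror, you can rebuild the cache and list the Ceph repos (a routine sanity check, not part of the original transcript):

[root@init_7 yum.repos.d]# yum clean all
[root@init_7 yum.repos.d]# yum repolist enabled | grep -i ceph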

5: Set environment variables. CEPH_DEPLOY_REPO_URL and CEPH_DEPLOY_GPG_URL tell ceph-deploy to install Ceph from the 163 mirror instead of ceph.com.

[root@init_7 yum.repos.d]# export CEPH_DEPLOY_REPO_URL=http://mirrors.163.com/ceph/rpm-hammer/el7/
[root@init_7 yum.repos.d]# export CEPH_DEPLOY_GPG_URL=http://mirrors.163.com/ceph/keys/release.asc
[root@init_7 /]# cat ~/.bashrc
# .bashrc

# User specific aliases and functions
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
export TERM=linux
export CEPH_DEPLOY_REPO_URL=http://mirrors.163.com/ceph/rpm-hammer/el7/
export CEPH_DEPLOY_GPG_URL=http://mirrors.163.com/ceph/keys/release.asc
[root@init_7 /]# . ~/.bashrc

6: Time synchronization

[root@init_7 ~]# rm /etc/localtime
rm: remove symbolic link ‘/etc/localtime’? y
[root@init_7 ~]# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
[root@localhost ~]# ntpdate cn.ntp.org.cn
26 Sep 14:30:30 ntpdate[7391]: adjust time server 182.92.12.11 offset 0.003023 sec

7: Commit the container to an image

[root@sjc_dev ~]# docker commit -m "ceph init_7 163 repo" init_7 centos7-ceph-init
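You can confirm the commit produced the image (a routine check, not part of the original transcript):

[root@sjc_dev ~]# docker images | grep centos7-ceph-init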

8: Create the experiment containers

[root@sjc_dev data]# docker run -d --name admin --privileged -v /opt/docker/data/el7/admin:/data --hostname admin centos7-ceph-init
[root@sjc_dev data]# docker run -d --name monitor --privileged -v /opt/docker/data/el7/monitor:/data --hostname monitor centos7-ceph-init
[root@sjc_dev data]# docker run -d --name osd01 --privileged -v /opt/docker/data/el7/osd01:/data --hostname osd01 centos7-ceph-init
[root@sjc_dev data]# docker run -d --name osd02 --privileged -v /opt/docker/data/el7/osd02:/data --hostname osd02 centos7-ceph-init

9: Log in to the admin node

Environment:

admin 172.17.0.3
monitor 172.17.0.4
osd01 172.17.0.5
osd02 172.17.0.6
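These addresses can be read back from Docker, for example (assuming the containers sit on Docker's default bridge network):

[root@sjc_dev ~]# docker inspect -f '{{.NetworkSettings.IPAddress}}' admin
172.17.0.3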

[root@sjc-el7 ~]# docker exec -it admin /bin/bash
[root@admin /]# mkdir /data/ceph-cluster
[root@admin /]# cd /data/ceph-cluster
[root@admin ceph-cluster]# yum install ceph-deploy

10: Set up the hostname mappings and copy them to the other nodes

[root@admin ceph-cluster]# scp /etc/hosts root@172.17.0.4:/etc/hosts
[root@admin ceph-cluster]# scp /etc/hosts root@172.17.0.5:/etc/hosts
[root@admin ceph-cluster]# scp /etc/hosts root@172.17.0.6:/etc/hosts
[root@admin ceph-cluster]# cat /etc/hosts
...
172.17.0.3 admin
172.17.0.4 monitor
172.17.0.5 osd01
172.17.0.6 osd02
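ceph-deploy drives the other nodes over SSH, so passwordless login from admin should work for every hostname above; a quick check (not part of the original transcript):

[root@admin ceph-cluster]# for h in monitor osd01 osd02; do ssh root@$h hostname; done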

11: Start the installation

[root@admin ceph-cluster]# ceph-deploy new monitor
[root@admin ceph-cluster]# ll
total 12
-rw-r--r-- 1 root root 3627 Sep 26 08:39 ceph-deploy-ceph.log
-rw-r--r-- 1 root root  221 Sep 26 08:40 ceph.conf
-rw------- 1 root root   73 Sep 26 08:39 ceph.mon.keyring

Since this test environment has only two OSD nodes, add osd pool default size = 2 to the config file:

[root@admin ceph-cluster]# cat ceph.conf
[global]
fsid = 805bb165-286c-4bfd-a166-08482652841f
mon_initial_members = monitor
mon_host = 172.17.0.4
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2

If you have multiple NICs, you can add a public network entry under the [global] section of the Ceph config file.
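For example, with the containers on Docker's default bridge the entry might look like this (the CIDR is an assumption; use your actual public network):

public network = 172.17.0.0/16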

Install Ceph

[root@admin ceph-cluster]# ceph-deploy install admin monitor osd01 osd02

Initialize the monitor

[root@admin ceph-cluster]# ceph-deploy mon create-initial
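On success, ceph-deploy gathers the cluster keyrings into the working directory; you can list them (a routine check, not part of the original transcript; exact file names depend on the ceph-deploy version):

[root@admin ceph-cluster]# ls -l *.keyring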

For this test, directories are used as OSD backing storage; run the following on the two OSD nodes respectively:

[root@osd01 ~]# mkdir /var/local/osd01
[root@osd02 ~]# mkdir /var/local/osd02

Prepare and activate the OSDs

[root@admin ceph-cluster]# ceph-deploy osd prepare osd01:/var/local/osd01 osd02:/var/local/osd02
[root@admin ceph-cluster]# ceph-deploy osd activate osd01:/var/local/osd01 osd02:/var/local/osd02

Use ceph-deploy to push the config file and admin key to the admin node and the Ceph nodes, so that you don't have to specify the monitor address and ceph.client.admin.keyring every time you run a Ceph command (in the command below, the first admin is the ceph-deploy subcommand; the rest are hostnames):

[root@admin ceph-cluster]# ceph-deploy admin admin monitor osd01 osd02
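The upstream quick-start also suggests making the pushed keyring readable; after that, plain ceph commands work from any of these nodes (both commands are from the standard workflow, not the original transcript):

[root@admin ceph-cluster]# chmod +r /etc/ceph/ceph.client.admin.keyring
[root@admin ceph-cluster]# ceph osd tree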

Check the status (two more directories were later created under the OSD nodes' data directories and added as OSDs, which is why four OSDs appear):

[root@admin ceph-cluster]# ceph -s
    cluster 805bb165-286c-4bfd-a166-08482652841f
     health HEALTH_OK
     monmap e1: 1 mons at {monitor=172.17.0.4:6789/0}
            election epoch 2, quorum 0 monitor
     osdmap e19: 4 osds: 4 up, 4 in
      pgmap v74: 64 pgs, 1 pools, 0 bytes data, 0 objects
            39045 MB used, 181 GB / 219 GB avail
                  64 active+clean

12: Using the block device
Create a client

[root@sjc-el7 ~]# docker run -d --name client --privileged -v /opt/docker/data/el7/client:/data --hostname client centos7-ceph-init

Install Ceph on the client

[root@client ~]# yum install ceph -y

Copy the Ceph config file and the client keyring:

[root@admin ceph-cluster]# scp ceph.conf root@172.17.0.7:/etc/ceph/
[root@admin ceph-cluster]# scp ceph.client.admin.keyring root@172.17.0.7:/etc/ceph/
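With the config and keyring in place, the client should be able to reach the cluster; a quick check (not part of the original transcript):

[root@client ~]# ceph -s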

Create a pool and a block device image (64 is the pool's placement-group count):

[root@client ~]# ceph osd pool create swimmingpool 64
[root@client ~]# rados lspools
rbd
swimmingpool

For example, to create a 1 GB image named bar in the swimmingpool pool, run:

[root@client ~]# rbd create --size 1024 swimmingpool/bar

List the block device images; by default the rbd pool is listed:

[root@client ~]# rbd ls
[root@client ~]# rbd ls swimmingpool
bar

View the image info

[root@client ~]# rbd info swimmingpool/bar
rbd image 'bar':
        size 1024 MB in 256 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.1030.238e1f29
        format: 1

Resize

[root@client ~]# rbd resize --size 2048 swimmingpool/bar
Resizing image: 100% complete...done.
[root@client ~]# rbd info swimmingpool/bar
rbd image 'bar':
        size 2048 MB in 512 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.1030.238e1f29
        format: 1

Mapping the image via the kernel module fails (because of Docker: the rbd kernel module is not available inside the container). Workaround: http://digoal.lofter.com/post/6ced3_4d15dc

[root@client /]# rbd map swimmingpool/bar --id admin
modinfo: ERROR: Module alias rbd not found.
rbd: failed to load rbd kernel module (1)
rbd: sysfs write failed
rbd: map failed: (2) No such file or directory

Because of this error, map the image on the host instead. Repeat the earlier steps there: 1) set up the Ceph and EPEL repos, 2) install Ceph, 3) copy ceph.conf and ceph.client.admin.keyring over from admin.

Map the image

[root@sjc-el7 ceph]# rbd map swimmingpool/bar --id admin
/dev/rbd0
[root@sjc-el7 ceph]# ll /dev/rbd
total 0
drwxr-xr-x 2 root root 60 Sep 28 12:03 swimmingpool
[root@sjc-el7 ceph]# ll /dev/rbd0
brw-rw---- 1 root disk 252, 0 Sep 28 12:03 /dev/rbd0

Partition and format (note: the transcript below creates /dev/rbd0p1 but then runs mkfs.ext4 on the whole /dev/rbd0, not on the partition)

[root@sjc-el7 ceph]# fdisk -c /dev/rbd0
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xc90b866d.

Command (m for help): p

Disk /dev/rbd0: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
Disk label type: dos
Disk identifier: 0xc90b866d

     Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (8192-4194303, default 8192):
Using default value 8192
Last sector, +sectors or +size{K,M,G} (8192-4194303, default 4194303):
Using default value 4194303
Partition 1 of type Linux and of size 2 GiB is set

Command (m for help): p

Disk /dev/rbd0: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
Disk label type: dos
Disk identifier: 0xc90b866d

     Device Boot      Start         End      Blocks   Id  System
/dev/rbd0p1            8192     4194303     2093056   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@sjc-el7 ceph]# mkfs.ext4 /dev/rbd0
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1024 blocks, Stripe width=1024 blocks
131072 inodes, 524288 blocks
26214 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

Mount it and write test data

[root@sjc-el7 ceph]# mount /dev/rbd0 /mnt
[root@sjc-el7 ceph]# cd /mnt/
[root@sjc-el7 mnt]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        20G  2.5G   17G  14% /
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G   49M  1.9G   3% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sdb        100G   15G   86G  15% /opt
tmpfs           380M     0  380M   0% /run/user/0
/dev/rbd0       2.0G  6.0M  1.8G   1% /mnt
[root@sjc-el7 mnt]# dd if=/dev/zero of=./test-ceph-block bs=512M count=1
1+0 records in
1+0 records out
536870912 bytes (537 MB) copied, 2.16404 s, 248 MB/s

Check the cluster status

[root@sjc-el7 mnt]# ceph -w
    cluster 805bb165-286c-4bfd-a166-08482652841f
     health HEALTH_OK
     monmap e1: 1 mons at {monitor=172.17.0.4:6789/0}
            election epoch 2, quorum 0 monitor
     osdmap e23: 4 osds: 4 up, 4 in
      pgmap v187: 128 pgs, 2 pools, 390 MB data, 106 objects
            41254 MB used, 179 GB / 219 GB avail
                 128 active+clean
  client io 1473 B/s rd, 3803 kB/s wr, 4 op/s

2017-09-28 12:06:50.337961 mon.0 [INF] pgmap v187: 128 pgs: 128 active+clean; 390 MB data, 41254 MB used, 179 GB / 219 GB avail; 1473 B/s rd, 3803 kB/s wr, 4 op/s

Extension question: if the image is already mapped and mounted, what changes when the resize shown earlier is run?

[root@sjc-el7 mnt]# rbd resize --size 3072 swimmingpool/bar
Resizing image: 100% complete...done.
[root@sjc-el7 mnt]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        20G  2.5G   17G  14% /
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G   49M  1.9G   3% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sdb        100G   16G   85G  16% /opt
tmpfs           380M     0  380M   0% /run/user/0
/dev/rbd0       2.0G  519M  1.3G  29% /mnt

fdisk now reports the new size, but the partition created earlier is gone (possibly because mkfs.ext4 was run on the whole device, which overwrites the partition table, rather than because of the resize itself):

[root@sjc-el7 mnt]# fdisk -l /dev/rbd0

Disk /dev/rbd0: 3221 MB, 3221225472 bytes, 6291456 sectors
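The mounted filesystem does not grow by itself. Since the filesystem here sits on the whole device rather than on a partition, one way to claim the new space is an online ext4 grow (a sketch, not from the original transcript):

[root@sjc-el7 mnt]# resize2fs /dev/rbd0
[root@sjc-el7 mnt]# df -h /mnt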