Manual Ceph installation on Ubuntu 14.10 (revised)


Environment: Ubuntu 14.10

Ceph version: 0.80.9

Reference: http://mirrors.myccdn.info/ceph/doc/docs_zh/output/html/

The environment consists of three nodes:

10.1.105.31 node1 (admin node)

10.1.105.32 node2

10.1.105.33 node3

 

All of the following steps are performed as root.

Preparing the environment

On every node, edit /etc/ssh/sshd_config (if this file is missing, openssh-server is not installed; install it with apt-get install openssh-server):

#vi /etc/ssh/sshd_config

Change PermitRootLogin without-password to PermitRootLogin yes,

then run service ssh restart.
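If you prefer to make this change non-interactively, a minimal sketch (assuming GNU sed and the stock Ubuntu sshd_config) is:

sed -i 's/^PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config

service ssh restart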

 

Change the hostname on every node:

#vi /etc/hostname  (change the default user to node1, node2, or node3 as appropriate)

Configure the hosts file on each node:

#vi /etc/hosts

Change the default 127.0.1.1 user entry to 127.0.1.1 node1 (node2 / node3 on the respective nodes),

and add the following entries:

10.1.105.31 node1
10.1.105.32 node2
10.1.105.33 node3

Generate SSH keys and copy them between the nodes so that they can reach each other without a password.

#ssh-keygen  (press Enter at every prompt to use an empty passphrase)

Copy the key to every node; run the following on each node:

ssh-copy-id root@node1 && ssh-copy-id root@node2 && ssh-copy-id root@node3

Verify that the nodes can reach each other without a password:

On node1 run #ssh node2 and #ssh node3, and do the same from the other nodes.
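A quick way to check all of them from one shell (a small sketch that just prints each node's hostname over SSH):

for h in node1 node2 node3; do ssh root@$h hostname; done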

 

Once that works, install the dependency packages on every node:

apt-get install libaio1 libsnappy1 libcurl3 curl libgoogle-perftools4 google-perftools libleveldb1

 

apt-get install autotools-dev autoconf automake cdbs gcc g++ git libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++ libexpat1-dev

 

apt-get install uuid-dev libkeyutils-dev libgoogle-perftools-dev libatomic-ops-dev libaio-dev libgdata-common libgdata1* libsnappy-dev libleveldb-dev

 

Install the NTP service:

apt-get install ntp

and start it:

/etc/init.d/ntp restart
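Since the monitors will later warn about clock skew if the clocks drift apart, it is worth confirming that every node is actually syncing. A minimal check with the standard ntpq tool (the peer list shown depends on your NTP servers):

ntpq -p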

 

In this environment the Ceph packages can be installed straight from the repositories:

apt-get update && apt-get install ceph-deploy

 

Create a cluster

ceph-deploy new node1 node2 node3
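ceph-deploy new writes the initial cluster files (ceph.conf and a monitor keyring) into the current directory. A quick way to confirm it picked up the intended monitors, assuming you run it from your working directory:

grep mon_initial_members ceph.conf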

 

Install Ceph

ceph-deploy install node1 node2 node3

 

Create the cluster monitors

ceph-deploy mon create node1 node2 node3

 

Check the cluster:

root@node1:/etc/ceph# ceph -s
    cluster f7dd3522-20fe-4542-a07b-edbfe59c424e
     health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds; clock skew detected on mon.node2
     monmap e1: 2 mons at {node1=10.1.65.121:6789/0,node2=10.1.65.122:6789/0}, election epoch 4, quorum 0,1 node1,node2
     osdmap e1: 0 osds: 0 up, 0 in
      pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  192 creating

root@node1:/etc/ceph# ceph -w
    cluster f7dd3522-20fe-4542-a07b-edbfe59c424e
     health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds; clock skew detected on mon.node2
     monmap e1: 2 mons at {node1=10.1.65.121:6789/0,node2=10.1.65.122:6789/0}, election epoch 4, quorum 0,1 node1,node2
     osdmap e1: 0 osds: 0 up, 0 in
      pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  192 creating

2015-09-08 10:24:45.053626 mon.1 [WRN] message from mon.0 was stamped 0.342569s in the future, clocks not synchronized

 

Change the default pool replica count

Add osd pool default size = 2 to the [global] section of /etc/ceph/ceph.conf:

 

[global]

fsid = 1eb0d6cb-b4db-41b6-b610-48e58b812a6f

mon_initial_members = node1, node2, node3

mon_host = 10.1.105.31,10.1.105.32,10.1.105.33

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required = cephx

filestore_xattr_use_omap = true

osd_pool_default_size = 2

 

[mon]

mon clock drift allowed = 2

mon clock drift warn backoff = 30

 

[osd.0]

host = node1

[osd.1]

host = node2

[osd.2]

host = node3

Gather the keys

ceph-deploy gatherkeys node1

ceph-deploy gatherkeys node2

ceph-deploy gatherkeys node3

 

Push the config file to the other nodes

ceph-deploy --overwrite-conf config push node2

ceph-deploy --overwrite-conf config push node3
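The [mon] clock-drift settings only take effect once the monitors reload their configuration. On Ubuntu 14.10 the daemons run under upstart, so, assuming the standard Ceph upstart job names, the monitors can be restarted on each node with:

restart ceph-mon-all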

 

View the disk partitions

ceph-disk list /dev/sdb

or

root@node1:/etc/ceph# ceph-deploy disk --zap-disk list node3:/dev/sdb
[ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy disk --zap-disk list node3:/dev/sdb
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.10 utopic
[ceph_deploy.osd][DEBUG ] Listing disks on node3...
[node3][INFO  ] Running command: ceph-disk list
[node3][DEBUG ] /dev/sda :
[node3][DEBUG ]  /dev/sda1 other, ext2, mounted on /boot
[node3][DEBUG ]  /dev/sda2 other, 0x5
[node3][DEBUG ]  /dev/sda5 other, LVM2_member
[node3][DEBUG ] /dev/sdb other, unknown
[node3][DEBUG ] /dev/sdc other, unknown
[node3][DEBUG ] /dev/sr0 other, unknown

 

Format (zap) the disk

ceph-disk zap /dev/sdb

or

root@node1:/etc/ceph# ceph-deploy disk --zap-disk zap node1:/dev/sdb
[ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy disk --zap-disk zap node1:/dev/sdb
[ceph_deploy.osd][DEBUG ] zapping /dev/sdb on node1
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.10 utopic
[node1][DEBUG ] zeroing last few blocks of device
[node1][INFO  ] Running command: sgdisk --zap-all --clear --mbrtogpt -- /dev/sdb
[node1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[node1][DEBUG ] other utilities.
[node1][DEBUG ] The operation has completed successfully.

 

 

Add OSDs (data and journal partitions on the same disk)

1. Simple method: the command below automatically prepares and activates each OSD, creates and mounts the data directory, and sizes the data and journal partitions on its own (a two-step alternative is sketched after the log below).

root@node1:/etc/ceph# ceph-deploy osd --fs-type ext4 create node1:/dev/sdb node2:/dev/sdb node3:/dev/sdb
[ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy osd --fs-type ext4 create node1:/dev/sdb node2:/dev/sdb node3:/dev/sdb
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node1:/dev/sdb: node2:/dev/sdb: node3:/dev/sdb:
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.10 utopic
[ceph_deploy.osd][DEBUG ] Deploying osd to node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node1][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host node1 disk /dev/sdb journal None activate True
[node1][INFO  ] Running command: ceph-disk-prepare --fs-type ext4 --cluster ceph -- /dev/sdb
[node1][WARNIN] mke2fs 1.42.10 (18-May-2014)
[node1][DEBUG ] The operation has completed successfully.
[node1][DEBUG ] The operation has completed successfully.
[node1][DEBUG ] Creating filesystem with 3931899 4k blocks and 983040 inodes
[node1][DEBUG ] Filesystem UUID: 2832a433-4af5-449c-b57d-41cbdd0bd81b
[node1][DEBUG ] Superblock backups stored on blocks:
[node1][DEBUG ]         32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208
[node1][DEBUG ]
[node1][DEBUG ] Allocating group tables: done
[node1][DEBUG ] Writing inode tables: done
[node1][DEBUG ] Creating journal (32768 blocks): done
[node1][DEBUG ] Writing superblocks and filesystem accounting information: done
[node1][DEBUG ]
[node1][DEBUG ] The operation has completed successfully.
[node1][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Host node1 is now ready for osd use.

[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.10 utopic
[ceph_deploy.osd][DEBUG ] Deploying osd to node2
[node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node2][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host node2 disk /dev/sdb journal None activate True
[node2][INFO  ] Running command: ceph-disk-prepare --fs-type ext4 --cluster ceph -- /dev/sdb
[node2][WARNIN] mke2fs 1.42.10 (18-May-2014)
[node2][DEBUG ] The operation has completed successfully.
[node2][DEBUG ] The operation has completed successfully.
[node2][DEBUG ] Creating filesystem with 3931899 4k blocks and 983040 inodes
[node2][DEBUG ] Filesystem UUID: d168779b-ade5-4120-81d2-03b600ccad96
[node2][DEBUG ] Superblock backups stored on blocks:
[node2][DEBUG ]         32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208
[node2][DEBUG ]
[node2][DEBUG ] Allocating group tables: done
[node2][DEBUG ] Writing inode tables: done
[node2][DEBUG ] Creating journal (32768 blocks): done
[node2][DEBUG ] Writing superblocks and filesystem accounting information: done
[node2][DEBUG ]
[node2][DEBUG ] The operation has completed successfully.
[node2][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Host node2 is now ready for osd use.

[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.10 utopic
[ceph_deploy.osd][DEBUG ] Deploying osd to node3
[node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node3][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host node3 disk /dev/sdb journal None activate True
[node3][INFO  ] Running command: ceph-disk-prepare --fs-type ext4 --cluster ceph -- /dev/sdb
[node3][WARNIN] mke2fs 1.42.10 (18-May-2014)
[node3][DEBUG ] The operation has completed successfully.
[node3][DEBUG ] The operation has completed successfully.
[node3][DEBUG ] Creating filesystem with 3931899 4k blocks and 983040 inodes
[node3][DEBUG ] Filesystem UUID: 5e62b1df-8196-4f16-8966-3049e3be18bc
[node3][DEBUG ] Superblock backups stored on blocks:
[node3][DEBUG ]         32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208
[node3][DEBUG ]
[node3][DEBUG ] Allocating group tables: done
[node3][DEBUG ] Writing inode tables: done
[node3][DEBUG ] Creating journal (32768 blocks): done
[node3][DEBUG ] Writing superblocks and filesystem accounting information: done
[node3][DEBUG ]
[node3][DEBUG ] The operation has completed successfully.
[node3][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Host node3 is now ready for osd use.
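For reference, create is equivalent to running prepare followed by activate; doing it in two explicit steps can make troubleshooting easier. A sketch for node1 only, assuming the data partition ends up as /dev/sdb1:

ceph-deploy osd prepare --fs-type ext4 node1:/dev/sdb

ceph-deploy osd activate node1:/dev/sdb1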

Check the disks

root@node1:/etc/ceph# ceph-disk list
/dev/sda :
 /dev/sda1 other, ext2, mounted on /boot
 /dev/sda2 other, 0x5
 /dev/sda5 other, LVM2_member
/dev/sdb :
 /dev/sdb1 ceph data, active, cluster ceph, osd.1, journal /dev/sdb2
 /dev/sdb2 ceph journal, for /dev/sdb1
/dev/sr0 other, unknown

root@node1:/etc/ceph# ceph -s
    cluster 8f4ceca7-2dbd-4280-92a9-3d0d9e059d84
     health HEALTH_WARN clock skew detected on mon.node2, mon.node3
     monmap e3: 3 mons at {node1=10.1.65.121:6789/0,node2=10.1.65.122:6789/0,node3=10.1.65.123:6789/0}, election epoch 6, quorum 0,1,2 node1,node2,node3
     osdmap e15: 3 osds: 3 up, 3 in
      pgmap v28: 192 pgs, 3 pools, 0 bytes data, 0 objects
            120 MB used, 42497 MB / 44969 MB avail
                  192 active+clean
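To confirm that each OSD landed on the intended host, the CRUSH view can also be checked:

ceph osd tree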

 

Done!

 

Postscript:

root@node1:/home/hyx# ls /var/lib/ceph/osd/ceph-0/

activate.monmap  ceph_fsid        fsid             journal_uuid     lost+found/      ready            superblock       whoami          

active           current/         journal          keyring          magic            store_version    upstart 

 

Ceph exposes three interfaces: the object gateway (RADOSGW), the Ceph filesystem (CephFS), and block storage (RBD). The rest of this post uses RBD.

Create a storage pool

root@node1:/etc/ceph# ceph osd pool create ceph-pool 256 256

pool 'ceph-pool' created
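To double-check the pool and its placement-group count (using the names created above):

ceph osd lspools

ceph osd pool get ceph-pool pg_num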

Create a block device

root@node1:/etc/ceph# rbd create ceph-block --size 20480 --pool ceph-pool
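--size is given in megabytes, so this creates a 20 GB image. It can be verified before mapping:

rbd ls --pool ceph-pool

rbd info ceph-block --pool ceph-pool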

Map the block device locally and mount it

root@node1:/etc/ceph# rbd map ceph-block --pool ceph-pool

 

root@node1:/etc/ceph# rbd showmapped
id pool      image      snap device
0  ceph-pool ceph-block -    /dev/rbd0

 

root@node1:/etc/ceph# mkfs.ext4 /dev/rbd0
mke2fs 1.42.10 (18-May-2014)
Creating filesystem with 5242880 4k blocks and 1310720 inodes
Filesystem UUID: c1263955-3aba-42c2-b132-c82b562bcd8c
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

 

mount /dev/rbd0 /mnt

ls /mnt

lost+found
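A quick sanity check that the mounted device is usable (the test file name here is arbitrary):

df -h /mnt

echo hello > /mnt/testfile && cat /mnt/testfile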

 

Attaching the block device over iSCSI

Part 1: OSD server side

1. Install a TGT package with rbd support

#echo "debhttp://ceph.com/packages/ceph-extras/debian $(lsb_release -sc) main" |sudo tee /etc/apt/sources.list.d/ceph-extras.list

#apt-get install tgt

2. After installation, confirm that tgt supports rbd

# tgtadm --lld iscsi --op show --mode system | grep rbd

   rbd (bsoflags sync:direct)

3. Restart or reload the tgt service

#service tgt reload

or

#service tgt restart

4. Disable the rbd cache, otherwise data loss or corruption may occur

vi /etc/ceph/ceph.conf

[client]

rbd_cache = false

 

5. Install the iSCSI target packages

apt-get install iscsitarget*

6. Edit the iSCSI target configuration file

    $ sudo vi /etc/default/iscsitarget

    ISCSITARGET_ENABLE=true   # change false to true

7. Create a target with tid=1. The IQN is the target's unique identifier on the network; here iqn.2015-09-ubuntu.ceph-node1 is used.

  $sudo tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2015-09-ubuntu.ceph-node1

8. The current targets and LUNs can be listed with:

   $sudo tgtadm --lld iscsi --op show --mode target

9. Add a LUN to the target selected by tid; here /dev/rbd0 is added to the target with tid=1 (an rbd-backstore alternative is sketched at the end of this part):

   $sudo tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/rbd0

10. To make a target reachable by initiators, it must first be bound:

   $sudo tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

11. To delete a target, specified by tid:

   $sudo tgtadm --lld iscsi --op delete --mode target --tid 1

12. Restart the iscsitarget service

On Ubuntu:

   service iscsitarget restart
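Since this tgt build reports rbd support (step 2), the LUN from step 9 could instead point straight at the RADOS image rather than at the kernel-mapped /dev/rbd0. This is only a sketch, under the assumption that tgt's rbd backstore accepts pool/image paths; the pool and image names are the ones created earlier:

   $sudo tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --bstype rbd --backing-store ceph-pool/ceph-block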

Part 2: Client side

1. Install open-iscsi

#apt-get install open-iscsi

2. Start the open-iscsi service

# service open-iscsi restart

 * Unmounting iscsi-backed filesystems                                   [ OK ]
 * Disconnecting iSCSI targets                                           [ OK ]
 * Stopping iSCSI initiator service                                      [ OK ]
 * Starting iSCSI initiator service iscsid                               [ OK ]
 * Setting up iSCSI targets
iscsiadm: No records found                                               [ OK ]
 * Mounting network filesystems

3. Discover the target:

# iscsiadm -m discovery -t st -p 10.1.105.31

10.1.105.31:3260,1 iqn.2014-04.rbdstore.example.com:iscsi

4. Log in to the target (the commands for checking and closing the session are listed after this part):

# iscsiadm -m node --login

Logging in to [iface: default, target: iqn.2014-04.rbdstore.example.com:iscsi, portal: 10.10.2.50,3260] (multiple)
Login to [iface: default, target: iqn.2014-04.rbdstore.example.com:iscsi, portal: 10.10.2.50,3260] successful.

5. Confirm the device is attached (in this example rbd0 is the iSCSI-backed device):

root@node1:/etc/ceph# lsblk
NAME                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                    8:0    0   40G  0 disk
├─sda1                 8:1    0  243M  0 part /boot
├─sda2                 8:2    0    1K  0 part
└─sda5                 8:5    0 39.8G  0 part
  ├─node1--vg-root   252:0    0 35.8G  0 lvm  /
  └─node1--vg-swap_1 252:1    0    4G  0 lvm  [SWAP]
sdb                    8:16   0   80G  0 disk
├─sdb1                 8:17   0   75G  0 part /var/lib/ceph/osd/ceph-0
└─sdb2                 8:18   0    5G  0 part
sr0                   11:0    1 1024M  0 rom
rbd0                 251:0    0   20G  0 disk
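For reference, the open-iscsi session can be inspected and torn down again with the standard iscsiadm commands:

# iscsiadm -m session

# iscsiadm -m node --logout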

 

 
