Installing and Configuring Ceph 0.80.7 from Source on CentOS 6.5


System Environment

This Ceph deployment was carried out on an OpenStack platform: six instances were created on OpenStack, two serving as mon nodes and four as osd nodes.

OpenStack Icehouse

Ceph 0.80.7

CentOS 6.5 x86-64

Ceph Installation

Upgrade the Kernel

Rebuild the kernel with the rbd and ceph options enabled.
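A minimal sketch of the kernel configuration involved, assuming the kernel is rebuilt from source (the option names come from the mainline Kconfig; menu locations may vary by kernel version). Enable CONFIG_BLK_DEV_RBD (Device Drivers -> Block devices -> Rados block device) and CONFIG_CEPH_FS (File systems -> Network File Systems -> Ceph distributed file system), then rebuild and reboot:

#make menuconfig
#make && make modules_install && make install
#reboot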

Install Dependency Packages

#rpm -ivh epel-release-6-8.noarch.rpm
#yum -y install libuuid-devel libblkid-devel libudev-devel keyutils-libs-devel cryptopp-devel fuse-devel google-perftools-devel libedit-devel libatomic_ops-devel snappy-devel leveldb-devel libaio-devel xfsprogs-devel boost-devel python-pip redhat-lsb

Other Configuration

Modify the hosts file as follows:
10.10.200.3 mon1
10.10.200.4 mon2
10.10.200.5 osd1
10.10.200.6 osd2
10.10.200.7 osd3
10.10.200.8 osd4

Disable the firewall
#service iptables stop

Set up passwordless SSH access to each host
#ssh-keygen
#ssh-copy-id root@mon1
#ssh-copy-id root@mon2
#ssh-copy-id root@osd1
#ssh-copy-id root@osd2
#ssh-copy-id root@osd3
#ssh-copy-id root@osd4

Install Ceph

#tar -zxvf ceph-0.80.7.tar.gz
#cd ceph-0.80.7
#CXXFLAGS="-g -O2" ./configure --prefix=/usr/local --sbindir=/sbin --localstatedir=/var --sysconfdir=/etc
#make && make install

Install argparse

#pip install argparse

Configure the Ceph library files

#cp -vf /usr/local/lib/python2.6/site-packages/* /usr/lib64/python2.6/site-packages
#echo /usr/local/lib >/etc/ld.so.conf.d/ceph.conf
#ldconfig
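As a quick sanity check that the newly installed binaries can find their libraries and Python modules, the version can be queried; it should report 0.80.7:

#ceph -v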

Ceph Configuration

Add the Ceph configuration files

#cp ceph-0.80.7/src/sample.ceph.conf /etc/ceph/ceph.conf
#cp ceph-0.80.7/src/sample.fetch_conf /etc/ceph/fetch_conf
#cp ceph-0.80.7/src/init-ceph /etc/init.d/ceph


Modify the Ceph configuration file ceph.conf as follows, then copy it to every node (a distribution sketch follows the configuration below):

[global]
    pid file = /var/run/ceph/$name.pid
    auth cluster required = cephx
    auth service required = cephx
    auth client required = cephx
    keyring = /etc/ceph/keyring.admin
[mon]
    mon data = /mon
[mon.alpha]
    host = mon1
    mon addr = 10.10.200.3:6789
[mon.beta]
    host = mon2
    mon addr = 10.10.200.4:6789
[mds]
[osd]
    osd data = /osd/$name
    osd mkfs type = xfs
    osd journal = /osd/$name/journal
    keyring = /etc/ceph/keyring.$name
    osd crush update on start = false
[osd.0]
    host = osd1
    devs = /dev/vdb
[osd.1]
    host = osd2
    devs = /dev/vdb
[osd.2]
    host = osd3
    devs = /dev/vdb
[osd.3]
    host = osd4
    devs = /dev/vdb
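A minimal sketch of distributing ceph.conf from mon1, assuming passwordless root SSH (set up above) and that the /etc/ceph directory already exists on each node:

#for node in mon2 osd1 osd2 osd3 osd4; do scp /etc/ceph/ceph.conf root@$node:/etc/ceph/; done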

Perform the following operations on each osd node:

osd.0
#mkdir -p /osd/osd.0
#mkfs.xfs /dev/vdb
#mount /dev/vdb /osd/osd.0
osd.1
#mkdir -p /osd/osd.1
#mkfs.xfs /dev/vdb
#mount /dev/vdb /osd/osd.1
osd.2
#mkdir -p /osd/osd.2
#mkfs.xfs /dev/vdb
#mount /dev/vdb /osd/osd.2
osd.3
#mkdir -p /osd/osd.3
#mkfs.xfs /dev/vdb
#mount /dev/vdb /osd/osd.3
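To keep the data partition mounted across reboots, a matching entry can be added to /etc/fstab on each osd node; a sketch for osd1 (adjust the mount point on the other nodes):

/dev/vdb    /osd/osd.0    xfs    defaults    0 0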

Start Ceph

Initialize Ceph

#mkcephfs -a -c /etc/ceph/ceph.conf

Start the Ceph services

#/etc/init.d/ceph -a start

Ceph status

[root@mon1 ~]# ceph -s
    cluster 1cf40c50-7283-4653-9fe1-56a633df5d24
     health HEALTH_OK
     monmap e1: 2 mons at {alpha=10.10.200.3:6789/0,beta=10.10.200.4:6789/0}, election epoch 12, quorum 0,1 alpha,beta
     osdmap e36: 4 osds: 4 up, 4 in
      pgmap v102: 768 pgs, 3 pools, 132 bytes data, 2 objects
            20639 MB used, 579 GB / 599 GB avail
                 768 active+clean


OSD status

[root@mon1 ~]# ceph osd tree
# id    weight  type name       up/down reweight
-1      4       root default
-3      4               rack unknownrack
-2      1                       host osd1
0       1                               osd.0   up      1
-4      1                       host osd2
1       1                               osd.1   up      1
-5      1                       host osd3
2       1                               osd.2   up      1
-6      1                       host osd4
3       1                               osd.3   up      1

Ceph Installation Troubleshooting

Q1: While starting the ceph osds, the following errors appeared: df: `/osd/osd.0/.': No such file or directory, df: no file systems processed
Solution:
Add the following line to the [osd] section of ceph.conf:
osd crush update on start = false

Q2: After Ceph started normally, ceph -s showed many pgs in a degraded state.
Solution:
#ceph osd getcrushmap -o /tmp/crushmap
#crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
#vi /tmp/crushmap.txt
Find the line "step chooseleaf firstn 0 type host" and change it to "step chooseleaf firstn 0 type osd".
#crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new
#ceph osd setcrushmap -i /tmp/crushmap.new
Restart Ceph:
#/etc/init.d/ceph -a restart

Q3: After Ceph started normally, ceph -s showed the warning HEALTH_WARN clock skew detected on mon.beta.
Solution:
Synchronize the time on the two mon servers. Since we run our own time server, the internal time server is used:
#ntpdate -u 10.10.200.163
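To keep the clocks in sync afterwards, a periodic resync can be scheduled on both mon nodes; a minimal sketch using cron (10.10.200.163 is the internal time server mentioned above):

#echo '*/10 * * * * root /usr/sbin/ntpdate -u 10.10.200.163 >/dev/null 2>&1' >> /etc/crontab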





 