Deploying Ceph 10.2 on Ubuntu 14.04

Background:
1. Ubuntu 14.04, using the Aliyun mirror plus the Mitaka repository.
2. Three nodes, compute1/compute2/compute3, with IPs 10.1.14.22/23/24 (see the /etc/hosts sketch after this list); ceph-deploy is not used here, so the controller node is not needed for now.
3. compute1 is the mon node; each of the three nodes has two data disks, giving 6 OSDs in total.
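The later steps copy files between the nodes by hostname (scp compute1:...), so the three machines should be able to resolve each other's names. A minimal /etc/hosts sketch for the layout above, assuming these are the actual hostnames:

10.1.14.22  compute1
10.1.14.23  compute2
10.1.14.24  compute3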


Steps:
1. On all three machines: apt-get install ceph ceph-mds ceph-common ceph-fs-common gdisk
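A quick sanity check, not part of the original steps, to confirm the Jewel packages were actually picked up from the configured repositories:

ceph --version    # expect a 10.2.x (Jewel) version string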
2. On compute1:
2.1 
vim /etc/ceph/ceph.conf
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon initial members = compute1
mon host = 10.1.14.22
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024 
rbd default features = 1  # enable only layering, so the Ubuntu 14.04 kernel rbd client can map images; see http://docs.ceph.com/docs/master/release-notes/


[osd]
osd max object name len = 256     # required because the OSDs here sit on ext4, which limits name/xattr lengths; see http://docs.ceph.com/docs/master/release-notes/
osd max object namespace len = 64 # same ext4 limitation; see http://docs.ceph.com/docs/master/release-notes/
rbd default features = 1          # see http://docs.ceph.com/docs/master/release-notes/
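The fsid identifies the cluster and has to be consistent everywhere it appears: this file, the monmaptool command, and the ceph-disk prepare commands below. When building your own cluster you would normally generate a fresh UUID for it and paste that value in:

uuidgen    # generates a new UUID to use as fsid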


2.2:
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
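# Optional, from the upstream manual-deployment guide: import the admin keyring into the
# mon keyring so the client.admin key ends up in the monitor's initial auth database.
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring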
monmaptool --create --add compute1 10.1.14.22 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
mkdir /var/lib/ceph/mon/ceph-compute1
ceph-mon --mkfs -i compute1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
touch /var/lib/ceph/mon/ceph-compute1/done
touch /var/lib/ceph/mon/ceph-compute1/upstart
chown -R ceph:ceph /var/lib/ceph/
start ceph-mon id=compute1
Verify with: ceph -s
The mon node deployment is complete.
3. Deploy the OSD nodes
compute1:
ceph-disk prepare --cluster ceph --cluster-uuid a7f64266-0894-4f1e-a635-d0aeaca0e993 --fs-type ext4 /dev/vdb --zap-disk
ceph-disk activate /dev/vdb1
ceph-disk prepare --cluster ceph --cluster-uuid a7f64266-0894-4f1e-a635-d0aeaca0e993 --fs-type ext4 /dev/vdc --zap-disk
ceph-disk activate /dev/vdc1
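At this point the two OSDs on compute1 should have registered with the monitor; a quick check, using standard commands rather than anything from the original write-up:

ceph osd tree    # the two new OSDs should appear under host compute1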


compute2:
scp compute1:/etc/ceph/* /etc/ceph
scp compute1:/var/lib/ceph/bootstrap-osd/ceph.keyring /var/lib/ceph/bootstrap-osd
ceph-disk prepare --cluster ceph --cluster-uuid a7f64266-0894-4f1e-a635-d0aeaca0e993 --fs-type ext4 /dev/vdb --zap-disk
ceph-disk activate /dev/vdb1
ceph-disk prepare --cluster ceph --cluster-uuid a7f64266-0894-4f1e-a635-d0aeaca0e993 --fs-type ext4 /dev/vdc --zap-disk
ceph-disk activate /dev/vdc1


ceph -s still does not report a healthy cluster at this point: the default pool size is 3 and the default CRUSH rule places each replica of a PG on a different host, so at least three OSD hosts are required.
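To see where this comes from, you can inspect the replica count and the CRUSH placement with standard commands (not from the original):

ceph osd pool get rbd size    # default replica count is 3 in Jewel
ceph osd tree                 # the default CRUSH rule puts each replica on a different host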


Deploy compute3 with exactly the same steps as compute2, then verify:
root@compute1:~# rbd ls
root@compute1:~# rbd create aa --size  4096
root@compute1:~# rbd ls
aa
root@compute1:~# rbd map aa
/dev/rbd1
root@compute1:~# mkfs.ext4 /dev/rbd1
root@compute1:~# mkdir rbd
root@compute1:~# mount /dev/rbd1 rbd
root@compute1:~# cd rbd/
root@compute1:~/rbd# dd if=/dev/zero of=aa.data bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 14.488 s, 72.4 MB/s
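To tear the test image back down afterwards (not part of the original test), the usual sequence would be:

cd ~
umount ~/rbd
rbd unmap /dev/rbd1
rbd rm aa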