Ceph 0.94 Installation


Install Ceph

Document: http://docs.ceph.com/docs/master/start/quick-start-preflight/#rhel-centos

Configure the system
systemctl stop firewalld.service
systemctl disable firewalld.service
hostnamectl set-hostname ceph-osd-node1
timedatectl set-timezone Asia/Shanghai
yum install chrony -y
systemctl enable chronyd.service
systemctl start chronyd.service
chronyc sources
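To verify that chrony is actually syncing and that the timezone change took effect, two quick checks (output varies per host):

# chronyc tracking
# timedatectl status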
For RHEL/CentOS
yum install centos-release-openstack-mitaka
yum install -y ftp://ftp.linux.kiev.ua/puias/updates/7.1/en/os/x86_64/python-setuptools-0.9.8-4.el7.noarch.rpm
yum install ceph-deploy -y
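Before continuing, it is worth confirming that the repositories providing ceph and ceph-deploy are now visible; the repo names match the ones that appear in the purge log later in this post, though your mirror layout may differ:

# yum repolist | grep -i -E 'ceph|openstack'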
Create Ceph Deploy User
# useradd ceph
# passwd ceph
Changing password for user ceph.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.
# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
# chmod 0440 /etc/sudoers.d/ceph
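Since ceph-deploy relies on passwordless sudo for this user, a quick sanity check of the new sudoers entry is worthwhile (expected output shown):

# su - ceph
$ sudo whoami
root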
Passwordless SSH login for the ceph user
# su - ceph
# pwd
/home/ceph
# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph/.ssh/id_rsa):
Created directory '/home/ceph/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/ceph/.ssh/id_rsa.
Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.
The key fingerprint is:
a5:bd:fe:c7:57:e8:46:2d:71:a0:c7:1f:dc:0a:d1:8e ceph@ceph-osd-node1
The key's randomart image is:
+--[ RSA 2048]----+
|             .   |
|            . o  |
|          .  *...|
|         +  E =oo|
|        S .  o B.|
|           .  = +|
|          .  + ..|
|         .    = .|
|          ...o . |
+-----------------+
# ll .ssh/
total 8
-rw-------. 1 ceph ceph 1675 Mar 16 15:46 id_rsa
-rw-r--r--. 1 ceph ceph  401 Mar 16 15:46 id_rsa.pub
ssh-copy-id {username}@node1
ssh-copy-id {username}@node2
ssh-copy-id {username}@node3
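The preflight guide also recommends adding the nodes to the ceph user's ~/.ssh/config, so ceph-deploy can log in without passing --username every time; a sketch using this post's node names:

Host ceph-osd-node1
    Hostname ceph-osd-node1
    User ceph
Host ceph-osd-node2
    Hostname ceph-osd-node2
    User ceph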
Disable Firewall
# sudo systemctl stop firewalld.service
# sudo systemctl disable firewalld.service
TTY config

On the Ceph nodes, change "Defaults requiretty" to "Defaults:ceph !requiretty" so that ceph-deploy can run sudo without a TTY; use sudo visudo to locate and edit it.

# visudo
# Defaults    requiretty
Defaults:ceph !requiretty

Note: CentOS 7 does not ship with a "Defaults requiretty" line; simply add "Defaults:ceph !requiretty".

Create A Cluster

Run the following on the admin node only.

For a test environment, first purge any data left over from previous runs:
$ cd my-cluster/
$ ceph-deploy purgedata ceph-osd-node1 ceph-osd-node2
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.31): /bin/ceph-deploy purgedata ceph-osd-node1 ceph-osd-node2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f52f20cb440>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['ceph-osd-node1', 'ceph-osd-node2']
[ceph_deploy.cli][INFO  ]  func                          : <function purgedata at 0x7f52f2910578>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.install][DEBUG ] Purging data from cluster ceph hosts ceph-osd-node1 ceph-osd-node2
[ceph-osd-node1][DEBUG ] connection detected need for sudo
[ceph-osd-node1][DEBUG ] connected to host: ceph-osd-node1
[ceph-osd-node1][DEBUG ] detect platform information from remote host
[ceph-osd-node1][DEBUG ] detect machine type
[ceph-osd-node1][DEBUG ] find the location of an executable
[ceph-osd-node2][DEBUG ] connection detected need for sudo
[ceph-osd-node2][DEBUG ] connected to host: ceph-osd-node2
[ceph-osd-node2][DEBUG ] detect platform information from remote host
[ceph-osd-node2][DEBUG ] detect machine type
[ceph-osd-node2][DEBUG ] find the location of an executable
[ceph-osd-node1][DEBUG ] connection detected need for sudo
[ceph-osd-node1][DEBUG ] connected to host: ceph-osd-node1
[ceph-osd-node1][DEBUG ] detect platform information from remote host
[ceph-osd-node1][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
[ceph-osd-node1][INFO  ] purging data on ceph-osd-node1
[ceph-osd-node1][INFO  ] Running command: sudo rm -rf --one-file-system -- /var/lib/ceph
[ceph-osd-node1][INFO  ] Running command: sudo rm -rf --one-file-system -- /etc/ceph/
[ceph-osd-node2][DEBUG ] connection detected need for sudo
[ceph-osd-node2][DEBUG ] connected to host: ceph-osd-node2
[ceph-osd-node2][DEBUG ] detect platform information from remote host
[ceph-osd-node2][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
[ceph-osd-node2][INFO  ] purging data on ceph-osd-node2
[ceph-osd-node2][INFO  ] Running command: sudo rm -rf --one-file-system -- /var/lib/ceph
[ceph-osd-node2][INFO  ] Running command: sudo rm -rf --one-file-system -- /etc/ceph/
Remove old authentication keys (forgetkeys deletes the keyrings from the local directory):
$ ceph-deploy forgetkeys
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.31): /bin/ceph-deploy forgetkeys
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7efc20899ab8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function forgetkeys at 0x7efc210dec08>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
Purge the Ceph packages:
$ ceph-deploy purge ceph-osd-node1 ceph-osd-node2
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.31): /bin/ceph-deploy purge ceph-osd-node1 ceph-osd-node2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f9abb797d88>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['ceph-osd-node1', 'ceph-osd-node2']
[ceph_deploy.cli][INFO  ]  func                          : <function purge at 0x7f9abbfde500>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.install][INFO  ] note that some dependencies *will not* be removed because they can cause issues with qemu-kvm
[ceph_deploy.install][INFO  ] like: librbd1 and librados2
[ceph_deploy.install][DEBUG ] Purging on cluster ceph hosts ceph-osd-node1 ceph-osd-node2
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-osd-node1 ...
[ceph-osd-node1][DEBUG ] connection detected need for sudo
[ceph-osd-node1][DEBUG ] connected to host: ceph-osd-node1
[ceph-osd-node1][DEBUG ] detect platform information from remote host
[ceph-osd-node1][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
[ceph-osd-node1][INFO  ] Purging Ceph on ceph-osd-node1
[ceph-osd-node1][INFO  ] Running command: sudo yum -y -q remove ceph ceph-release ceph-common ceph-radosgw
[ceph-osd-node1][WARNIN] No Match for argument: ceph
[ceph-osd-node1][WARNIN] No Match for argument: ceph-release
[ceph-osd-node1][WARNIN] No Match for argument: ceph-common
[ceph-osd-node1][WARNIN] No Match for argument: ceph-radosgw
[ceph-osd-node1][INFO  ] Running command: sudo yum clean all
[ceph-osd-node1][DEBUG ] Loaded plugins: fastestmirror
[ceph-osd-node1][DEBUG ] Cleaning repos: base centos-ceph-hammer centos-openstack-mitaka centos-qemu-ev
[ceph-osd-node1][DEBUG ]               : extras updates
[ceph-osd-node1][DEBUG ] Cleaning up everything
[ceph-osd-node1][DEBUG ] Cleaning up list of fastest mirrors
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-osd-node2 ...
[ceph-osd-node2][DEBUG ] connection detected need for sudo
[ceph-osd-node2][DEBUG ] connected to host: ceph-osd-node2
[ceph-osd-node2][DEBUG ] detect platform information from remote host
[ceph-osd-node2][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
[ceph-osd-node2][INFO  ] Purging Ceph on ceph-osd-node2
[ceph-osd-node2][INFO  ] Running command: sudo yum -y -q remove ceph ceph-release ceph-common ceph-radosgw
[ceph-osd-node2][WARNIN] No Match for argument: ceph
[ceph-osd-node2][WARNIN] No Match for argument: ceph-release
[ceph-osd-node2][WARNIN] No Match for argument: ceph-common
[ceph-osd-node2][WARNIN] No Match for argument: ceph-radosgw
[ceph-osd-node2][INFO  ] Running command: sudo yum clean all
[ceph-osd-node2][DEBUG ] Loaded plugins: fastestmirror
[ceph-osd-node2][DEBUG ] Cleaning repos: base centos-ceph-hammer centos-openstack-mitaka centos-qemu-ev
[ceph-osd-node2][DEBUG ]               : extras updates
[ceph-osd-node2][DEBUG ] Cleaning up everything
[ceph-osd-node2][DEBUG ] Cleaning up list of fastest mirrors
Create Cluster
# ceph-deploy new ceph-osd-node1    # initial-monitor-node(s)

This command initializes ceph-osd-node1 as the monitor node and also generates the Ceph configuration file, ceph.conf.

Edit ceph.conf and add the following line:
[global]
osd_pool_default_size = 2
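For reference, a minimal ceph.conf for this two-node test cluster might end up looking like the sketch below; the fsid and monitor address are placeholders that ceph-deploy new fills in from your own environment:

[global]
fsid = <generated-by-ceph-deploy>        # placeholder, written by ceph-deploy new
mon_initial_members = ceph-osd-node1
mon_host = <ip-of-ceph-osd-node1>        # placeholder monitor address
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 2                # two replicas suit a two-node test cluster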
Install Ceph
Install the Ceph RPMs, either manually on each node:
yum -y install ceph ceph-radosgw
or from the admin node with ceph-deploy:
ceph-deploy install ceph-osd-node1 ceph-osd-node2
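A quick way to confirm both nodes ended up on the same release; you would expect a 0.94.x (Hammer) build here, though the exact version string will differ:

# ssh ceph-osd-node1 ceph --version
# ssh ceph-osd-node2 ceph --version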
Add the initial monitor and gather the keys:
ceph-deploy mon create-initial
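When this completes, the local working directory should contain the gathered keyrings, roughly as below (names per the official quick start; the rgw bootstrap key only appears on sufficiently new releases):

$ ls *.keyring
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-rgw.keyring  ceph.client.admin.keyring  ceph.mon.keyring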
Add 4 OSDs
# ssh ceph-osd-node1
# sudo mkdir /var/local/osd0
# sudo mkdir /var/local/osd1
# sudo chmod 777 /var/local/osd0 /var/local/osd1    # prevents the later "osd activate" step from being rejected with permission errors
# exit
# ssh ceph-osd-node2
# sudo mkdir /var/local/osd2
# sudo mkdir /var/local/osd3
# sudo chmod 777 /var/local/osd2 /var/local/osd3
# exit
ceph-deploy osd prepare ceph-osd-node1:/var/local/osd0 ceph-osd-node1:/var/local/osd1
ceph-deploy osd prepare ceph-osd-node2:/var/local/osd2 ceph-osd-node2:/var/local/osd3
ceph-deploy osd activate ceph-osd-node1:/var/local/osd0 ceph-osd-node1:/var/local/osd1
ceph-deploy osd activate ceph-osd-node2:/var/local/osd2 ceph-osd-node2:/var/local/osd3
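If you want to confirm that prepare/activate actually populated a directory, listing it should show the OSD metadata files; the file names below reflect Hammer's directory-backed filestore layout and are a rough sketch, not guaranteed output:

# ls /var/local/osd0
activate.monmap  active  ceph_fsid  current  fsid  keyring  magic  ready  store_version  superblock  whoami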
Use ceph-deploy to copy the configuration file and admin key to the admin node and the Ceph nodes:
ceph-deploy admin  ceph-osd-node1 ceph-osd-node2
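Per the official quick start, make the admin keyring readable on the nodes and then check cluster health; once the four OSDs are in, this is what you are aiming for:

# sudo chmod +r /etc/ceph/ceph.client.admin.keyring
# ceph health
HEALTH_OK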

Delete Ceph

yum erase ceph-deploy ceph-mon ceph-osd ceph ceph-radosgw python-cephfs ceph-common ceph-base ceph-mds libcephfs1 ceph-selinux
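Erasing the packages leaves cluster data and configuration on disk. To finish the job the same way ceph-deploy purgedata does (see its log earlier in this post), remove the data directories as well; this is destructive, so run it only on hosts you really mean to wipe:

# rm -rf --one-file-system -- /var/lib/ceph
# rm -rf --one-file-system -- /etc/ceph/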