Deploying an OpenStack Juno (ML2+VLAN) ALL-IN-ONE Environment on CentOS 7


Version=Juno-2014.2.1 CentOS 7 minimal


Author: Yiming (oyym)   QQ: 517053733
Email: oyym.mv@gmail.com
URL: http://blog.csdn.net/oyym_mv/article/details/42804223

This walkthrough deploys an OpenStack Juno ALL-IN-ONE environment on a minimal CentOS 7 installation. During deployment it is easy to make a configuration mistake that is hard to track down, forcing a rebuild of the test environment, so we build everything inside a virtual machine. A VM-based test environment has several advantages:

  1. It can be copied and backed up at any time. For example, back up once after the base environment is configured, then again after each component is successfully deployed; if a later step goes wrong, you can roll back to the most recent backup instead of starting over.
  2. Physical NICs are a limited resource, and some network topologies are awkward to reproduce on real hardware, whereas a VM can be given as many virtual NICs as needed.

First we need a physical host for the VM test environment. It must have Internet access so the VM can download the required tools and cloud-platform packages. Installing the host OS and creating the VM are not covered here. The environment is summarized in the figure below:


The VM (CentOS 7) in the figure above is the machine used below to deploy and test the OpenStack Juno-2014.2.1 platform. The VM needs a spare partition reserved for the LVM-backed block storage service. If the VM image has no spare partition, the qemu-img resize command can be used to grow the vda disk so a new partition can be created.
OpenStack Juno has 11 core components. This deployment excludes the relatively independent object storage service (Swift), the database service (Trove), and the big-data service (Sahara), which has only recently joined the core set. We deploy 8 core components: identity (Keystone), image management (Glance, with the file backend), networking (Neutron, in ML2+VLAN mode), compute (Nova), block storage (Cinder, with the LVM backend), the web UI (Horizon), metering (Ceilometer), and orchestration (Heat). With the goals stated, we now configure the VM step by step.

Section 1: Base environment setup

The VM runs a minimal CentOS 7 x86_64 installation. Perform the basic system and network configuration as follows.

1. Enable package caching; the cached packages can later be used to build a custom YUM repository

vi /etc/yum.conf
[main]
...
cachedir=/var/cache/yum/packages
keepcache=1
2. Configure the network (3 interfaces: management, external, and data networks)

vi /etc/sysconfig/network-scripts/ifcfg-eth0   # management network; leave other parameters unchanged
DEVICE=eth0
NAME=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.16.100.101
PREFIX=24

vi /etc/sysconfig/network-scripts/ifcfg-eth1   # external network; leave other parameters unchanged
DEVICE=eth1
NAME=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

vi /etc/sysconfig/network-scripts/ifcfg-eth2   # data (service) network; leave other parameters unchanged. In an ALL-IN-ONE environment this interface is optional
DEVICE=eth2
NAME=eth2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
3. Configure the hostname and local name resolution

vi /etc/hosts
...
172.16.100.101 controller controller.domain.com

vi /etc/hostname
controller.domain.com

# Log in again over ssh; the hostname has changed
[root@controller ~]#
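The controller entry above must resolve before the rest of the walkthrough works. A small idempotent helper can add it without duplicating the line on repeated runs (a sketch; the target file is passed as a parameter so the snippet can be tried on a scratch copy before touching /etc/hosts):

```shell
# add_host_entry FILE -- append the controller mapping only if it is absent.
add_host_entry() {
    entry='172.16.100.101 controller controller.domain.com'
    # -w matches "controller" as a whole word, so re-running is a no-op.
    grep -qw controller "$1" || printf '%s\n' "$entry" >> "$1"
}

# On the real node:
#   add_host_entry /etc/hosts
```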
4. Configure DNS; package installation needs working name resolution

vi /etc/NetworkManager/NetworkManager.conf
[main]
...
dns=none

systemctl restart NetworkManager.service

vi /etc/resolv.conf
...
nameserver 114.114.114.114
nameserver 8.8.8.8

# Verify that name resolution works
ping www.baidu.com
PING www.a.shifen.com (115.239.211.112) 56(84) bytes of data.
64 bytes from 115.239.211.112: icmp_seq=1 ttl=53 time=110 ms
64 bytes from 115.239.211.112: icmp_seq=3 ttl=53 time=114 ms
64 bytes from 115.239.211.112: icmp_seq=4 ttl=53 time=10.8 ms
5. Set SELinux to permissive mode

vi /etc/selinux/config
...
SELINUX=permissive
6. Back up the VM image

a. Shut down the VM
shutdown -h now   # or: virsh destroy juno
b. Back up the VM
cp juno.img juno_pure.img
c. Start the VM again to prepare for the next section
virsh start juno
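This shutdown / copy / restart cycle repeats after every section, so it is convenient to wrap the copy step in a tiny helper (a sketch; the image name `juno.img` and the per-milestone tag match this walkthrough, and the `virsh` calls are left as comments so the copy logic stands alone):

```shell
# backup_image SRC TAG -- snapshot a VM disk image under a per-milestone name,
# e.g. backup_image juno.img keystone  ->  juno_keystone.img
# Run it only while the VM is shut down.
backup_image() {
    src="$1"
    tag="$2"
    dst="${src%.img}_${tag}.img"
    cp "$src" "$dst" && echo "backed up to $dst"
}

# Typical use on the host:
#   virsh shutdown juno      # or: virsh destroy juno
#   backup_image juno.img keystone
#   virsh start juno
```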

Section 2: Time service (NTP), Juno package repository, database (MySQL/MariaDB), and message queue (RabbitMQ)

1. Install and configure the NTP service

1) Install the NTP package
yum install ntp -y
2) Edit the configuration file /etc/ntp.conf
vi /etc/ntp.conf
...
# Replace 172.16.100.1 with your NTP server's address and comment out the servers you do not need
server 172.16.100.1 iburst
restrict -4 default kod notrap nomodify
restrict -6 default kod notrap nomodify
3) Enable at boot, start the service, and check its status
systemctl enable ntpd.service
systemctl start ntpd.service
systemctl status ntpd.service
2. Configure the OpenStack Juno package repositories
1) Install the yum priorities plugin
yum install yum-plugin-priorities -y
2) Configure the EPEL and OpenStack Juno repositories
yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm -y
yum install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm -y
3) Inspect the repo file to confirm the configuration
cat /etc/yum.repos.d/rdo-release.repo
[openstack-juno]
name=OpenStack Juno Repository
baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/
enabled=1
skip_if_unavailable=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Juno
4) Upgrade the system; if the upgrade leaves multiple kernels installed, remove the old ones
yum provides '*/applydeltarpm'
yum upgrade -y
5) Install automatic SELinux policy management for OpenStack services (this takes a while)
yum install openstack-selinux -y
3. Database (MySQL/MariaDB) deployment

1) Install the database packages and the Python driver
yum install mariadb mariadb-server MySQL-python -y
2) Edit the configuration file /etc/my.cnf
a. In the [mysqld] section, set bind-address to the management address 172.16.100.101 so that other cloud nodes can reach the database service
[mysqld]
...
bind-address = 172.16.100.101
b. In the [mysqld] section, tune the service and set the character set to UTF-8
[mysqld]
...
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
3) Finish the database setup
a. Enable at boot and start the service
systemctl enable mariadb.service
systemctl start mariadb.service
systemctl status mariadb.service
b. Secure the installation
mysql_secure_installation
Enter current password for root (enter for none): [Enter]
Set root password? [Y/n] Y
New password: openstack
Re-enter new password: openstack
Remove anonymous users? [Y/n] Y
Disallow root login remotely? [Y/n] n
Remove test database and access to it? [Y/n] Y
Reload privilege tables now? [Y/n] Y
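A quick way to confirm that the [mysqld] options above actually made it into the file is a small grep helper (a sketch, demonstrated here on a sample fragment; point CNF at /etc/my.cnf on the real node):

```shell
# Build a sample my.cnf fragment to check against; replace with the real path.
CNF="$(mktemp)"
cat > "$CNF" <<'EOF'
[mysqld]
bind-address = 172.16.100.101
default-storage-engine = innodb
character-set-server = utf8
EOF

check_opt() {
    # Succeeds if the option appears uncommented at the start of a line.
    if grep -Eq "^[[:space:]]*$1" "$CNF"; then
        echo "OK: $1"
    else
        echo "MISSING: $1"
    fi
}

check_opt bind-address
check_opt default-storage-engine
check_opt character-set-server
```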
4. Message queue (RabbitMQ) deployment

1) Install the package
yum install rabbitmq-server -y
2) Enable at boot and start the service
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
systemctl status rabbitmq-server.service
3) [Optional] Change the guest user's password
rabbitmqctl change_password guest guest
# The password is left unchanged in this walkthrough, so this step is skipped
5. Back up the VM image

a. Shut down the VM
shutdown -h now   # or: virsh destroy juno
b. Back up the VM (use a distinct name so the Section 1 backup is not overwritten)
cp juno.img juno_base.img
c. Start the VM again to prepare for the next section
virsh start juno

Section 3: Identity service (Keystone) deployment, configuration, and verification

1. Preparation (create the Keystone database)
1) Connect to the database with the mysql client
mysql -u root -popenstack
MariaDB [(none)]>
2) Create the keystone database
CREATE DATABASE keystone;
3) Grant privileges on it
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystoneDB';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystoneDB';
4) Exit the database session
MariaDB [(none)]> exit
2. Deploy and configure the Keystone service
1) Install the identity service packages
yum install openstack-keystone python-keystoneclient -y
# Back up the original configuration files before editing
mkdir /etc/keystone/.ori
cp -r /etc/keystone/* /etc/keystone/.ori/
rm -f /etc/keystone/default_catalog.templates
2) Edit the configuration file /etc/keystone/keystone.conf as follows
a. In [DEFAULT], set the bootstrap admin token (any string will do)
[DEFAULT]
...
admin_token = openstack_admin_token
b. In [database], configure database access
[database]
...
connection = mysql://keystone:keystoneDB@controller/keystone
c. In [token], configure the UUID token provider and SQL persistence driver
[token]
...
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token
3) Create certificates and keys, and restrict access to the related files
keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
chown -R keystone:keystone /var/log/keystone
chown -R keystone:keystone /etc/keystone/ssl
chmod -R o-rwx /etc/keystone/ssl
4) Populate the identity service database
su -s /bin/sh -c "keystone-manage db_sync" keystone
3. Finish the identity service setup
1) Enable at boot and start the service
systemctl enable openstack-keystone.service
systemctl start openstack-keystone.service
systemctl status openstack-keystone.service
2) Add a crontab job that periodically flushes expired tokens
(crontab -l -u keystone 2>&1 | grep -q token_flush) || \
echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone
4. Create tenants, users, and roles (with verification)
1) Export the bootstrap environment variables
export OS_SERVICE_TOKEN=openstack_admin_token
export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
2) Create the admin tenant, user, and role
a. Create the admin tenant
keystone tenant-create --name admin --description "Admin Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |           Admin Tenant           |
|   enabled   |               True               |
|      id     | 7bf30749e195436c899d42dd879505bb |
|     name    |              admin               |
+-------------+----------------------------------+
b. Create the admin user
keystone user-create --name admin --pass admin --email admin@ostack.com
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |         admin@ostack.com         |
| enabled  |               True               |
|    id    | aad928d305c44810b215f059168b44f3 |
|   name   |              admin               |
| username |              admin               |
+----------+----------------------------------+
c. Create the admin role
keystone role-create --name admin
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | a6a3766bf320473095ef4509fbf5e06a |
|   name   |              admin               |
+----------+----------------------------------+
d. Grant the admin role to the admin user on the admin tenant
keystone user-role-add --user admin --tenant admin --role admin
3) Create the demo tenant and user
a. Create the demo tenant
keystone tenant-create --name demo --description "Demo Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |           Demo Tenant            |
|   enabled   |               True               |
|      id     | 9f77ec788b91475b869abba7f1091017 |
|     name    |               demo               |
+-------------+----------------------------------+
b. Create the demo user; the _member_ role is created and granted automatically
keystone user-create --name demo --tenant demo --pass demo --email demo@ostack.com
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |         demo@ostack.com          |
| enabled  |               True               |
|    id    | f6895f5951fe4cdd81878195e15ef3f2 |
|   name   |               demo               |
| tenantId | 9f77ec788b91475b869abba7f1091017 |
| username |               demo               |
+----------+----------------------------------+

5. Create the service tenant, the service catalog entry, and the API endpoints

1) Create the service tenant used by the management services
keystone tenant-create --name service --description "Service Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |          Service Tenant          |
|   enabled   |               True               |
|      id     | ec2111f15290479f85567a93db505613 |
|     name    |             service              |
+-------------+----------------------------------+
2) Create the identity service catalog entry
keystone service-create --name keystone --type identity --description "OpenStack Identity"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |        OpenStack Identity        |
|   enabled   |               True               |
|      id     | fd5d6decc3e04f798cb782c9a1d62deb |
|     name    |             keystone             |
|     type    |             identity             |
+-------------+----------------------------------+
3) Create the Keystone API endpoint
keystone endpoint-create --service-id $(keystone service-list | awk '/ identity / {print $2}') \
                         --publicurl http://controller:5000/v2.0 \
                         --internalurl http://controller:5000/v2.0 \
                         --adminurl http://controller:35357/v2.0 \
                         --region regionOne
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |   http://controller:35357/v2.0   |
|      id     | c7f0c924de324b6ebbd9fd80ccf8f79f |
| internalurl |   http://controller:5000/v2.0    |
|  publicurl  |   http://controller:5000/v2.0    |
|    region   |            regionOne             |
|  service_id | fd5d6decc3e04f798cb782c9a1d62deb |
+-------------+----------------------------------+
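The `awk '/ identity / {print $2}'` trick above extracts the service ID from the `keystone service-list` table: awk splits each row on whitespace, so `$2` is the first cell after the leading pipe. A quick way to see what it matches (a sketch, using a canned table in place of live output):

```shell
# Sample output in the shape `keystone service-list` prints.
sample='+----------------------------------+----------+----------+--------------------+
|                id                |   name   |   type   |    description     |
+----------------------------------+----------+----------+--------------------+
| fd5d6decc3e04f798cb782c9a1d62deb | keystone | identity | OpenStack Identity |
+----------------------------------+----------+----------+--------------------+'

# / identity / matches only the data row; $2 is the id column's value.
service_id="$(printf '%s\n' "$sample" | awk '/ identity / {print $2}')"
echo "$service_id"   # -> fd5d6decc3e04f798cb782c9a1d62deb
```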

6. Unset the bootstrap variables and create credential files for the admin and demo users

unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT

vi adminrc.sh
#!/bin/bash
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v2.0

vi demorc.sh
#!/bin/bash
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:35357/v2.0
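Writing the two credential files by hand is error-prone; the same result can be scripted with a here-document (a sketch; the user names and passwords match the ones chosen earlier in this walkthrough):

```shell
# make_rc USER PASSWORD -- generate a credentials file named <user>rc.sh.
make_rc() {
    user="$1"
    password="$2"
    out="${user}rc.sh"
    cat > "$out" <<EOF
#!/bin/bash
export OS_TENANT_NAME=$user
export OS_USERNAME=$user
export OS_PASSWORD=$password
export OS_AUTH_URL=http://controller:35357/v2.0
EOF
    chmod +x "$out"
}

make_rc admin admin
make_rc demo demo
# Load one before running CLI commands:
#   source adminrc.sh
```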

7. Back up the VM image

a. Shut down the VM
shutdown -h now   # or: virsh destroy juno
b. Back up the VM
cp juno.img juno_keystone.img
c. Start the VM again to prepare for the next section
virsh start juno

Section 4: Image service (Glance) deployment, configuration, and verification

The OpenStack image service lets users discover, register, and retrieve virtual machine image templates. It exposes a REST API for querying image metadata and fetching the images themselves.

1. Preparation

1) Create the image service database
a. Connect to the server with the mysql client
mysql -u root -popenstack
MariaDB [(none)]>
b. Create the glance database
CREATE DATABASE glance;
c. Grant privileges on it
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glanceDB';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glanceDB';
d. Exit the database session
MariaDB [(none)]> exit
2) Load the admin credentials
a. The adminrc.sh file holds the admin user's environment variables and is reused in later sections
vi adminrc.sh
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v2.0
chmod +x adminrc.sh
b. Load the admin environment variables
source adminrc.sh
3) Create the Glance service catalog entry and API endpoints in the identity service
a. Create the glance user
keystone user-create --name glance --pass glance
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | fa7cd8bfd95e46878e147fa3f7715a20 |
|   name   |              glance              |
| username |              glance              |
+----------+----------------------------------+
b. Add the glance user to the service tenant with the admin role
keystone user-role-add --user glance --tenant service --role admin
c. Create the glance service catalog entry
keystone service-create --name glance --type image --description "OpenStack Image Service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Image Service      |
|   enabled   |               True               |
|      id     | d1b7057820824320b8d44d4e5f7d1495 |
|     name    |              glance              |
|     type    |              image               |
+-------------+----------------------------------+
d. Create the glance API endpoint
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ image / {print $2}') \
--publicurl http://controller:9292 \
--internalurl http://controller:9292 \
--adminurl http://controller:9292 \
--region regionOne
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |      http://controller:9292      |
|      id     | f839ae2a4d8e44149f5800004fba50cf |
| internalurl |      http://controller:9292      |
|  publicurl  |      http://controller:9292      |
|    region   |            regionOne             |
|  service_id | d1b7057820824320b8d44d4e5f7d1495 |
+-------------+----------------------------------+

2. Deploy and configure the Glance service

1) Install the packages
yum install openstack-glance python-glanceclient -y
# Back up the original configuration files
mkdir /etc/glance/.ori
cp -r /etc/glance/* /etc/glance/.ori/
2) Edit the configuration file /etc/glance/glance-api.conf as follows
a. In [database], configure database access
[database]
...
connection = mysql://glance:glanceDB@controller/glance
b. In [keystone_authtoken] and [paste_deploy], configure authentication
[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = glance
[paste_deploy]
...
flavor = keystone
c. In [DEFAULT] and [glance_store], configure the storage backend and path
[DEFAULT]
...
default_store = file
[glance_store]
...
filesystem_store_datadir = /var/lib/glance/images/
d. [Optional] In [DEFAULT], enable verbose logging
[DEFAULT]
...
verbose = True
3) Edit the configuration file /etc/glance/glance-registry.conf as follows
a. In [database], configure database access
[database]
...
connection = mysql://glance:glanceDB@controller/glance
b. In [keystone_authtoken] and [paste_deploy], configure authentication
[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = glance
[paste_deploy]
...
flavor = keystone
c. [Optional] In [DEFAULT], enable verbose logging
[DEFAULT]
...
verbose = True
4) Populate the glance database
su -s /bin/sh -c "glance-manage db_sync" glance
3. Finish the installation
1) Enable at boot and start the services
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
systemctl status openstack-glance-api.service openstack-glance-registry.service
4. Verify the service
1) Upload a test image
wget http://cdn.download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
source adminrc.sh
glance image-create --name "cirros-0.3.3-x86_64" --file cirros-0.3.3-x86_64-disk.img \
--disk-format qcow2 --container-format bare --is-public True --progress
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 133eae9fb1c98f45894a4e60d8736619     |
| container_format | bare                                 |
| created_at       | 2015-01-09T07:59:19                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 162a09c8-3d83-4d8c-9e7d-e8410123ac22 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros-0.3.3-x86_64                  |
| owner            | 7bf30749e195436c899d42dd879505bb     |
| protected        | False                                |
| size             | 13200896                             |
| status           | active                               |
| updated_at       | 2015-01-09T07:59:19                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+
2) List the images
glance image-list
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| ID                                   | Name                | Disk Format | Container Format | Size     | Status |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| 162a09c8-3d83-4d8c-9e7d-e8410123ac22 | cirros-0.3.3-x86_64 | qcow2       | bare             | 13200896 | active |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
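The checksum column in the output above is the MD5 digest glance computed over the uploaded file, so it can be compared against a locally computed one before trusting the image (a sketch, shown on a stand-in file since the real cirros download is about 13 MB):

```shell
# Compute an md5 digest the way glance reports it in the checksum field.
f="$(mktemp)"
printf 'stand-in image data' > "$f"
checksum="$(md5sum "$f" | awk '{print $1}')"
echo "$checksum"
# On the real node, compare:
#   md5sum cirros-0.3.3-x86_64-disk.img
# against the checksum printed by glance image-create / image-show.
```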
5. Back up the VM image
a. Shut down the VM
shutdown -h now   # or: virsh destroy juno
b. Back up the VM
cp juno.img juno_glance.img
c. Start the VM again to prepare for the next section
virsh start juno

Section 5: Compute service (Nova) deployment, configuration, and verification

The OpenStack compute service hosts and manages cloud computing workloads, and is one of OpenStack's most important components.

1. Preparation

1) Create the Nova database
a. Connect to the server with the mysql client
mysql -u root -popenstack
MariaDB [(none)]>
b. Create the nova database
CREATE DATABASE nova;
c. Grant privileges on it
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'novaDB';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'novaDB';
d. Exit the database session
MariaDB [(none)]> exit
2) Create the identity service credentials
a. Load the admin credentials
source adminrc.sh
b. Create the nova user
keystone user-create --name nova --pass nova
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | e2ca6e3f2a834ec3b4617bc78a751757 |
|   name   |               nova               |
| username |               nova               |
+----------+----------------------------------+
c. Add the nova user to the service tenant with the admin role
keystone user-role-add --user nova --tenant service --role admin
d. Create the nova service catalog entry
keystone service-create --name nova --type compute --description "OpenStack Compute"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |        OpenStack Compute         |
|   enabled   |               True               |
|      id     | 6525556c0ae34617813e20279abdad7c |
|     name    |               nova               |
|     type    |             compute              |
+-------------+----------------------------------+
e. Create the nova API endpoint
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ compute / {print $2}') \
--publicurl http://controller:8774/v2/%\(tenant_id\)s \
--internalurl http://controller:8774/v2/%\(tenant_id\)s \
--adminurl http://controller:8774/v2/%\(tenant_id\)s \
--region regionOne
+-------------+-----------------------------------------+
|   Property  |                  Value                  |
+-------------+-----------------------------------------+
|   adminurl  | http://controller:8774/v2/%(tenant_id)s |
|      id     |     0b755ab6cb31443a9a1ed49fd45f07d2    |
| internalurl | http://controller:8774/v2/%(tenant_id)s |
|  publicurl  | http://controller:8774/v2/%(tenant_id)s |
|    region   |                regionOne                |
|  service_id |     6525556c0ae34617813e20279abdad7c    |
+-------------+-----------------------------------------+

2. Deploy and configure the Nova components

1) Install the controller-side packages
yum install openstack-nova-api openstack-nova-cert \
openstack-nova-conductor openstack-nova-console \
openstack-nova-novncproxy openstack-nova-scheduler python-novaclient -y
2) Install the compute-side packages (what a dedicated compute node would install)
yum install openstack-nova-compute sysfsutils -y
3) Edit the configuration file /etc/nova/nova.conf as follows
a. In [database], configure database access (add the [database] section if it does not exist)
[database]
...
connection = mysql://nova:novaDB@controller/nova
b. In [DEFAULT], configure message queue access
[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest
c. In [DEFAULT] and [keystone_authtoken], configure authentication
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = nova
d. In [DEFAULT], set my_ip to the controller's management address
[DEFAULT]
...
my_ip = 172.16.100.101
e. In [DEFAULT], configure the VNC proxy
[DEFAULT]
...
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 172.16.100.101
novncproxy_base_url = http://172.16.100.101:6080/vnc_auto.html
f. In [glance], point at the image service
[glance]
...
host = controller
g. [Optional] In [DEFAULT], enable verbose logging
[DEFAULT]
...
verbose = True
4) Populate the nova database
su -s /bin/sh -c "nova-manage db sync" nova

3. Finish the installation

1) Check for hardware virtualization acceleration
egrep -c '(vmx|svm)' /proc/cpuinfo
0
Because this walkthrough runs inside a VM, the command returns "0" (no acceleration), so the following is required (on physical hardware that supports acceleration, skip this):
vi /etc/nova/nova.conf
[libvirt]
...
virt_type = qemu
2) Enable at boot and start the services
systemctl enable openstack-nova-api.service openstack-nova-cert.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-cert.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl status openstack-nova-api.service openstack-nova-cert.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
systemctl status libvirtd.service openstack-nova-compute.service
3) Configure the firewall
a. Open the novncproxy service port
touch /etc/firewalld/services/novncproxy.xml
vi /etc/firewalld/services/novncproxy.xml
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>Virtual Network Computing Proxy Server (NOVNC Proxy)</short>
  <description>Provide a NOVNC proxy server with direct access.</description>
  <port protocol="tcp" port="6080"/>
</service>
firewall-cmd --permanent --add-service=novncproxy
firewall-cmd --reload
b. Open the VNC service ports 5900-5999
vi /usr/lib/firewalld/services/vnc-server.xml
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>Virtual Network Computing Server (VNC)</short>
  <description>A VNC server provides an external accessible X session. Enable this option if you plan to provide a VNC server with direct access. The access will be possible for displays :0 to :3. If you plan to provide access with SSH, do not open this option and use the via option of the VNC viewer.</description>
  <port protocol="tcp" port="5900-5999"/>
</service>
firewall-cmd --permanent --add-service=vnc-server
firewall-cmd --reload
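The novncproxy.xml service definition above can also be generated with a here-document instead of an editor (a sketch; the output path is held in a variable so the snippet can be tried in a scratch directory before copying to /etc/firewalld/services/):

```shell
# Generate the firewalld service definition that opens the noVNC proxy port.
OUT="${OUT:-novncproxy.xml}"   # on the real node: /etc/firewalld/services/novncproxy.xml
cat > "$OUT" <<'EOF'
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>Virtual Network Computing Proxy Server (NOVNC Proxy)</short>
  <description>Provide a NOVNC proxy server with direct access.</description>
  <port protocol="tcp" port="6080"/>
</service>
EOF
# Then register it, as in the steps above:
#   firewall-cmd --permanent --add-service=novncproxy
#   firewall-cmd --reload
```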

4. Verify the service

1) Tail the nova log files and check that the services are healthy
tail -f /var/log/nova/nova-api.log
tail -f /var/log/nova/nova-scheduler.log
tail -f /var/log/nova/nova-cert.log
tail -f /var/log/nova/nova-conductor.log
tail -f /var/log/nova/nova-manage.log
tail -f /var/log/nova/nova-consoleauth.log
tail -f /var/log/nova/nova-novncproxy.log
tail -f /var/log/nova/nova-compute.log
2) Load the admin credentials
source adminrc.sh
3) List the nova service components and confirm each one started successfully
nova service-list
+----+------------------+---------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+---------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor   | controller.mydomain | internal | enabled | up    | 2015-01-09T09:51:33.000000 | -               |
| 2  | nova-consoleauth | controller.mydomain | internal | enabled | up    | 2015-01-09T09:51:33.000000 | -               |
| 3  | nova-cert        | controller.mydomain | internal | enabled | up    | 2015-01-09T09:51:33.000000 | -               |
| 4  | nova-scheduler   | controller.mydomain | internal | enabled | up    | 2015-01-09T09:51:33.000000 | -               |
| 5  | nova-compute     | controller.mydomain | nova     | enabled | up    | 2015-01-09T09:51:33.000000 | -               |
+----+------------------+---------------------+----------+---------+-------+----------------------------+-----------------+
4) Verify the identity and image service connections
nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| 162a09c8-3d83-4d8c-9e7d-e8410123ac22 | cirros-0.3.3-x86_64 | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+
5. Back up the VM image
a. Shut down the VM
shutdown -h now   # or: virsh destroy juno
b. Back up the VM
cp juno.img juno_nova.img
c. Start the VM again to prepare for the next section
virsh start juno

Section 6: Networking service (Neutron, ML2+VLAN mode) deployment, configuration, and verification

The OpenStack networking service lets you create network devices managed by other OpenStack services and attach them to Neutron networks. Its plugin mechanism provides flexibility in both architecture and deployment.

1. Preparation

1) Create the networking service database
a. Connect to the server with the mysql client
mysql -u root -popenstack
MariaDB [(none)]>
b. Create the neutron database
CREATE DATABASE neutron;
c. Grant privileges on it (note the second grant is for '%', not 'localhost')
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutronDB';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutronDB';
d. Exit the database session
MariaDB [(none)]> exit
2) Create the Neutron service credentials
a. Load the admin credentials
source adminrc.sh
b. Create the neutron user
keystone user-create --name neutron --pass neutron
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | 765b558a83b84d2da07905444bb629b2 |
|   name   |             neutron              |
| username |             neutron              |
+----------+----------------------------------+
c. Add the neutron user to the service tenant with the admin role
keystone user-role-add --user neutron --tenant service --role admin
d. Create the neutron service catalog entry
keystone service-create --name neutron --type network --description "OpenStack Networking"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |       OpenStack Networking       |
|   enabled   |               True               |
|      id     | 581b0cb476d74666957d9af50a264497 |
|     name    |             neutron              |
|     type    |             network              |
+-------------+----------------------------------+
e. Create the neutron API endpoint
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ network / {print $2}') \
--publicurl http://controller:9696 \
--adminurl http://controller:9696 \
--internalurl http://controller:9696 \
--region regionOne
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |      http://controller:9696      |
|      id     | c7b4e643fa844e6bbf98af6134330c7f |
| internalurl |      http://controller:9696      |
|  publicurl  |      http://controller:9696      |
|    region   |            regionOne             |
|  service_id | 581b0cb476d74666957d9af50a264497 |
+-------------+----------------------------------+

2  部署配置网络服务组件

1)指定内核网络参数a.编辑配置文件/etc/sysctl.conf,指定如下参数net.ipv4.ip_forward=1net.ipv4.conf.all.rp_filter=0net.ipv4.conf.default.rp_filter=0b.启动配置,使之生效sysctl -p#outputnet.ipv4.ip_forward = 1net.ipv4.conf.all.rp_filter = 0net.ipv4.conf.default.rp_filter = 02)安装相关组件软件包yum install openstack-neutron openstack-neutron-ml2 python-neutronclient  which openstack-neutron-openvswitch备份原始配置文件mkdir /etc/neutron/.oricp -r /etc/neutron/* /etc/neutron/.ori/3)编辑配置文件/etc/neutron/neutron.conf,完成如下操作a.编辑[database]部分,配置数据库访问权限[database]...connection = mysql://neutron:neutronDB@controller/neutronb.编辑[DEFAULT]部分,配置消息队列访问权限[DEFAULT]...rpc_backend = rabbitrabbit_host = controllerrabbit_password = guestc.编辑[DEFAULT]和[keystone_authtoken]部分,配置认证服务访问权限[DEFAULT]...auth_strategy = keystone[keystone_authtoken] ...auth_uri = http://controller:5000/v2.0identity_uri = http://controller:35357admin_tenant_name = serviceadmin_user = neutronadmin_password = neutrond.编辑[DEFAULT]部分,配置实用ML2插件,路由服务,IP复用[DEFAULT]...core_plugin = ml2service_plugins = routerallow_overlapping_ips = Truee.编辑[DEFAULT]部分,配置网络服务通知计算服务网络拓扑变更[DEFAULT]...notify_nova_on_port_status_changes = Truenotify_nova_on_port_data_changes = Truenova_url = http://controller:8774/v2nova_admin_auth_url = http://controller:35357/v2.0nova_region_name = regionOnenova_admin_username = novanova_admin_tenant_id = ec2111f15290479f85567a93db505613nova_admin_password = novaf.[Optional]编辑[DEFAULT]部分,配置详细日志参数[DEFAULT]...verbose = True4)编辑配置文件/etc/neutron/plugins/ml2/ml2_conf.ini 完成如下操作a.编辑[ml2]部分,配置启用flat,Vlan网络类型驱动,Vlan租户网络和OVS机制驱动[ml2]...type_drivers = flat,vlantenant_network_types = vlanmechanism_drivers = openvswitchb.编辑[ml2_type_flat]部分,配置flat网络映射[ml2_type_flat]...flat_networks = externalc.编辑[ml2_type_vlan]部分,配置vlanID范围[ml2_type_vlan]...network_vlan_ranges = physnet1:10:1000d.编辑[securitygroup]部分,配置安全策略信息,启用安全策略,启用IPset,配置OV Siptables防火墙驱动[securitygroup]...enable_security_group = Trueenable_ipset = Truefirewall_driver = 
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

e. Edit the [ovs] section: configure the VLAN ranges, the integration bridge, and the mapping of the external flat provider network to the br-ex external bridge

[ovs]
...
network_vlan_ranges = physnet1:10:1000
# tunnel_id_ranges =
integration_bridge = br-int
bridge_mappings = physnet1:br-srv,external:br-ex

5) Edit /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini and complete the following

a. Edit the [ovs] section as follows

[ovs]
...
tenant_network_type = vlan
network_vlan_ranges = physnet1:10:1000
integration_bridge = br-int
bridge_mappings = physnet1:br-srv,external:br-ex

6) Edit /etc/neutron/l3_agent.ini and complete the following

a. Edit the [DEFAULT] section: set the interface driver, enable network namespaces, and configure the external network bridge

[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
external_network_bridge = br-ex

b. [Optional] Edit the [DEFAULT] section to enable verbose logging

[DEFAULT]
...
verbose = True

7) Edit /etc/neutron/dhcp_agent.ini and complete the following

a. Edit the [DEFAULT] section: set the interface and DHCP drivers and enable namespaces

[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True

b. [Optional] Edit the [DEFAULT] section to enable verbose logging

[DEFAULT]
...
verbose = True

8) Edit /etc/neutron/metadata_agent.ini and complete the following

a. Edit the [DEFAULT] section to configure access to the Identity service

[DEFAULT]
...
auth_url = http://controller:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = neutron

b. Edit the [DEFAULT] section to configure the metadata host

[DEFAULT]
...
nova_metadata_ip = controller

c. Edit the [DEFAULT] section to configure the metadata proxy shared secret

[DEFAULT]
...
metadata_proxy_shared_secret = lkl_metadata_secret

d. Edit the [DEFAULT] section to enable verbose logging

[DEFAULT]
...
verbose = True

9) Edit /etc/nova/nova.conf so that Compute uses Neutron for networking, completing the following

a. Edit the [DEFAULT] section to configure the APIs and drivers

[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

b. Edit the [neutron] section to configure authentication for the Compute networking API

[neutron]
...
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = neutron

c. Edit the [neutron] section to enable the metadata proxy and configure the shared secret

[neutron]
...
service_metadata_proxy = True
metadata_proxy_shared_secret = lkl_metadata_secret

10) Configure the Open vSwitch (OVS) service

a. Enable the OVS service at boot and start it

systemctl enable openvswitch.service
systemctl start openvswitch.service

b. Add the bridges

ovs-vsctl add-br br-ex
ovs-vsctl add-br br-srv

c. Add NIC ports to the bridges so they connect to the physical external and data network interfaces

ovs-vsctl add-port br-ex eth1
ovs-vsctl add-port br-srv eth2
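The `bridge_mappings` value above is easy to mistype, and a malformed pair silently breaks the agent. A minimal sketch of a validator (a hypothetical helper, not a Neutron tool) that checks the comma-separated physnet:bridge shape before the value is pasted into the configuration:

```shell
# Hypothetical helper (not part of Neutron): verify that a bridge_mappings
# value is a comma-separated list of physnet:bridge pairs.
check_bridge_mappings() {
    for pair in $(printf '%s' "$1" | tr ',' ' '); do
        case "$pair" in
            *:*) ;;                             # looks like physnet:bridge
            *)   echo "bad pair: $pair"; return 1 ;;
        esac
    done
    echo "ok"
}

check_bridge_mappings "physnet1:br-srv,external:br-ex"   # prints "ok"
```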

3. Complete the installation

1) Create a symbolic link for the plugin configuration

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

2) Work around a packaging bug: the Open vSwitch agent init script looks explicitly for the Open vSwitch plugin configuration file rather than the /etc/neutron/plugin.ini symlink that points to the ML2 plugin configuration. Run the following commands to fix this

cp /usr/lib/systemd/system/neutron-openvswitch-agent.service \
/usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' \
/usr/lib/systemd/system/neutron-openvswitch-agent.service

3) Populate the Neutron database tables

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron

4) Enable the services at boot and start them

systemctl enable neutron-server.service
systemctl start neutron-server.service
systemctl status neutron-server.service
systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
systemctl start neutron-l3-agent.service
systemctl start neutron-dhcp-agent.service
systemctl start neutron-metadata-agent.service
systemctl start openvswitch.service
systemctl start neutron-openvswitch-agent.service
systemctl status neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service

5) Restart the Nova services

systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service
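The `sed` fix in step 2) rewrites the unit file in place; a dry-run sketch applies the same substitution to a scratch copy instead of the real unit file (the `ExecStart` line below is illustrative), so the effect can be inspected safely first:

```shell
# Dry-run sketch of the packaging fix: run the same sed substitution
# against a temporary copy rather than the live systemd unit.
tmp=$(mktemp)
echo 'ExecStart=/usr/bin/neutron-openvswitch-agent --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini' > "$tmp"
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' "$tmp"
patched=$(grep -c '/etc/neutron/plugin.ini' "$tmp")   # 1 when the path was rewritten
rm -f "$tmp"
echo "$patched"   # 1
```

Note the comma-delimited `s,old,new,g` form, chosen because the paths themselves contain slashes.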

4. Verify the services

1) Load the admin credentials

source adminrc.sh

2) List the loaded extensions to verify that neutron-server started correctly

neutron ext-list
+-----------------------+-----------------------------------------------+
| alias                 | name                                          |
+-----------------------+-----------------------------------------------+
| security-group        | security-group                                |
| l3_agent_scheduler    | L3 Agent Scheduler                            |
| ext-gw-mode           | Neutron L3 Configurable external gateway mode |
| binding               | Port Binding                                  |
| provider              | Provider Network                              |
| agent                 | agent                                         |
| quotas                | Quota management support                      |
| dhcp_agent_scheduler  | DHCP Agent Scheduler                          |
| l3-ha                 | HA Router extension                           |
| multi-provider        | Multi Provider Network                        |
| external-net          | Neutron external network                      |
| router                | Neutron L3 Router                             |
| allowed-address-pairs | Allowed Address Pairs                         |
| extraroute            | Neutron Extra Route                           |
| extra_dhcp_opt        | Neutron Extra DHCP opts                       |
| dvr                   | Distributed Virtual Router                    |
+-----------------------+-----------------------------------------------+

3) List the agents to verify that the Neutron agents started correctly

neutron agent-list
+--------------------------------------+--------------------+---------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+---------------------+-------+----------------+---------------------------+
| 3b71393a-f9e6-40d0-bed3-3998c07356de | Metadata agent     | controller.mydomain | :-)   | True           | neutron-metadata-agent    |
| c4d6fb91-73ce-416f-8eaa-9d74362ada48 | L3 agent           | controller.mydomain | :-)   | True           | neutron-l3-agent          |
| d41eca3c-cdb2-4105-99aa-b30599af3cf7 | Open vSwitch agent | controller.mydomain | :-)   | True           | neutron-openvswitch-agent |
| ef9c4470-c878-4424-9bf0-362c990a5397 | DHCP agent         | controller.mydomain | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+--------------------+---------------------+-------+----------------+---------------------------+

5. Initialize the network service data

1) Load the admin credentials

source adminrc.sh

2) Create the external network

neutron net-create ext-net --shared --router:external True --provider:physical_network external --provider:network_type flat
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 89d5b94f-05f9-4b36-b561-67164f1560ea |
| name                      | ext-net                              |
| provider:network_type     | flat                                 |
| provider:physical_network | external                             |
| provider:segmentation_id  |                                      |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 7bf30749e195436c899d42dd879505bb     |
+---------------------------+--------------------------------------+

3) Create a subnet on the external network

neutron subnet-create ext-net --name ext-subnet \
--allocation-pool start=172.16.100.200,end=172.16.100.250 \
--disable-dhcp --gateway 172.16.100.10 172.16.100.0/24
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field             | Value                                                |
+-------------------+------------------------------------------------------+
| allocation_pools  | {"start": "172.16.100.200", "end": "172.16.100.250"} |
| cidr              | 172.16.100.0/24                                      |
| dns_nameservers   |                                                      |
| enable_dhcp       | False                                                |
| gateway_ip        | 172.16.100.10                                        |
| host_routes       |                                                      |
| id                | 2f05c3c1-918a-4933-8fa7-61f2cfad85db                 |
| ip_version        | 4                                                    |
| ipv6_address_mode |                                                      |
| ipv6_ra_mode      |                                                      |
| name              | ext-subnet                                           |
| network_id        | 89d5b94f-05f9-4b36-b561-67164f1560ea                 |
| tenant_id         | 7bf30749e195436c899d42dd879505bb                     |
+-------------------+------------------------------------------------------+

4) Create the tenant network

a. Load the demo credentials

source demorc.sh

b. Create the tenant network

neutron net-create demo-net
Created a new network:
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| admin_state_up  | True                                 |
| id              | 8319e8cb-acd9-4674-b71b-f966df667b03 |
| name            | demo-net                             |
| router:external | False                                |
| shared          | False                                |
| status          | ACTIVE                               |
| subnets         |                                      |
| tenant_id       | 9f77ec788b91475b869abba7f1091017     |
+-----------------+--------------------------------------+

c. Create a subnet on the tenant network

neutron subnet-create demo-net --name demo-subnet --gateway 192.168.100.1 192.168.100.0/24
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field             | Value                                                |
+-------------------+------------------------------------------------------+
| allocation_pools  | {"start": "192.168.100.2", "end": "192.168.100.254"} |
| cidr              | 192.168.100.0/24                                     |
| dns_nameservers   |                                                      |
| enable_dhcp       | True                                                 |
| gateway_ip        | 192.168.100.1                                        |
| host_routes       |                                                      |
| id                | 713e676b-3cda-4b0e-9aae-81af5a544d3d                 |
| ip_version        | 4                                                    |
| ipv6_address_mode |                                                      |
| ipv6_ra_mode      |                                                      |
| name              | demo-subnet                                          |
| network_id        | 8319e8cb-acd9-4674-b71b-f966df667b03                 |
| tenant_id         | 9f77ec788b91475b869abba7f1091017                     |
+-------------------+------------------------------------------------------+

5) Create a router and attach it to the external and tenant networks, working as the demo user

source demorc.sh

a. Create the router

neutron router-create demo-router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 490f50cb-0663-4bbc-bb0a-cecf8e27916b |
| name                  | demo-router                          |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | 9f77ec788b91475b869abba7f1091017     |
+-----------------------+--------------------------------------+

b. Attach the demo tenant subnet to the router

neutron router-interface-add demo-router demo-subnet
Added interface d8d52434-e016-4771-b46c-99f0a7adc6b2 to router demo-router.

c. Attach the external network to the router by setting it as the gateway

neutron router-gateway-set demo-router ext-net
Set gateway for router demo-router

6) Verify network connectivity

Attaching the external network to the router assigns the gateway address 172.16.100.200; ping that address to verify connectivity

ping 172.16.100.200
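Before creating the external subnet it is worth checking that the allocation pool actually lies inside the CIDR; a mismatch makes `subnet-create` fail. A minimal sketch (a hypothetical sanity check, not an OpenStack command) using plain IPv4 integer arithmetic:

```shell
# Convert a dotted-quad IPv4 address to an integer.
ip2int() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

net=$(ip2int 172.16.100.0)
start=$(ip2int 172.16.100.200)
end=$(ip2int 172.16.100.250)
size=$(( 1 << (32 - 24) ))          # a /24 holds 256 addresses
if [ "$start" -ge "$net" ] && [ "$end" -lt $(( net + size )) ]; then
    echo "pool fits in 172.16.100.0/24"
fi
```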
6. Back up the virtual machine image

a. Shut down the virtual machine

shutdown -h now  # or: virsh destroy juno

b. Back up the virtual machine

cp juno.img juno_neutron.img

c. Start the virtual machine again, ready for the next section

virsh start juno

Section 7: Block Storage service (Cinder) deployment, configuration, and verification

The OpenStack Block Storage service can use a variety of backends to provide block storage devices to virtual machine instances. In a typical deployment, the Block Storage API and scheduler services run on the controller node, while the volume service runs on one or more storage nodes. Through the appropriate drivers, storage nodes serve volumes to instances from local block devices or from SAN/NAS backends. This walkthrough uses the all-in-one single-node deployment model.

1. Preparation

1) Create the Block Storage database

a. Connect to the database server with the mysql client

mysql -u root -popenstack
MariaDB [(none)]>

b. Create the cinder database

CREATE DATABASE cinder;

c. Grant access to the cinder database

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinderDB';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinderDB';

d. Exit the database session

MariaDB [(none)]> exit

2) Create the Cinder service credentials

a. Load the admin credentials

source adminrc.sh

b. Create the cinder user

keystone user-create --name cinder --pass cinder
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | 6dc8206a69bf44cf8ac17c777c6b968c |
|   name   |              cinder              |
| username |              cinder              |
+----------+----------------------------------+

c. Grant the admin role to the cinder user in the service tenant

keystone user-role-add --user cinder --tenant service --role admin

d. Create the Cinder service catalog entries

keystone service-create --name cinder --type volume \
--description "OpenStack Block Storage"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Block Storage      |
|   enabled   |               True               |
|      id     | 8468edcc54654fe4ba9467afd46dbea7 |
|     name    |              cinder              |
|     type    |              volume              |
+-------------+----------------------------------+
keystone service-create --name cinderv2 --type volumev2 \
--description "OpenStack Block Storage"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Block Storage      |
|   enabled   |               True               |
|      id     | 9f59cfbd82014106b10040ed2f86b697 |
|     name    |             cinderv2             |
|     type    |             volumev2             |
+-------------+----------------------------------+

e. Create the Cinder API endpoints

keystone endpoint-create \
--service-id $(keystone service-list | awk '/ volume / {print $2}') \
--publicurl http://controller:8776/v1/%\(tenant_id\)s \
--internalurl http://controller:8776/v1/%\(tenant_id\)s \
--adminurl http://controller:8776/v1/%\(tenant_id\)s \
--region regionOne
+-------------+-----------------------------------------+
|   Property  |                  Value                  |
+-------------+-----------------------------------------+
|   adminurl  | http://controller:8776/v1/%(tenant_id)s |
|      id     |     e5845fd20ec646779f6708cc4bc8d26d    |
| internalurl | http://controller:8776/v1/%(tenant_id)s |
|  publicurl  | http://controller:8776/v1/%(tenant_id)s |
|    region   |                regionOne                |
|  service_id |     8468edcc54654fe4ba9467afd46dbea7    |
+-------------+-----------------------------------------+
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ volumev2 / {print $2}') \
--publicurl http://controller:8776/v2/%\(tenant_id\)s \
--internalurl http://controller:8776/v2/%\(tenant_id\)s \
--adminurl http://controller:8776/v2/%\(tenant_id\)s \
--region regionOne
+-------------+-----------------------------------------+
|   Property  |                  Value                  |
+-------------+-----------------------------------------+
|   adminurl  | http://controller:8776/v2/%(tenant_id)s |
|      id     |     242544375fbf4d808407f7bde17ecd0e    |
| internalurl | http://controller:8776/v2/%(tenant_id)s |
|  publicurl  | http://controller:8776/v2/%(tenant_id)s |
|    region   |                regionOne                |
|  service_id |     9f59cfbd82014106b10040ed2f86b697    |
+-------------+-----------------------------------------+
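The `--service-id $(keystone service-list | awk '/ volume / {print $2}')` idiom above pulls the ID column out of the service table. A sketch against a canned two-row table (the IDs are taken from the output above) shows why the spaces around the pattern matter: `/ volume /` does not match the `volumev2` row:

```shell
# awk splits each row on whitespace, so with a leading "|" the service ID
# lands in field $2. The space-padded pattern keeps "volume" from also
# matching "volumev2".
table='| 8468edcc54654fe4ba9467afd46dbea7 |  cinder  |  volume  |
| 9f59cfbd82014106b10040ed2f86b697 | cinderv2 | volumev2 |'
vol_id=$(printf '%s\n' "$table" | awk '/ volume / {print $2}')
v2_id=$(printf '%s\n' "$table" | awk '/ volumev2 / {print $2}')
echo "$vol_id"   # 8468edcc54654fe4ba9467afd46dbea7
echo "$v2_id"    # 9f59cfbd82014106b10040ed2f86b697
```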

2. Create the LVM physical volume and volume group

1) Install the LVM packages

yum install lvm2  # installed by default

2) Enable the LVM metadata service at boot and start it

systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
systemctl status lvm2-lvmetad.service

3) Create the physical volume and volume group

a. Create the physical volume

pvcreate /dev/vda3

b. Create the volume group; the Block Storage service creates logical volumes in this VG

vgcreate cinder-volumes /dev/vda3

c. Edit /etc/lvm/lvm.conf and add a filter in the devices section that accepts the /dev/vda device and rejects everything else

devices {
...
filter = [ "a/vda/", "r/.*/" ]
}
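LVM walks the filter list in order and the first matching pattern wins, so `/dev/vda*` devices are accepted and everything else falls through to `r/.*/`. A rough shell emulation of that accept/reject scan (a sketch, not LVM's actual matcher):

```shell
# First-match-wins emulation of the filter = [ "a/vda/", "r/.*/" ] line.
lvm_filter() {
    case "$1" in
        /dev/vda*) echo accept ;;   # "a/vda/"
        *)         echo reject ;;   # "r/.*/"
    esac
}
lvm_filter /dev/vda3   # accept
lvm_filter /dev/sdb1   # reject
```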

3. Deploy and configure the Block Storage components

1) Install the component packages

yum install openstack-cinder python-cinderclient python-oslo-db -y
yum install targetcli MySQL-python -y

2) Edit /etc/cinder/cinder.conf and complete the following

a. Edit the [database] section to configure database access

[database]
...
connection = mysql://cinder:cinderDB@controller/cinder

b. Edit the [DEFAULT] section to configure RabbitMQ message queue access

[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest

c. Edit the [DEFAULT] and [keystone_authtoken] sections to configure Identity service access

[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = cinder

d. Edit the [DEFAULT] section to set my_ip to the management IP address of the controller node

[DEFAULT]
...
my_ip = 172.16.100.101

e. Edit the [DEFAULT] section to configure the location of the Image service

[DEFAULT]
...
glance_host = controller

f. Edit the [DEFAULT] section so that Block Storage uses the lioadm iSCSI helper

[DEFAULT]
...
iscsi_helper = lioadm

g. [Optional] Edit the [DEFAULT] section to enable verbose logging

[DEFAULT]
...
verbose = True

4. Complete the deployment

1) Populate the Block Storage database tables

su -s /bin/sh -c "cinder-manage db sync" cinder

2) Enable the services at boot and start them

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service

3) Edit /etc/cinder/cinder.conf to enable the multi-backend configuration

a. Edit the [DEFAULT] section to enable the LVM backend; multiple backends are separated by commas

[DEFAULT]
...
enabled_backends=LVMISCSI

b. Append the corresponding backend section at the end of the file

[LVMISCSI]
volume_group=cinder-volumes
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
# volume_backend_name can be any value; it is referenced again below
volume_backend_name=LVM

c. Start the openstack-cinder-volume service

systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
systemctl status openstack-cinder-volume.service target.service

d. Create the corresponding volume type

source adminrc.sh
# Create a volume type; the name is arbitrary (a descriptive one helps)
cinder type-create lvmiscsi
+--------------------------------------+----------+
|                  ID                  |   Name   |
+--------------------------------------+----------+
| f2d3f0bc-d9c7-4576-b75a-975ca1c993b6 | lvmiscsi |
+--------------------------------------+----------+
# Associate the volume type with the storage backend; the value must match
# volume_backend_name in the configuration file above
cinder type-key f2d3f0bc-d9c7-4576-b75a-975ca1c993b6 set volume_backend_name=LVM
(no output)
cinder extra-specs-list
+--------------------------------------+----------+----------------------------------+
|                  ID                  |   Name   |           extra_specs            |
+--------------------------------------+----------+----------------------------------+
| f2d3f0bc-d9c7-4576-b75a-975ca1c993b6 | lvmiscsi | {u'volume_backend_name': u'LVM'} |
+--------------------------------------+----------+----------------------------------+
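Scheduling silently fails when the value set with `cinder type-key` does not exactly match `volume_backend_name` in cinder.conf. A sanity sketch that extracts the name from a scratch copy of the stanza above so the two can be compared before creating volumes:

```shell
# Write the [LVMISCSI] stanza to a temporary file and pull out the
# backend name with awk, exactly as it would be read from cinder.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[LVMISCSI]
volume_group=cinder-volumes
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM
EOF
backend=$(awk -F= '/^volume_backend_name/ {print $2}' "$conf")
rm -f "$conf"
[ "$backend" = "LVM" ] && echo "type-key value matches: $backend"
```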

5. Verify the service

1) Verify the service state as the admin user

a. Load the admin credentials

source adminrc.sh

b. List the service components to verify that each process started successfully

cinder service-list
+------------------+------------------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |             Host             | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+------------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |     controller.mydomain      | nova | enabled |   up  | 2015-01-13T07:44:39.000000 |       None      |
|  cinder-volume   | controller.mydomain@LVMISCSI | nova | enabled |   up  | 2015-01-13T07:44:46.000000 |       None      |
+------------------+------------------------------+------+---------+-------+----------------------------+-----------------+

2) Perform block storage operations as the demo user

a. Load the demo credentials

source demorc.sh

b. Create a 1 GB volume

cinder create --display-name demo-volume1 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2015-01-13T07:47:27.827415      |
| display_description |                 None                 |
|     display_name    |             demo-volume1             |
|      encrypted      |                False                 |
|          id         | cd31322b-e221-4690-b5bc-48e02be90314 |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+

c. Check the status of the new volume

cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| cd31322b-e221-4690-b5bc-48e02be90314 | available | demo-volume1 |  1   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
6. Back up the virtual machine image

a. Shut down the virtual machine

shutdown -h now  # or: virsh destroy juno

b. Back up the virtual machine

cp juno.img juno_cinder.img

c. Start the virtual machine again, ready for the next section

virsh start juno

Section 8: Dashboard service (Horizon) deployment, configuration, and verification

1. Install and configure the Horizon components

1) Install the packages

yum install openstack-dashboard httpd mod_wsgi memcached python-memcached -y

2) Edit /etc/openstack-dashboard/local_settings and complete the following

a. Point the Dashboard at the OpenStack services on the controller node

...
OPENSTACK_HOST = "controller"

b. Allow all hosts to access the Dashboard

...
ALLOWED_HOSTS = ['*']

c. Configure memcached session storage (comment out any other session storage configuration)

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}

d. [Optional] Configure the time zone

TIME_ZONE = "Asia/Shanghai"
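The two single-line edits above are easy to script with `sed`. A sketch against a scratch copy of local_settings (the real file is /etc/openstack-dashboard/local_settings; the starting values below are placeholders):

```shell
# Apply the OPENSTACK_HOST and ALLOWED_HOSTS edits to a temporary copy.
f=$(mktemp)
cat > "$f" <<'EOF'
OPENSTACK_HOST = "127.0.0.1"
ALLOWED_HOSTS = ['horizon.example.com', 'localhost']
EOF
sed -i 's/^OPENSTACK_HOST = .*/OPENSTACK_HOST = "controller"/' "$f"
sed -i "s/^ALLOWED_HOSTS = .*/ALLOWED_HOSTS = ['*']/" "$f"
host_line=$(grep '^OPENSTACK_HOST' "$f")
rm -f "$f"
echo "$host_line"   # OPENSTACK_HOST = "controller"
```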
2. Complete the deployment

1) On CentOS/RHEL, configure SELinux to allow the web server to connect to the OpenStack services

setsebool -P httpd_can_network_connect on

2) Work around a packaging bug that makes the Dashboard CSS fail to load

chown -R apache:apache /usr/share/openstack-dashboard/static

3) Enable the services at boot and start them

systemctl enable httpd.service memcached.service
systemctl start httpd.service memcached.service
3. Configure the firewall

1) Open port 80 for the HTTP service

a. Open it immediately

firewall-cmd --add-service=http

b. Make it permanent

firewall-cmd --permanent --add-service=http

c. Check /etc/firewalld/zones/public.xml

<?xml version="1.0" encoding="utf-8"?>
<zone>
  <short>Public</short>
  <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.</description>
  <service name="dhcpv6-client"/>
  <service name="http"/>
  <service name="vnc-server"/>
  <service name="novnc-proxy"/>
  <service name="ssh"/>
  <service name="https"/>
</zone>

4. Verify the service

1) Open Horizon in a browser and create an instance: http://172.16.100.101/dashboard
Create an instance


Check the instance IP address and open a noVNC console, which also verifies that noVNC login works


Other verification steps <omitted>
5. Back up the virtual machine image

a. Shut down the virtual machine

shutdown -h now  # or: virsh destroy juno

b. Back up the virtual machine

cp juno.img juno_horizon.img

c. Start the virtual machine again, ready for the next section

virsh start juno

Section 9: Telemetry service (Ceilometer) deployment, configuration, and verification

1. Preparation

Before installing the Telemetry module, install MongoDB and create the MongoDB database, the service credentials, and the API endpoints.

1) Install MongoDB

yum install mongodb-server mongodb -y

2) Edit /etc/mongodb.conf and complete the following

a. Set bind_ip to the node's management IP address

bind_ip = 172.16.100.101

b. Configure the MongoDB journal file size; for testing, use small files or even disable journaling

smallfiles = true

c. Enable the service at boot and start it

systemctl enable mongod.service
systemctl start mongod.service
systemctl status mongod.service

3) Create the ceilometer database

mongo --host controller --eval 'db = db.getSiblingDB("ceilometer");db.createUser({user: "ceilometer",pwd: "ceilometerDB",roles: [ "readWrite", "dbAdmin" ]})'

4) Create the Ceilometer service credentials

a. Load the admin credentials

source adminrc.sh

b. Create the ceilometer user

keystone user-create --name ceilometer --pass ceilometer
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | 4499d16eda084811848f7709c0d990f1 |
|   name   |            ceilometer            |
| username |            ceilometer            |
+----------+----------------------------------+

c. Grant the admin role to the ceilometer user in the service tenant

keystone user-role-add --user ceilometer --tenant service --role admin
(no output)

d. Create the Ceilometer service catalog entry

keystone service-create --name ceilometer --type metering --description "Telemetry"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |            Telemetry             |
|   enabled   |               True               |
|      id     | 98e23755704a4565a29d7a3058e6d811 |
|     name    |            ceilometer            |
|     type    |             metering             |
+-------------+----------------------------------+

e. Create the API endpoints for the service

keystone endpoint-create \
--service-id $(keystone service-list | awk '/ metering / {print $2}') \
--publicurl http://controller:8777 \
--internalurl http://controller:8777 \
--adminurl http://controller:8777 \
--region regionOne
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |      http://controller:8777      |
|      id     | 272ad78761454d72b09fc2f32b0b85bd |
| internalurl |      http://controller:8777      |
|  publicurl  |      http://controller:8777      |
|    region   |            regionOne             |
|  service_id | 98e23755704a4565a29d7a3058e6d811 |
+-------------+----------------------------------+
2. Deploy and configure the Telemetry components

1) Install the component packages

yum install openstack-ceilometer-api openstack-ceilometer-collector \
openstack-ceilometer-notification openstack-ceilometer-central \
openstack-ceilometer-alarm python-ceilometerclient openstack-ceilometer-compute python-pecan

Back up the original configuration files <omitted>

2) Edit /etc/ceilometer/ceilometer.conf and complete the following

Prepare a metering secret for the Telemetry service first; it can be any string of characters. This walkthrough uses lkl_metering_key.

a. Edit the [database] section to configure database access

[database]
...
connection = mongodb://ceilometer:ceilometerDB@controller:27017/ceilometer

b. Edit the [DEFAULT] section to configure message queue access

[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest

c. Edit the [DEFAULT] and [keystone_authtoken] sections to configure Identity service access

[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = ceilometer
admin_password = ceilometer

d. Edit the [service_credentials] section to configure the service credentials

[service_credentials]
...
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = ceilometer
os_endpoint_type = internalURL

e. Edit the [publisher] section to configure the metering secret

[publisher]
...
metering_secret = lkl_metering_key
3. Complete the deployment

1) Enable the services at boot and start them

systemctl enable openstack-ceilometer-api.service openstack-ceilometer-notification.service \
openstack-ceilometer-central.service openstack-ceilometer-collector.service \
openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service
systemctl start openstack-ceilometer-api.service openstack-ceilometer-notification.service \
openstack-ceilometer-central.service openstack-ceilometer-collector.service \
openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service
systemctl status openstack-ceilometer-api.service openstack-ceilometer-notification.service \
openstack-ceilometer-central.service openstack-ceilometer-collector.service \
openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service
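The three long `systemctl` lines above repeat the same six-service list; keeping the list in one variable and looping over it avoids the lists drifting apart. A dry-run sketch (echo instead of running systemctl, so it is safe anywhere):

```shell
# One list of Ceilometer services drives all enable/start commands.
services="openstack-ceilometer-api openstack-ceilometer-notification
openstack-ceilometer-central openstack-ceilometer-collector
openstack-ceilometer-alarm-evaluator openstack-ceilometer-alarm-notifier"
for s in $services; do
    echo "systemctl enable ${s}.service && systemctl start ${s}.service"
done
```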

4. Deploy and configure the Telemetry compute agent

1) Edit /etc/nova/nova.conf and complete the following

a. Add the following options to the [DEFAULT] section

[DEFAULT]
...
instance_usage_audit = True
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = nova.openstack.common.notifier.rpc_notifier
notification_driver = ceilometer.compute.nova_notifier

b. Restart the Compute service

systemctl restart openstack-nova-compute.service
systemctl status openstack-nova-compute.service

c. Enable the agent at boot and start it

systemctl enable openstack-ceilometer-compute.service
systemctl start openstack-ceilometer-compute.service
systemctl status openstack-ceilometer-compute.service
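`notification_driver` is a multi-valued option, which is why nova.conf deliberately lists it twice above. A quick count against a scratch copy confirms both drivers made it into the file:

```shell
# Write the two notification_driver lines to a temporary file and count
# them, as a check that neither edit overwrote the other.
conf=$(mktemp)
cat > "$conf" <<'EOF'
notification_driver = nova.openstack.common.notifier.rpc_notifier
notification_driver = ceilometer.compute.nova_notifier
EOF
drivers=$(grep -c '^notification_driver' "$conf")
rm -f "$conf"
echo "$drivers"   # 2
```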
5. Configure the Image service for Telemetry

1) Edit /etc/glance/glance-api.conf and complete the following

a. Edit the [DEFAULT] section to send notifications to the message queue

[DEFAULT]
...
notification_driver = messaging
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest

b. Restart the services to apply the change

systemctl restart openstack-glance-api.service openstack-glance-registry.service
systemctl status openstack-glance-api.service openstack-glance-registry.service

6. Configure the Block Storage service for Telemetry

1) Edit /etc/cinder/cinder.conf and complete the following

a. Edit the [DEFAULT] section to configure notifications

[DEFAULT]
...
control_exchange = cinder
notification_driver = cinder.openstack.common.notifier.rpc_notifier

b. Restart the services to apply the change

systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl restart openstack-cinder-volume.service
7. Verify the service

1) Use the Image service to verify that Telemetry is working

a. List the available meters

ceilometer meter-list

b. Download an image from the Image service

glance image-download "cirros-0.3.3-x86_64" > cirros.img

c. List the meters again to confirm that the download was detected and recorded

ceilometer meter-list
<omitted>

d. Retrieve usage statistics for a given meter

ceilometer statistics -m image.download -p 60
<omitted>
8. Back up the virtual machine image

a. Shut down the virtual machine

shutdown -h now  # or: virsh destroy juno

b. Back up the virtual machine

cp juno.img juno_ceilometer.img

c. Start the virtual machine again, ready for the next section

virsh start juno

Section 10: Orchestration service (Heat) deployment, configuration, and verification

1. 准备工作
1) Create the heat database

a. Connect to the database server with the mysql client

mysql -u root -popenstack
MariaDB [(none)]>

b. Create the heat database

CREATE DATABASE heat;

c. Grant proper access to the heat database

GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'heatDB';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'heatDB';

d. Exit the database session

MariaDB [(none)]> exit

2) Create the heat service credentials and API endpoints

a. Load the admin credentials

source adminrc.sh

b. Create the heat user

keystone user-create --name heat --pass heat
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | 9c49258dbb834f83aa9c471abceaf215 |
|   name   |               heat               |
| username |               heat               |
+----------+----------------------------------+

c. Add the admin role to the heat user in the service tenant

keystone user-role-add --user heat --tenant service --role admin
(no output)

d. Create the heat_stack_owner role

keystone role-create --name heat_stack_owner
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | 5cd2f0ba44a84816a3bf0c6d5f1a207e |
|   name   |         heat_stack_owner         |
+----------+----------------------------------+

e. Add the heat_stack_owner role to the demo user and tenant

keystone user-role-add --user demo --tenant demo --role heat_stack_owner
(no output)
#this role must be added to any user that manages stacks

f. Create the heat_stack_user role

keystone role-create --name heat_stack_user
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | a00a1ae74d3345b7bb4e77086b520c74 |
|   name   |         heat_stack_user          |
+----------+----------------------------------+

g. Create the heat and heat-cfn service entities

keystone service-create --name heat --type orchestration --description "Orchestration"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |          Orchestration           |
|   enabled   |               True               |
|      id     | cc84a959575447d89cb7e1a00624ab9a |
|     name    |               heat               |
|     type    |          orchestration           |
+-------------+----------------------------------+

keystone service-create --name heat-cfn --type cloudformation \
--description "Orchestration"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |          Orchestration           |
|   enabled   |               True               |
|      id     | 359a38bba2884de2be77c6bc78329ec6 |
|     name    |             heat-cfn             |
|     type    |          cloudformation          |
+-------------+----------------------------------+

h. Create the heat service API endpoints

keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ orchestration / {print $2}') \
  --publicurl http://controller:8004/v1/%\(tenant_id\)s \
  --internalurl http://controller:8004/v1/%\(tenant_id\)s \
  --adminurl http://controller:8004/v1/%\(tenant_id\)s \
  --region regionOne
+-------------+-----------------------------------------+
|   Property  |                  Value                  |
+-------------+-----------------------------------------+
|   adminurl  | http://controller:8004/v1/%(tenant_id)s |
|      id     |     f7071efad7f54911b242a0565895ebe8    |
| internalurl | http://controller:8004/v1/%(tenant_id)s |
|  publicurl  | http://controller:8004/v1/%(tenant_id)s |
|    region   |                regionOne                |
|  service_id |     cc84a959575447d89cb7e1a00624ab9a    |
+-------------+-----------------------------------------+

keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ cloudformation / {print $2}') \
  --publicurl http://controller:8000/v1 \
  --internalurl http://controller:8000/v1 \
  --adminurl http://controller:8000/v1 \
  --region regionOne
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |    http://controller:8000/v1     |
|      id     | 8c8ade0a1dde411cbdfcefbb3ce935c5 |
| internalurl |    http://controller:8000/v1     |
|  publicurl  |    http://controller:8000/v1     |
|    region   |            regionOne             |
|  service_id | 359a38bba2884de2be77c6bc78329ec6 |
+-------------+----------------------------------+
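The `--service-id $(keystone service-list | awk '/ orchestration / {print $2}')` substitution works because awk, splitting on whitespace, sees the leading `|` of each table row as field 1 and the service id as field 2. A minimal Python sketch of the same extraction, run against a sample table built from the ids shown above (the table layout is illustrative, not captured verbatim from a live cloud):

```python
def extract_service_id(table, service_type):
    """Mimic awk '/ TYPE / {print $2}' on keystone service-list output."""
    for line in table.splitlines():
        # Surrounding spaces avoid matching e.g. 'cloudformation'
        # when looking for 'formation'.
        if " %s " % service_type in line:
            # Whitespace split: field 0 is '|', field 1 is the id column.
            return line.split()[1]
    return None

# Sample keystone service-list table (column widths illustrative)
SAMPLE = """\
+----------------------------------+----------+----------------+---------------+
|                id                |   name   |      type      |  description  |
+----------------------------------+----------+----------------+---------------+
| cc84a959575447d89cb7e1a00624ab9a |   heat   | orchestration  | Orchestration |
| 359a38bba2884de2be77c6bc78329ec6 | heat-cfn | cloudformation | Orchestration |
+----------------------------------+----------+----------------+---------------+
"""

print(extract_service_id(SAMPLE, "orchestration"))   # cc84a959575447d89cb7e1a00624ab9a
print(extract_service_id(SAMPLE, "cloudformation"))  # 359a38bba2884de2be77c6bc78329ec6
```

The same caveat applies as with the awk one-liner: the match is a plain substring search, so it relies on the service type appearing exactly once per row.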
2. Install and configure the Heat service components
1) Install the required packages

yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine \
python-heatclient -y

Back up the original configuration files <omitted>

2) Edit /etc/heat/heat.conf and complete the following steps

a. In the [database] section, configure database access

[database]
...
connection = mysql://heat:heatDB@controller/heat

b. In the [DEFAULT] section, configure RabbitMQ message queue access

[DEFAULT]
...
rpc_backend = heat.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_password = guest

c. In the [keystone_authtoken] and [ec2authtoken] sections, configure Identity service access

[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = heat
admin_password = heat

[ec2authtoken]
...
auth_uri = http://controller:5000/v2.0

d. In the [DEFAULT] section, configure the metadata and wait condition URLs

[DEFAULT]
...
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition

e. [Optional] In the [DEFAULT] section, configure the notification driver

[DEFAULT]
...
notification_driver = heat.openstack.common.notifier.rpc_notifier

f. [Optional] In the [DEFAULT] section, enable verbose logging

[DEFAULT]
...
verbose = True
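If you rebuild this environment often, the edits above can be scripted instead of applied by hand. A minimal sketch with Python's stdlib configparser, writing to a scratch file rather than the real /etc/heat/heat.conf (the scratch filename and the idea of scripting the edit are mine; the option values are the ones listed above):

```python
import configparser

# Interpolation is disabled because OpenStack config values may contain
# literal '%' sequences (e.g. %(tenant_id)s in endpoint URLs).
cfg = configparser.ConfigParser(interpolation=None)
cfg["DEFAULT"] = {
    "rpc_backend": "heat.openstack.common.rpc.impl_kombu",
    "rabbit_host": "controller",
    "rabbit_password": "guest",
    "heat_metadata_server_url": "http://controller:8000",
    "heat_waitcondition_server_url": "http://controller:8000/v1/waitcondition",
    "verbose": "True",
}
cfg["database"] = {"connection": "mysql://heat:heatDB@controller/heat"}
cfg["keystone_authtoken"] = {
    "auth_uri": "http://controller:5000/v2.0",
    "identity_uri": "http://controller:35357",
    "admin_tenant_name": "service",
    "admin_user": "heat",
    "admin_password": "heat",
}
cfg["ec2authtoken"] = {"auth_uri": "http://controller:5000/v2.0"}

# Scratch copy -- review it, then merge into /etc/heat/heat.conf yourself.
with open("heat.conf.sample", "w") as fh:
    cfg.write(fh)

# Read it back to confirm the settings landed.
check = configparser.ConfigParser(interpolation=None)
check.read("heat.conf.sample")
print(check["database"]["connection"])  # mysql://heat:heatDB@controller/heat
```

Note that this writes a file containing only the options shown; a real heat.conf ships with many commented defaults, so on a live system you would edit the existing file (or use a tool such as openstack-config) rather than overwrite it.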
3. Finalize the installation
1) Populate the heat database tables

su -s /bin/sh -c "heat-manage db_sync" heat

2) Enable the services at boot, then start them and check their status

systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service \
openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service \
openstack-heat-engine.service
systemctl status openstack-heat-api.service openstack-heat-api-cfn.service \
openstack-heat-engine.service
4. Verify operation
1) Load the demo credentials

source demorc.sh

2) Create the test template test-stack.yml

heat_template_version: 2013-05-23
description: Test Template

parameters:
  ImageID:
    type: string
    description: Image used to boot a server
  NetID:
    type: string
    description: Network ID for the server

resources:
  server1:
    type: OS::Nova::Server
    properties:
      name: "Test server"
      image: { get_param: ImageID }
      flavor: "m1.tiny"
      networks:
      - network: { get_param: NetID }

outputs:
  server1_private_ip:
    description: IP address of the server in the private network
    value: { get_attr: [ server1, first_address ] }

3) Create a stack from the template

heat stack-create -f test-stack.yml \
-P "ImageID=cirros-0.3.3-x86_64;NetID=$NET_ID" testStack
+--------------------------------------+------------+--------------------+----------------------+
| id                                   | stack_name | stack_status       | creation_time        |
+--------------------------------------+------------+--------------------+----------------------+
| ea158cfe-2d70-4f75-9600-8a46f6b2b2ee | testStack  | CREATE_IN_PROGRESS | 2015-01-14T03:06:35Z |
+--------------------------------------+------------+--------------------+----------------------+

4) Verify that the stack was created successfully

heat stack-list
+--------------------------------------+------------+-----------------+----------------------+
| id                                   | stack_name | stack_status    | creation_time        |
+--------------------------------------+------------+-----------------+----------------------+
| ea158cfe-2d70-4f75-9600-8a46f6b2b2ee | testStack  | CREATE_COMPLETE | 2015-01-14T03:06:35Z |
+--------------------------------------+------------+-----------------+----------------------+

nova list
+--------------------------------------+-------------+--------+------------+-------------+------------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks               |
+--------------------------------------+-------------+--------+------------+-------------+------------------------+
| 96ea3757-4d8c-4606-8759-f07f27980c94 | Test server | ACTIVE | -          | Running     | demo-net=192.168.100.9 |
+--------------------------------------+-------------+--------+------------+-------------+------------------------+
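The stack-create call assumes $NET_ID already holds the demo network's id (the official Juno guide typically populates it from `nova net-list`), and the -P option packs several parameters into one semicolon-separated key=value string. A minimal sketch of how such a string decomposes into parameters (a hypothetical helper for illustration, not python-heatclient's actual parser):

```python
def parse_parameters(arg):
    """Split a heat -P argument like "Key1=v1;Key2=v2" into a dict."""
    params = {}
    for pair in arg.split(";"):
        # partition() keeps any '=' inside the value intact.
        key, _, value = pair.partition("=")
        params[key.strip()] = value.strip()
    return params

# The parameter string used above, with $NET_ID left as a placeholder
print(parse_parameters("ImageID=cirros-0.3.3-x86_64;NetID=$NET_ID"))
# {'ImageID': 'cirros-0.3.3-x86_64', 'NetID': '$NET_ID'}
```

Because the shell expands $NET_ID before heat ever sees it, the command fails with an empty NetID if the variable was never set, so export it first.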
5. Back up the virtual machine image
a. Shut down the virtual machine

shutdown -h now
#or: virsh destroy Juno

b. Back up the virtual machine image

cp juno.img juno_heat.img

c. Start the virtual machine again, ready for the next section

virsh start juno

Section 11: Summary

This concludes the full installation and configuration walkthrough; I hope it is useful to readers. As noted at the outset, OpenStack Juno has 11 core components, of which this test deployed 8. The components not covered are the object storage service (Swift), the database service (Trove), and the newly promoted big data service (Sahara); their deployment will be covered in follow-up articles.

The 8 core components deployed here run well, but the setup has several limitations:

  1. No service HA was configured, including the HA modes that OpenStack services provide themselves, such as the active-active mode of dhcp-agent and the active-passive mode of l3-agent;
  2. Glance uses the file backend, backed by a directory on the system disk, which is not production-grade; other storage backends or redundancy schemes should be considered;
  3. Cinder uses the LVM backend; various storage backends are possible, and a later article will integrate the Ceph distributed storage system as one of Cinder's backends;
  4. Nova also stores instance data in a local disk directory; a distributed file system such as GlusterFS or NFS, or a unified storage system such as Ceph, could be used instead.

At the beginning of this document the yum package cache was enabled so that the cached packages can be used to build a custom OpenStack Juno yum repository. Such a repository makes it possible to install and configure OpenStack in environments without Internet access, saves bandwidth, speeds up installation, and pins the package versions. The next post will describe how to build an OpenStack Juno yum repository for CentOS 7 on CentOS 7.
