Installing OpenStack Ocata on CentOS 7.3


I. Environment

    Deploying OpenStack on CentOS 7.3 following the official guide requires only a controller node and a compute node; the network node is installed on the same host as the controller node. Problems that kept coming up during installation, and their fixes, are called out inline below (they were marked in red in the original post).

Official guide: https://docs.openstack.org/project-install-guide/ocata/rdo-services.html

(1) Network

     The controller node and the compute node each need two network interfaces: one for the management network and one for the external network. The interfaces are configured as follows:

Controller node:

   Management network:  IP address 10.0.0.11,    netmask 255.255.255.0, default gateway 10.0.0.1

   External network:    IP address 10.190.16.40, netmask 255.255.255.0, default gateway 10.190.16.1

Compute node:

   Management network:  IP address 10.0.0.31,    netmask 255.255.255.0, default gateway 10.0.0.1

   External network:    IP address 10.190.16.41, netmask 255.255.255.0, default gateway 10.190.16.1
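As a reference, here is a minimal sketch of the controller's management-interface configuration, assuming the interface is named eth0 (the interface name and file name depend on your hardware):

   # /etc/sysconfig/network-scripts/ifcfg-eth0 -- controller, management network
   TYPE=Ethernet
   BOOTPROTO=static
   DEVICE=eth0
   ONBOOT=yes
   IPADDR=10.0.0.11
   NETMASK=255.255.255.0
   GATEWAY=10.0.0.1

The external interface is configured the same way with the 10.190.16.x values above.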

(2) Network Time Protocol (NTP)

Controller node:

      # yum install chrony

1. Edit the /etc/chrony.conf file and add, change, or remove the following key as needed for your environment:

    server NTP_SERVER iburst

    Replace NTP_SERVER with the hostname or IP address of an NTP server. The configuration supports multiple server keys.

2. To allow other nodes to connect to the chrony daemon on the controller node, add the following key to /etc/chrony.conf:

    allow 10.0.0.0/24
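Putting the two keys together, the relevant part of the controller's /etc/chrony.conf might look like this (the CentOS pool servers are only an example; use the NTP servers available in your environment):

    server 0.centos.pool.ntp.org iburst
    server 1.centos.pool.ntp.org iburst
    allow 10.0.0.0/24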

3. Start the NTP service and configure it to start when the system boots:

    # systemctl enable chronyd.service

    # systemctl start chronyd.service

Compute node:

      # yum install chrony

1. Edit the /etc/chrony.conf file and comment out all server lines except one, changing it to reference the controller node:

   server controller iburst

2. Start the NTP service and configure it to start when the system boots:

   # systemctl enable chronyd.service

   # systemctl start chronyd.service

Verify operation:

1. Run this command on the controller node:

    chronyc sources

 

     The Name/IP address column should show the hostname or IP address of the NTP server. The S column should show * in front of the upstream server that the NTP service is currently synchronized with.

2. Run the same command on all other nodes:

   chronyc sources

 

The Name/IP address column should show the hostname of the controller node.
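For example, on a compute node healthy output looks roughly like this (the timing values are illustrative):

   MS Name/IP address         Stratum Poll Reach LastRx Last sample
   ===============================================================================
   ^* controller                    3    9   377    421    +15us[  -87us] +/-  15ms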

(3) OpenStack packages (all nodes)

1. Install the Ocata repository package:

  # yum install centos-release-openstack-ocata

2. Install the OpenStack client:

  # yum install python-openstackclient

3. RHEL and CentOS enable SELinux by default. Install the openstack-selinux package to automatically manage the security policies for OpenStack services:

  # yum install openstack-selinux

(4) SQL database (controller node)

1. Install the packages:

  # yum install mariadb mariadb-server python2-PyMySQL

2. Create and edit /etc/my.cnf.d/openstack.cnf:

    [mysqld]

    bind-address = 10.0.0.11

    default-storage-engine = innodb

    innodb_file_per_table = on

    max_connections = 4096

    collation-server = utf8_general_ci

    character-set-server = utf8

3. Start the database service and configure it to start at boot:

   # systemctl enable mariadb.service

   # systemctl start mariadb.service

4. Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account:

   # mysql_secure_installation

(5) Message queue (controller node)

1. Install the package:

   # yum install rabbitmq-server

2. Start the message queue service and configure it to start when the system boots:

   # systemctl enable rabbitmq-server.service

   # systemctl start rabbitmq-server.service

3. Add the openstack user (replace RABBIT_PASS with a suitable password; it must match the transport_url settings used later):

  # rabbitmqctl add_user openstack RABBIT_PASS

4. Permit configuration, write, and read access for the openstack user:

   # rabbitmqctl set_permissions openstack ".*" ".*" ".*"
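Optionally, confirm the user and its permissions; both are standard rabbitmqctl subcommands:

   # rabbitmqctl list_users
   # rabbitmqctl list_permissions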

(6) Memcached (controller node)

1. Install the packages:

# yum install memcached python-memcached

2. Configure /etc/sysconfig/memcached:

   OPTIONS="-l 127.0.0.1,::1" 修改127.0.0.1为控制节点的管理IP 10.0.0.11

   后面没有controller,要不后面网页打不开

3. Start the Memcached service and configure it to start at boot:

   # systemctl enable memcached.service

   # systemctl start memcached.service

 

II. Identity service (keystone)

(1) Install and configure

1. Create the database:

mysql -u root -p

Create the keystone database:

CREATE DATABASE keystone;

 

Grant proper access to the keystone database:

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \

  IDENTIFIED BY 'KEYSTONE_DBPASS';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \

  IDENTIFIED BY 'KEYSTONE_DBPASS';

 

2. Install the packages:

   # yum install openstack-keystone httpd mod_wsgi

3. Configure /etc/keystone/keystone.conf (the options below can simply be added at the top of the file):

   [database]

   # ...

   connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

 

   [token]

   # ...

   provider = fernet

4. Populate the Identity service database:

   # su -s /bin/sh -c "keystone-manage db_sync" keystone

5. Initialize the Fernet key repositories:

   # keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

   # keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

6. Bootstrap the Identity service:

   # keystone-manage bootstrap --bootstrap-password admin \

    --bootstrap-admin-url http://controller:35357/v3/ \

    --bootstrap-internal-url http://controller:5000/v3/ \

    --bootstrap-public-url http://controller:5000/v3/ \

    --bootstrap-region-id RegionOne

The official command uses ADMIN_PASS as a placeholder for a suitable password; here the admin user is created with the literal password admin.

7. Configure the Apache HTTP server. Edit /etc/httpd/conf/httpd.conf and set:

   ServerName controller

   Create a symbolic link:

   # ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

8. Start the Apache HTTP service and configure it to start when the system boots:

   # systemctl enable httpd.service

   # systemctl start httpd.service

   Enter the external-network IP in a browser to check that the Apache test page loads. If it does not, the firewall is probably blocking port 80; open port 80 and retry.
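With firewalld (the CentOS 7 default), for example, port 80 can be opened like this:

   # firewall-cmd --permanent --add-port=80/tcp
   # firewall-cmd --reload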

9. Set environment variables:

   export OS_USERNAME=admin

   export OS_PASSWORD=admin

   export OS_PROJECT_NAME=admin

   export OS_USER_DOMAIN_NAME=Default

   export OS_PROJECT_DOMAIN_NAME=Default

   export OS_AUTH_URL=http://controller:35357/v3

   export OS_IDENTITY_API_VERSION=3

The OS_PASSWORD value must match the bootstrap password set in step 6 (admin here).

(2) Create a domain, projects, users, and roles

1. Create the service project:

   $ openstack project create --domain default \

     --description "Service Project" service

2. Create the demo project:

   $ openstack project create --domain default \

     --description "Demo Project" demo

3. Create the demo user:

   $ openstack user create --domain default \

     --password-prompt demo

4. Create the user role:

   $ openstack role create user

5. Add the user role to the demo project and user:

   $ openstack role add --project demo --user demo user

Later projects, users, and roles are created the same way, without further commentary.

(3) Verify operation

1. For security reasons, disable the temporary authentication token mechanism:

   Edit the /etc/keystone/keystone-paste.ini file and remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections.

2. Unset the OS_TOKEN and OS_URL environment variables:

   $ unset OS_TOKEN OS_URL

3. As the admin user, request an authentication token:

   $ openstack --os-auth-url http://controller:35357/v3 \

    --os-project-domain-name default --os-user-domain-name default \

    --os-project-name admin --os-username admin token issue

As the demo user, request an authentication token:

$ openstack --os-auth-url http://controller:5000/v3 \

 --os-project-domain-name default --os-user-domain-name default \

 --os-project-name demo --os-username demo token issue

(4) Create OpenStack client environment scripts

1. Create and edit the admin-openrc file with the following content (replace ADMIN_PASS with the password set in section (1), step 6):

   

export OS_PROJECT_DOMAIN_NAME=default

export OS_USER_DOMAIN_NAME=default

export OS_PROJECT_NAME=admin

export OS_USERNAME=admin

export OS_PASSWORD=ADMIN_PASS

export OS_AUTH_URL=http://controller:35357/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

2. Create and edit the demo-openrc file with the following content (replace DEMO_PASS with the demo user's password):

export OS_PROJECT_DOMAIN_NAME=Default

export OS_USER_DOMAIN_NAME=Default

export OS_PROJECT_NAME=demo

export OS_USERNAME=demo

export OS_PASSWORD=DEMO_PASS

export OS_AUTH_URL=http://controller:5000/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

3. Use the scripts:

   $ . admin-openrc

   $ openstack token issue

III. Image service (glance)

(1) Create the database

1. mysql -u root -p

2. CREATE DATABASE glance;

3. GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \

    IDENTIFIED BY 'GLANCE_DBPASS';

    GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \

    IDENTIFIED BY 'GLANCE_DBPASS';

(2) Create the service credentials and Image service API endpoints

1. $ . admin-openrc

   $ openstack user create --domain default --password-prompt glance

   $ openstack role add --project service --user glance admin

   $ openstack service create --name glance \

     --description "OpenStack Image" image

2. Create the Image service API endpoints:

   $ openstack endpoint create --region RegionOne \

     image public http://controller:9292

 

   $ openstack endpoint create --region RegionOne \

     image internal http://controller:9292

   

   $ openstack endpoint create --region RegionOne \

     image admin http://controller:9292

(3) Install and configure

1. Install the package:

   # yum install openstack-glance

2. Configure /etc/glance/glance-api.conf (the options can simply be added at the top of the file):

[database]

# ...

connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

 

[keystone_authtoken]

# ...

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = glance

password = GLANCE_PASS

#Replace GLANCE_PASS with the password chosen for the glance user

[paste_deploy]

# ...

flavor = keystone

 

[glance_store]

# ...

stores = file,http

default_store = file

filesystem_store_datadir = /var/lib/glance/images/

3. Configure /etc/glance/glance-registry.conf:

[database]

# ...

connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]

# ...

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = glance

password = GLANCE_PASS

#Replace GLANCE_PASS with the password chosen for the glance user.

 

[paste_deploy]

# ...

flavor = keystone

 

4. Populate the Image service database:

# su -s /bin/sh -c "glance-manage db_sync" glance

5. To finish the installation, start the Image services and configure them to start at boot:

  # systemctl enable openstack-glance-api.service \

   openstack-glance-registry.service

  # systemctl start openstack-glance-api.service \

   openstack-glance-registry.service

(4) Verify operation

 

1. $ . admin-openrc

2. Download a test image:

   $ wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

3. Upload the image to the Image service using the QCOW2 disk format and bare container format, and make it public so that all projects can access it:

   $ openstack image create "cirros" \

    --file cirros-0.3.5-x86_64-disk.img \

    --disk-format qcow2 --container-format bare \

    --public

4. Confirm the upload and validate the image attributes:

   $ openstack image list
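If the upload succeeded, the image should be listed as active; the output looks roughly like this (the ID will differ):

   +--------------------------------------+--------+--------+
   | ID                                   | Name   | Status |
   +--------------------------------------+--------+--------+
   | 38047887-61a7-41ea-9b49-27987d5e8bb9 | cirros | active |
   +--------------------------------------+--------+--------+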

IV. Compute service (nova)

Install the controller node first, following the official guide step by step.

(1) Create the databases

1. mysql -u root -p

CREATE DATABASE nova_api;

CREATE DATABASE nova;

CREATE DATABASE nova_cell0;

 GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \

  IDENTIFIED BY 'NOVA_DBPASS';

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \

  IDENTIFIED BY 'NOVA_DBPASS';

 

 GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \

  IDENTIFIED BY 'NOVA_DBPASS';

 GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \

  IDENTIFIED BY 'NOVA_DBPASS';

 

 

 GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \

  IDENTIFIED BY 'NOVA_DBPASS';

 GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \

  IDENTIFIED BY 'NOVA_DBPASS';

 

2. Create the service credentials and the Compute and Placement API endpoints:

    $ . admin-openrc

    $ openstack user create --domain default --password-prompt nova

    $ openstack role add --project service --user nova admin

    $ openstack service create --name nova --description "OpenStack Compute" compute

    $ openstack endpoint create --region RegionOne \

     compute public http://controller:8774/v2.1

    $ openstack endpoint create --region RegionOne \

      compute internal http://controller:8774/v2.1

    $ openstack endpoint create --region RegionOne \

      compute admin http://controller:8774/v2.1

 

    $ openstack user create --domain default --password-prompt placement

    $ openstack role add --project service --user placement admin

    $ openstack service create --name placement --description "Placement API" placement

    $ openstack endpoint create --region RegionOne placement public \

      http://controller/placement

    $ openstack endpoint create --region RegionOne placement internal \

      http://controller/placement

    $ openstack endpoint create --region RegionOne placement admin \

      http://controller/placement

(2) Install and configure the controller node

     1. Install the packages:

        

     # yum install openstack-nova-api openstack-nova-conductor \

       openstack-nova-console openstack-nova-novncproxy \

      openstack-nova-scheduler openstack-nova-placement-api

  2. Configure /etc/nova/nova.conf, adding the following to the existing configuration:

     

[DEFAULT]

# ...

enabled_apis = osapi_compute,metadata

transport_url = rabbit://openstack:RABBIT_PASS@controller

my_ip = 10.0.0.11

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

 

[api_database]

# ...

connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

 

[database]

# ...

connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

 

[api]

# ...

auth_strategy = keystone

 

[keystone_authtoken]

# ...

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = nova

 

[vnc]

enabled = true

# ...

vncserver_listen = $my_ip

vncserver_proxyclient_address = $my_ip

 

[glance]

# ...

api_servers = http://controller:9292

 

[oslo_concurrency]

# ...

lock_path = /var/lib/nova/tmp

 

[placement]

# ...

os_region_name = RegionOne

project_domain_name = Default

project_name = service

auth_type = password

user_domain_name = Default

auth_url = http://controller:35357/v3

username = placement

password = placement

 

3. Configure /etc/httpd/conf.d/00-nova-placement-api.conf, adding the following (this works around a packaging bug that blocks access to the Placement API):

  <Directory /usr/bin>

    <IfVersion >= 2.4>

       Require all granted

    </IfVersion>

    <IfVersion < 2.4>

       Order allow,deny

       Allow from all

    </IfVersion>

  </Directory>
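The official guide restarts httpd after this change so that the new directory permissions take effect:

   # systemctl restart httpd.service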

4. Populate the nova-api database:

   # su -s /bin/sh -c "nova-manage api_db sync" nova

5. Register the cell0 database:

   # su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

6. Create the cell1 cell:

   # su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

7. Populate the nova database:

   # su -s /bin/sh -c "nova-manage db sync" nova

8. Verify that cell0 and cell1 are registered correctly:

   # nova-manage cell_v2 list_cells

   

9. Finish the installation by enabling and starting the services:

   # systemctl enable openstack-nova-api.service \

     openstack-nova-consoleauth.service openstack-nova-scheduler.service \

     openstack-nova-conductor.service openstack-nova-novncproxy.service

    # systemctl start openstack-nova-api.service \

     openstack-nova-consoleauth.service openstack-nova-scheduler.service \

     openstack-nova-conductor.service openstack-nova-novncproxy.service

(3) Install and configure the compute node

1. Install the package:

   # yum install openstack-nova-compute

2. Configure /etc/nova/nova.conf:

   [DEFAULT]

   enabled_apis = osapi_compute,metadata

   transport_url = rabbit://openstack:RABBIT_PASS@controller

   my_ip = 10.0.0.31

   use_neutron = True

   firewall_driver = nova.virt.firewall.NoopFirewallDriver

   instance_usage_audit = True

   instance_usage_audit_period = hour

   notify_on_state_change = vm_and_task_state

   notification_driver = nova.openstack.common.notifier.rpc_notifier

   notification_driver = ceilometer.compute.nova_notifier

 

   [vnc]

   enabled = True

   vncserver_listen = 0.0.0.0

   vncserver_proxyclient_address = 10.0.0.31

   novncproxy_base_url = http://controller:6080/vnc_auto.html

 

   [api]

   auth_strategy = keystone

 

   [keystone_authtoken]

   auth_uri = http://controller:5000

   auth_url = http://controller:35357

   memcached_servers = controller:11211

   auth_type = password

   project_domain_name = default

   user_domain_name = default

   project_name = service

   username = nova

   password = nova

 

   [glance]

   api_servers = http://controller:9292

 

   [oslo_concurrency]

   lock_path = /var/lib/nova/tmp

 

   [placement]

   os_region_name = RegionOne

   project_domain_name = Default

   project_name = service

   auth_type = password

   user_domain_name = Default

   auth_url = http://controller:35357/v3

   username = placement

   password = placement

3. Determine whether the compute node supports hardware acceleration for virtual machines:

   $ egrep -c '(vmx|svm)' /proc/cpuinfo

   If the command returns a value less than one, the node does not support hardware acceleration; in that case add virt_type = qemu to the [libvirt] section of /etc/nova/nova.conf, as shown below.
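That is, per the official guide, the relevant section would read:

   [libvirt]
   virt_type = qemu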

4. Finish the installation:

   # systemctl enable libvirtd.service openstack-nova-compute.service

   # systemctl start libvirtd.service openstack-nova-compute.service

   The service may hang at startup with no response for five or six minutes; the log shows that port 5672 is unreachable.

   Fix: enable iptables and add the following rule on the rabbitmq server (the controller) to open the rabbitmq port (5672) and allow other hosts to reach it:

    # iptables -I INPUT -p tcp --dport 5672 -j ACCEPT   # add the rule
    # service iptables save                             # save the rules (requires the iptables-services package)
    # service iptables restart                          # restart iptables to apply the rule

(4) Add the compute node to the cell database (run on the controller node)

   If the Placement API does not respond, the following sequence resolved it (stop firewalld, open ports 8778 and 80 in iptables, and restart httpd):

   systemctl stop firewalld.service
   iptables -F
   iptables -L -n -v
   iptables -I INPUT -p tcp --dport 8778 -j ACCEPT
   iptables -I OUTPUT -p tcp --dport 8778 -j ACCEPT
   iptables -L -n -v
   /etc/init.d/httpd status
   iptables -A OUTPUT -p tcp --sport 80 -j ACCEPT
   iptables -A INPUT -p tcp --dport 80 -j ACCEPT
   iptables -L
   service httpd restart      # this was the key step

   Then restart all compute-related services.

1. Confirm the compute node host appears in the database:

   $ . admin-openrc

   $ openstack hypervisor list   

   

2. Discover compute hosts:

   # su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

(5) Verify operation

      $ . admin-openrc

      $ openstack compute service list

      

V. Networking service (neutron)

(1) Create the database

     mysql -u root -p

CREATE DATABASE neutron;

 

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \

  IDENTIFIED BY 'NEUTRON_DBPASS';

 GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \

  IDENTIFIED BY 'NEUTRON_DBPASS';

(2) Create the neutron user, service, and API endpoints

     $ openstack user create --domain default --password-prompt neutron

     $ openstack role add --project service --user neutron admin

     $ openstack service create --name neutron \

       --description "OpenStack Networking" network

     

     $ openstack endpoint create --region RegionOne \

      network public http://controller:9696

     $ openstack endpoint create --region RegionOne \

      network internal http://controller:9696    

     $ openstack endpoint create --region RegionOne \

      network admin http://controller:9696

(3) Install and configure networking on the controller node

     The self-service (private) network option is chosen here; it also allows instances to attach to the public network.

1. Install the packages:

   # yum install openstack-neutron openstack-neutron-ml2 \

     openstack-neutron-openvswitch ebtables

     (Open vSwitch is used here; the official guide uses the Linux bridge agent.)

 

2. Configure /etc/neutron/neutron.conf:

 [database]

# ...

connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

[DEFAULT]

# ...

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = true

transport_url = rabbit://openstack:RABBIT_PASS@controller

auth_strategy = keystone

 

[keystone_authtoken]

# ...

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = neutron

 

[DEFAULT]

# ...

notify_nova_on_port_status_changes = true

notify_nova_on_port_data_changes = true

 

[nova]

# ...

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = nova

password = nova

 

[oslo_concurrency]

# ...

lock_path = /var/lib/neutron/tmp

3. Configure /etc/neutron/plugins/ml2/openvswitch_agent.ini:

Add to the [agent] section:

  tunnel_types = vxlan

  l2_population = True

 

Add to the [ovs] section:

local_ip = 10.0.0.11

bridge_mappings = external:br-ex

After this configuration, be sure to create the br-ex bridge:

   # ovs-vsctl add-br br-ex

If creating the bridge fails, start the ovsdb database:

   # sudo /usr/share/openvswitch/scripts/ovs-ctl start

If Open vSwitch still will not start cleanly, the following recovery worked: stop the services, kill any leftover OVS processes, move the old database aside, and restart:

   ps aux | grep openvswitch
   cd /etc/openvswitch/
   ll
   /bin/systemctl stop openvswitch.service
   /bin/systemctl stop ovsdb-server
   ps aux | grep openvswitch
   kill -9 35506                # kill any remaining OVS-related processes (use the PIDs shown by ps)
   mv conf.db conf.db.bk        # move the old database out of the way
   /bin/systemctl start ovsdb-server openvswitch.service
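Note that bridge_mappings = external:br-ex also implies the physical external NIC should be attached to br-ex. Assuming the external interface is named eth1 (adjust to your system), that would be:

   # ovs-vsctl add-port br-ex eth1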

 

 

4. Configure /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]

# ...

type_drivers = flat,vlan,vxlan,gre

tenant_network_types = vxlan

mechanism_drivers = openvswitch,l2population

extension_drivers = port_security

 

[ml2_type_flat]

# ...

flat_networks = external

#Unsure whether this should be default, provider, or external; a later network problem seemed to be fixed by setting it to external

[ml2_type_vxlan]

# ...

vni_ranges = 1:1000

 

[securitygroup]

# ...

enable_ipset = true

 

5. Configure /etc/neutron/l3_agent.ini:

   [DEFAULT]
   interface_driver = openvswitch

 

6. Configure /etc/neutron/dhcp_agent.ini:

   [DEFAULT]

   interface_driver = openvswitch

   dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

   enable_isolated_metadata = true

 

7. Configure /etc/neutron/metadata_agent.ini (replace METADATA_SECRET with a suitable secret):

   [DEFAULT]

   # ...

   nova_metadata_ip = controller

   metadata_proxy_shared_secret = METADATA_SECRET

 

8. Configure /etc/nova/nova.conf, adding the following (METADATA_SECRET must match the value set in metadata_agent.ini):

   [neutron]

   # ...

   url = http://controller:9696

   auth_url = http://controller:35357

   auth_type = password

   project_domain_name = default

   user_domain_name = default

   region_name = RegionOne

   project_name = service

   username = neutron

   password = neutron

   service_metadata_proxy = true

   metadata_proxy_shared_secret = METADATA_SECRET

 

9. The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If the link does not exist, create it with the following command:

   # ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

10. Populate the database:

   # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \

     --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

11. Restart the Compute API service:

   # systemctl restart openstack-nova-api.service

12. Start the Networking services and configure them to start when the system boots:

# systemctl enable neutron-server.service \

  neutron-openvswitch-agent.service  neutron-dhcp-agent.service \

  neutron-metadata-agent.service  neutron-l3-agent.service

# systemctl start neutron-server.service \

  neutron-openvswitch-agent.service  neutron-dhcp-agent.service \

  neutron-metadata-agent.service  neutron-l3-agent.service

 

(4) Install and configure networking on the compute node

1. Install the packages:

  # yum install openstack-neutron-openvswitch ebtables ipset

2. Configure /etc/neutron/neutron.conf:

   [DEFAULT]

   # ...

   transport_url = rabbit://openstack:RABBIT_PASS@controller

   auth_strategy = keystone

 

   [keystone_authtoken]

   auth_uri = http://controller:5000

   auth_url = http://controller:35357

   memcached_servers = controller:11211

   auth_type = password

   project_domain_name = default

   user_domain_name = default

   project_name = service

   username = neutron

   password = neutron

 

 

   [oslo_concurrency]

   lock_path = /var/lib/neutron/tmp

3. Configure /etc/neutron/plugins/ml2/openvswitch_agent.ini:

   [agent]

   tunnel_types = gre,vxlan

   l2_population = True

 

   [ovs]

   local_ip = 10.0.0.31

   bridge_mappings =

 

   [securitygroup]

   firewall_driver = iptables_hybrid

4. Configure /etc/nova/nova.conf, adding the following:

   [neutron]

   url = http://controller:9696

   auth_url = http://controller:35357

   auth_type = password

   project_domain_name = default

   user_domain_name = default

   region_name = RegionOne

   project_name = service

   username = neutron

   password = neutron

5. Restart the Compute service:

   # systemctl restart openstack-nova-compute.service

6. Start the Open vSwitch agent and configure it to start at boot:

   # systemctl enable neutron-openvswitch-agent.service

   # systemctl start neutron-openvswitch-agent.service

(5) Verify operation

   $ . admin-openrc

   $ openstack network agent list
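With this layout, the listing should show the DHCP, L3, metadata, and Open vSwitch agents on the controller plus an Open vSwitch agent on the compute node, all alive; trimmed, illustrative output:

   +----+--------------------+------------+-------+-------+---------------------------+
   | ID | Agent Type         | Host       | Alive | State | Binary                    |
   +----+--------------------+------------+-------+-------+---------------------------+
   | .. | DHCP agent         | controller | :-)   | UP    | neutron-dhcp-agent        |
   | .. | L3 agent           | controller | :-)   | UP    | neutron-l3-agent          |
   | .. | Metadata agent     | controller | :-)   | UP    | neutron-metadata-agent    |
   | .. | Open vSwitch agent | controller | :-)   | UP    | neutron-openvswitch-agent |
   | .. | Open vSwitch agent | compute    | :-)   | UP    | neutron-openvswitch-agent |
   +----+--------------------+------------+-------+-------+---------------------------+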

 

 

VI. Dashboard (horizon)

(1) Install and configure

1. Install the package:

   # yum install openstack-dashboard

2. Configure /etc/openstack-dashboard/local_settings

#The points needing attention are flagged in comments below (marked in red in the original post)

# -*- coding: utf-8 -*-

OPENSTACK_HOST = "controller"

ALLOWED_HOSTS = ['*']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

 

CACHES = {

    'default': {

         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',

         'LOCATION': '10.0.0.11:11211',

#Use the controller's management IP here; writing the hostname controller does not seem to be recognized for this setting

    }

}

#Comment out the other CACHES block further down in the file

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

 

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

 

OPENSTACK_API_VERSIONS = {

    "identity": 3,

    "image": 2,

    "volume": 2,

}

 

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

 

OPENSTACK_NEUTRON_NETWORK = {

#...

#If you copy this block from the official guide, keep the '...' placeholder line above commented out; leaving it uncommented causes an error

    'enable_router': False,

    'enable_quotas': False,

    'enable_distributed_router': False,

    'enable_ha_router': False,

    'enable_lb': False,

    'enable_firewall': False,

    'enable_vpn': False,

    'enable_fip_topology_check': False,

}

 

TIME_ZONE = "PRC"

import os

from django.utils.translation import ugettext_lazy as _

from openstack_dashboard.settings import HORIZON_CONFIG

 

DEBUG = True

#The official guide uses DEBUG = False, but browser verification kept failing; changing it to DEBUG = True made verification pass

3. Restart the web server and the session storage service:

   # systemctl restart httpd.service memcached.service

(2) Verify operation

In a browser, open http://10.190.16.40/dashboard (the controller's external IP).

     

VII. Additional issues

 1. In /etc/nova/nova.conf, set service_token_roles_required = true. (This addresses a nova-api warning rather than an error; whether it has any real impact is unclear, but the log recommends setting it to true.)

 2. The instance console fails with "Failed to connect to server (code: 1006)"; this is a compute-node configuration issue:

   (1) On the compute node: check whether ports 5900-5999 are allowed by the iptables rules:

   iptables -nL | grep 5900
   iptables -nL | grep 5999

   If they are not, allow them (a single rule such as iptables -I INPUT -p tcp --dport 5900:5999 -j ACCEPT covers the whole VNC port range):

   iptables -I INPUT -p tcp --dport 5900 -j ACCEPT
   iptables -I INPUT -p tcp --dport 5999 -j ACCEPT

   (2) On the controller node:

   iptables -nL | grep 6080

   If nothing is returned:

   iptables -I INPUT -p tcp --dport 6080 -j ACCEPT


 

