OpenStack Complete Installation Guide (CentOS 6.2)

Table of Contents

· 1 Experimental Environment

· 2 Architecture and Deployment

· 3 Control Node Installation

  o 3.1 Prerequisites
  o 3.2 NTP Time Service Installation
  o 3.3 MySQL Database Service Installation
  o 3.4 RabbitMQ Message Queue Service Installation
  o 3.5 python-novaclient Library Installation
  o 3.6 Keystone Identity Service Installation
  o 3.7 python-keystoneclient Library Installation
  o 3.8 Swift Object Storage Service Installation
  o 3.9 Glance Image Service Installation
  o 3.10 Nova Compute Service Installation
  o 3.11 Horizon Dashboard Installation
  o 3.12 noVNC Web Access Installation
  o 3.13 Keystone Identity Service Configuration
  o 3.14 Glance Image Service Configuration
  o 3.15 Creating the Glance Database
  o 3.16 Nova Compute Service Configuration
  o 3.17 Swift Object Storage Service Configuration
  o 3.18 Horizon Dashboard Configuration
  o 3.19 noVNC Web Access Configuration

· 4 Compute Node Installation

  o 4.1 Prerequisites
  o 4.2 NTP Time Synchronization Configuration
  o 4.3 python-novaclient Library Installation
  o 4.4 Glance Image Service Installation
  o 4.5 Nova Compute Service Installation
  o 4.6 Nova Compute Service Configuration

1 Experimental Environment

· Hardware:

DELL R710 (x1)

|------+----------------------------------------------------------------|
| CPU  | Intel(R) Xeon(R) CPU E5620 @ 2.40GHz * 2                       |
|------+----------------------------------------------------------------|
| MEM  | 48GB                                                           |
|------+----------------------------------------------------------------|
| DISK | 300GB                                                          |
|------+----------------------------------------------------------------|
| NIC  | Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet * 4 |
|------+----------------------------------------------------------------|

DELL R410 (x1)

|------+----------------------------------------------------------------|
| CPU  | Intel(R) Xeon(R) CPU E5606 @ 2.13GHz * 2                       |
|------+----------------------------------------------------------------|
| MEM  | 8GB                                                            |
|------+----------------------------------------------------------------|
| DISK | 1TB * 4                                                        |
|------+----------------------------------------------------------------|
| NIC  | Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet * 4 |
|------+----------------------------------------------------------------|

· Operating system:

CentOS 6.2 x64

· OpenStack version:

Essex release (2012.1)

2 Architecture and Deployment

· Configuration details:

|-------------------+---------------+--------------+--------------|
| Machine/Hostname  | External IP   | Internal IP  | Used for     |
|-------------------+---------------+--------------+--------------|
| DELL R410/Control | 60.12.206.105 | 192.168.1.2  | Control Node |
| DELL R710/Compute | 60.12.206.99  | 192.168.1.3  | Compute Node |
|-------------------+---------------+--------------+--------------|

The instance network is 10.0.0.0/24 and the floating IP is 60.12.206.110; the instance network is bridged onto the internal NIC, and the network mode is FlatDHCP.
On the control node, /dev/sda is the system disk, /dev/sdb is the nova-volume disk, and /dev/sdc and /dev/sdd are used for Swift storage.

· Server OS installation

1. CentOS 6.2 x64, minimal installation
2. eth0 is used for the external network
3. eth1 is used for the internal network
4. All services listen on the addresses configured in the sections below

3 Control Node Installation

3.1 Prerequisites

· Import third-party package repositories

rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-5.noarch.rpm

rpm -Uvh http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm

· Install dependencies

yum -y install swig libvirt-python libvirt qemu-kvm python-pip gcc make gcc-c++ patch m4 python-devel libxml2-devel libxslt-devel libgsasl-devel openldap-devel sqlite-devel openssl-devel wget telnet gpxe-bootimgs gpxe-roms gpxe-roms-qemu dmidecode git scsi-target-utils kpartx socat vconfig aoetools

rpm -Uvh http://veillard.com/libvirt/6.3/x86_64/dnsmasq-utils-2.48-6.el6.x86_64.rpm

ln -sv /usr/bin/pip-python /usr/bin/pip

· Update the kernel

Check the current kernel version with uname -r; it should be:

2.6.32-220.el6.x86_64

yum -y install kernel kernel-devel

init 6

After the reboot, check the kernel version with uname -r again; it should now be:

2.6.32-220.7.1.el6.x86_64

3.2 NTP Time Service Installation

· Install the NTP time synchronization server

yum install -y ntp

· Edit /etc/ntp.conf and replace its contents with the following:

restrict default ignore
restrict 127.0.0.1
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
server ntp.api.bz
server 127.127.1.0
fudge 127.127.1.0 stratum 10
driftfile /var/lib/ntp/drift
keys /etc/ntp/keys

· Start the ntpd service

/etc/init.d/ntpd start
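
A quick way to confirm that ntpd is running and has picked up its peers (an optional check, not part of the original procedure):

ntpq -p

The output should list ntp.api.bz and the local clock 127.127.1.0 as peers.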

3.3 MySQL Database Service Installation

· Install the MySQL database service

yum install -y mysql-server

· Make the MySQL service listen on the internal NIC IP

sed -i '/symbolic-links=0/a bind-address = 192.168.1.2' /etc/my.cnf

· Start the MySQL database service

/etc/init.d/mysqld start

· Set the MySQL root password to openstack

mysqladmin -uroot password 'openstack'; history -c

· Verify that the service started correctly

Use netstat -ltunp to check that TCP port 3306 is listening.
If the service did not start correctly, check /var/log/mysqld.log for errors.
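
You can also confirm that the new root credentials work (an optional check, not part of the original procedure):

mysql -uroot -popenstack -e 'show databases;'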

3.4 RabbitMQ Message Queue Service Installation

· Install the RabbitMQ message queue service

yum -y install rabbitmq-server

· Start the RabbitMQ message queue service

/etc/init.d/rabbitmq-server start

· Change the default password of the RabbitMQ guest user to openstack

rabbitmqctl change_password guest openstack
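
To confirm the broker is up after the password change (an optional check, not part of the original procedure):

rabbitmqctl status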

3.5 python-novaclient Library Installation

· Download the source tarball

wget https://launchpad.net/nova/essex/2012.1/+download/python-novaclient-2012.1.tar.gz -P /opt

· Install dependencies

yum -y install python-simplejson python-prettytable python-argparse python-nose1.1 python-httplib2 python-virtualenv MySQL-python

· Unpack and install the python-novaclient library

cd /opt
tar xf python-novaclient-2012.1.tar.gz
cd python-novaclient-2012.1
python setup.py install
rm -f ../python-novaclient-2012.1.tar.gz

3.6 Keystone Identity Service Installation

· Download the source tarball

wget https://launchpad.net/keystone/essex/2012.1/+download/keystone-2012.1.tar.gz -P /opt

· Install dependencies

yum install -y python-eventlet python-greenlet python-paste python-passlib

pip install routes==1.12.3 lxml==2.3 pam==0.1.4 passlib sqlalchemy-migrate==0.7.2 PasteDeploy==1.5.0 SQLAlchemy==0.7.3 WebOb==1.0.8

· Unpack and install the Keystone identity service

cd /opt
tar xf keystone-2012.1.tar.gz
cd keystone-2012.1
python setup.py install
rm -f ../keystone-2012.1.tar.gz

3.7 python-keystoneclient Library Installation

· Download the source tarball

wget https://launchpad.net/keystone/essex/2012.1/+download/python-keystoneclient-2012.1.tar.gz -P /opt

· Unpack and install the python-keystoneclient library

cd /opt
tar xf python-keystoneclient-2012.1.tar.gz
cd python-keystoneclient-2012.1
python setup.py install
rm -f ../python-keystoneclient-2012.1.tar.gz

3.8 Swift Object Storage Service Installation

· Download the source tarball

wget https://launchpad.net/swift/essex/1.4.8/+download/swift-1.4.8.tar.gz -P /opt

· Install dependencies

pip install configobj==4.7.1 netifaces==0.6

· Unpack and install the Swift object storage service

cd /opt
tar xf swift-1.4.8.tar.gz
cd swift-1.4.8
python setup.py install
rm -f ../swift-1.4.8.tar.gz

3.9 Glance Image Service Installation

· Download the source tarball

wget https://launchpad.net/glance/essex/2012.1/+download/glance-2012.1.tar.gz -P /opt

· Install dependencies

yum install -y python-anyjson python-kombu m2crypto

pip install xattr==0.6.0 iso8601==0.1.4 pysendfile==2.0.0 pycrypto==2.3 wsgiref boto==2.1.1

· Unpack and install the Glance image service

cd /opt
tar xf glance-2012.1.tar.gz
cd glance-2012.1
python setup.py install
rm -f ../glance-2012.1.tar.gz

3.10 Nova Compute Service Installation

· Download the source tarball

wget https://launchpad.net/nova/essex/2012.1/+download/nova-2012.1.tar.gz -P /opt

· Install dependencies

yum install -y python-amqplib python-carrot python-lockfile python-gflags python-netaddr python-suds python-paramiko python-feedparser

pip install Cheetah==2.4.4 python-daemon==1.5.5 Babel==0.9.6

· Unpack and install the Nova compute service

cd /opt
tar xf nova-2012.1.tar.gz
cd nova-2012.1
python setup.py install
rm -f ../nova-2012.1.tar.gz

3.11 Horizon Dashboard Installation

· Download the source tarball

wget https://launchpad.net/horizon/essex/2012.1/+download/horizon-2012.1.tar.gz -P /opt

· Install dependencies

yum install -y python-django-nose python-dateutil python-cloudfiles python-django python-django-integration-apache httpd

· Unpack and install the Horizon dashboard

cd /opt
tar xf horizon-2012.1.tar.gz
cd horizon-2012.1
python setup.py install
rm -f ../horizon-2012.1.tar.gz

3.12 noVNC Web Access Installation

· Download the source

git clone https://github.com/cloudbuilders/noVNC.git /opt/noVNC

· Install dependencies

yum install -y python-numdisplay

3.13 Keystone Identity Service Configuration

· Create the Keystone database

mysql -uroot -popenstack -e 'create database keystone'

· Create the Keystone configuration directory

mkdir /etc/keystone

· Create the user the Keystone service runs as

useradd -s /sbin/nologin -m -d /var/log/keystone keystone

· In /etc/keystone, create default_catalog.templates as the Keystone service endpoint catalog file, with the following content:

catalog.RegionOne.identity.publicURL = http://60.12.206.105:$(public_port)s/v2.0
catalog.RegionOne.identity.adminURL = http://60.12.206.105:$(admin_port)s/v2.0
catalog.RegionOne.identity.internalURL = http://60.12.206.105:$(public_port)s/v2.0
catalog.RegionOne.identity.name = Identity Service

catalog.RegionOne.compute.publicURL = http://60.12.206.105:8774/v2/$(tenant_id)s
catalog.RegionOne.compute.adminURL = http://60.12.206.105:8774/v2/$(tenant_id)s
catalog.RegionOne.compute.internalURL = http://60.12.206.105:8774/v2/$(tenant_id)s
catalog.RegionOne.compute.name = Compute Service

catalog.RegionOne.volume.publicURL = http://60.12.206.105:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.adminURL = http://60.12.206.105:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.internalURL = http://60.12.206.105:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.name = Volume Service

catalog.RegionOne.ec2.publicURL = http://60.12.206.105:8773/services/Cloud
catalog.RegionOne.ec2.adminURL = http://60.12.206.105:8773/services/Admin
catalog.RegionOne.ec2.internalURL = http://60.12.206.105:8773/services/Cloud
catalog.RegionOne.ec2.name = EC2 Service

catalog.RegionOne.s3.publicURL = http://60.12.206.105:3333
catalog.RegionOne.s3.adminURL = http://60.12.206.105:3333
catalog.RegionOne.s3.internalURL = http://60.12.206.105:3333
catalog.RegionOne.s3.name = S3 Service

catalog.RegionOne.image.publicURL = http://60.12.206.105:9292/v1
catalog.RegionOne.image.adminURL = http://60.12.206.105:9292/v1
catalog.RegionOne.image.internalURL = http://60.12.206.105:9292/v1
catalog.RegionOne.image.name = Image Service

catalog.RegionOne.object_store.publicURL = http://60.12.206.105:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object_store.adminURL = http://60.12.206.105:8080/
catalog.RegionOne.object_store.internalURL = http://60.12.206.105:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object_store.name = Swift Service

· In /etc/keystone, create policy.json as the Keystone policy file, with the following content:

{
    "admin_required": [["role:admin"], ["is_admin:1"]]
}

· In /etc/keystone, create keystone.conf as the Keystone configuration file, with the following content:

[DEFAULT]
public_port = 5000
admin_port = 35357
admin_token = ADMIN
compute_port = 8774
verbose = True
debug = True
log_file = /var/log/keystone/keystone.log
use_syslog = False
syslog_log_facility = LOG_LOCAL0

[sql]
connection = mysql://root:openstack@localhost/keystone
idle_timeout = 30
min_pool_size = 5
max_pool_size = 10
pool_timeout = 200

[identity]
driver = keystone.identity.backends.sql.Identity

[catalog]
driver = keystone.catalog.backends.templated.TemplatedCatalog
template_file = /etc/keystone/default_catalog.templates

[token]
driver = keystone.token.backends.kvs.Token

[policy]
driver = keystone.policy.backends.simple.SimpleMatch

[ec2]
driver = keystone.contrib.ec2.backends.sql.Ec2

[filter:debug]
paste.filter_factory = keystone.common.wsgi:Debug.factory

[filter:token_auth]
paste.filter_factory = keystone.middleware:TokenAuthMiddleware.factory

[filter:admin_token_auth]
paste.filter_factory = keystone.middleware:AdminTokenAuthMiddleware.factory

[filter:xml_body]
paste.filter_factory = keystone.middleware:XmlBodyMiddleware.factory

[filter:json_body]
paste.filter_factory = keystone.middleware:JsonBodyMiddleware.factory

[filter:crud_extension]
paste.filter_factory = keystone.contrib.admin_crud:CrudExtension.factory

[filter:ec2_extension]
paste.filter_factory = keystone.contrib.ec2:Ec2Extension.factory

[filter:s3_extension]
paste.filter_factory = keystone.contrib.s3:S3Extension.factory

[app:public_service]
paste.app_factory = keystone.service:public_app_factory

[app:admin_service]
paste.app_factory = keystone.service:admin_app_factory

[pipeline:public_api]
pipeline = token_auth admin_token_auth xml_body json_body debug ec2_extension s3_extension public_service

[pipeline:admin_api]
pipeline = token_auth admin_token_auth xml_body json_body debug ec2_extension crud_extension admin_service

[app:public_version_service]
paste.app_factory = keystone.service:public_version_app_factory

[app:admin_version_service]
paste.app_factory = keystone.service:admin_version_app_factory

[pipeline:public_version_api]
pipeline = xml_body public_version_service

[pipeline:admin_version_api]
pipeline = xml_body admin_version_service

[composite:main]
use = egg:Paste#urlmap
/v2.0 = public_api
/ = public_version_api

[composite:admin]
use = egg:Paste#urlmap
/v2.0 = admin_api
/ = admin_version_api

· In /etc/init.d/, create a Keystone startup script named keystone, with the following content:

#!/bin/sh
#
# keystone      OpenStack Identity Service
#
# chkconfig: - 20 80
# description: keystone provides APIs to \
#               * Authenticate users and provide a token \
#               * Validate tokens
### END INIT INFO

. /etc/rc.d/init.d/functions

prog=keystone
prog_exec=keystone-all
exec="/usr/bin/$prog_exec"
config="/etc/$prog/$prog.conf"
pidfile="/var/run/$prog/$prog.pid"

[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog

lockfile=/var/lock/subsys/$prog

 

start() {
    [ -x $exec ] || exit 5
    [ -f $config ] || exit 6
    echo -n $"Starting $prog: "
    daemon --user keystone --pidfile $pidfile "$exec --config-file=$config &>/dev/null & echo \$! > $pidfile"
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

 

stop() {
    echo -n $"Stopping $prog: "
    killproc -p $pidfile $prog
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

 

restart(){

    stop

    start

}

 

reload(){

    restart

}

 

force_reload(){

    restart

}

 

rh_status(){

    status -p $pidfile $prog

}

 

rh_status_q(){

    rh_status >/dev/null 2>&1

}

 

case"$1" in

    start)

        rh_status_q && exit 0

        $1

        ;;

    stop)

        rh_status_q || exit 0

        $1

        ;;

    restart)

        $1

        ;;

    reload)

        rh_status_q || exit 7

        $1

       ;;

    force-reload)

        force_reload

        ;;

    status)

        rh_status

        ;;

    condrestart|try-restart)

        rh_status_q || exit 0

        restart

        ;;

    *)

        echo $"Usage: $0{start|stop|status|restart|condrestart|try-restart|reload|force-reload}"

        exit 2

esac

exit $?

· Set up the startup script:

chmod 755 /etc/init.d/keystone
mkdir /var/run/keystone
mkdir /var/lock/keystone
chown keystone:root /var/run/keystone
chown keystone:root /var/lock/keystone

· Start the Keystone service

/etc/init.d/keystone start

· Verify that the service started correctly

Use netstat -ltunp to check that TCP ports 5000 and 35357 are listening.
If the service did not start correctly, check /var/log/keystone/keystone.log for errors.
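
As an extra smoke test (an assumption, not part of the original; the public_version_api pipeline configured above serves this endpoint without authentication), query the version endpoint:

curl -s http://localhost:5000/

This should return a small JSON document describing the v2.0 API.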

· Create keystone_data.sh, the Keystone data initialization script, with the following content:

#!/bin/bash
# Variables set before calling this script:
# SERVICE_TOKEN - aka admin_token in keystone.conf
# SERVICE_ENDPOINT - local Keystone admin endpoint
# SERVICE_TENANT_NAME - name of tenant containing service accounts
# ENABLED_SERVICES - stack.sh's list of services to start
# DEVSTACK_DIR - Top-level DevStack directory

ADMIN_PASSWORD=${ADMIN_PASSWORD:-secrete}
SERVICE_PASSWORD=${SERVICE_PASSWORD:-service}
export SERVICE_TOKEN=ADMIN
export SERVICE_ENDPOINT=http://localhost:35357/v2.0
SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-tenant}

function get_id () {
    echo `$@ | awk '/ id / { print $4 }'`
}

# Tenants
ADMIN_TENANT=$(get_id keystone tenant-create --name=admin)
SERVICE_TENANT=$(get_id keystone tenant-create --name=$SERVICE_TENANT_NAME)
DEMO_TENANT=$(get_id keystone tenant-create --name=demo)
INVIS_TENANT=$(get_id keystone tenant-create --name=invisible_to_admin)

# Users
ADMIN_USER=$(get_id keystone user-create --name=admin \
                                         --pass="$ADMIN_PASSWORD" \
                                         --email=admin@example.com)
DEMO_USER=$(get_id keystone user-create --name=demo \
                                        --pass="$ADMIN_PASSWORD" \
                                        --email=demo@example.com)

# Roles
ADMIN_ROLE=$(get_id keystone role-create --name=admin)
KEYSTONEADMIN_ROLE=$(get_id keystone role-create --name=KeystoneAdmin)
KEYSTONESERVICE_ROLE=$(get_id keystone role-create --name=KeystoneServiceAdmin)
ANOTHER_ROLE=$(get_id keystone role-create --name=anotherrole)

# Add Roles to Users in Tenants
keystone user-role-add --user $ADMIN_USER --role $ADMIN_ROLE --tenant_id $ADMIN_TENANT
keystone user-role-add --user $ADMIN_USER --role $ADMIN_ROLE --tenant_id $DEMO_TENANT
keystone user-role-add --user $DEMO_USER --role $ANOTHER_ROLE --tenant_id $DEMO_TENANT

# TODO(termie): these two might be dubious
keystone user-role-add --user $ADMIN_USER --role $KEYSTONEADMIN_ROLE --tenant_id $ADMIN_TENANT
keystone user-role-add --user $ADMIN_USER --role $KEYSTONESERVICE_ROLE --tenant_id $ADMIN_TENANT

# The Member role is used by Horizon and Swift so we need to keep it:
MEMBER_ROLE=$(get_id keystone role-create --name=Member)
keystone user-role-add --user $DEMO_USER --role $MEMBER_ROLE --tenant_id $DEMO_TENANT
keystone user-role-add --user $DEMO_USER --role $MEMBER_ROLE --tenant_id $INVIS_TENANT

NOVA_USER=$(get_id keystone user-create --name=nova \
                                        --pass="$SERVICE_PASSWORD" \
                                        --tenant_id $SERVICE_TENANT \
                                        --email=nova@example.com)
keystone user-role-add --tenant_id $SERVICE_TENANT \
                       --user $NOVA_USER \
                       --role $ADMIN_ROLE

GLANCE_USER=$(get_id keystone user-create --name=glance \
                                          --pass="$SERVICE_PASSWORD" \
                                          --tenant_id $SERVICE_TENANT \
                                          --email=glance@example.com)
keystone user-role-add --tenant_id $SERVICE_TENANT \
                       --user $GLANCE_USER \
                       --role $ADMIN_ROLE

SWIFT_USER=$(get_id keystone user-create --name=swift \
                                         --pass="$SERVICE_PASSWORD" \
                                         --tenant_id $SERVICE_TENANT \
                                         --email=swift@example.com)
keystone user-role-add --tenant_id $SERVICE_TENANT \
                       --user $SWIFT_USER \
                       --role $ADMIN_ROLE

RESELLER_ROLE=$(get_id keystone role-create --name=ResellerAdmin)
keystone user-role-add --tenant_id $SERVICE_TENANT \
                       --user $NOVA_USER \
                       --role $RESELLER_ROLE

· Create the Keystone database schema

keystone-manage db_sync

· Run the data initialization script

bash keystone_data.sh
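
To confirm the initial tenants and users were created (an optional check, not part of the original; it reuses the same SERVICE_TOKEN/SERVICE_ENDPOINT variables that keystone_data.sh exports):

export SERVICE_TOKEN=ADMIN
export SERVICE_ENDPOINT=http://localhost:35357/v2.0
keystone tenant-list
keystone user-list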

3.14 Glance Image Service Configuration

3.15 Creating the Glance Database

mysql -uroot -popenstack -e 'create database glance'

· Create the Glance configuration directory

mkdir /etc/glance

· Create the user the Glance service runs as

useradd -s /sbin/nologin -m -d /var/log/glance glance

· In /etc/glance, create glance-api.conf as the Glance API configuration file, with the following content:

[DEFAULT]
# Show more verbose log output (sets INFO log level output)
verbose = True

# Show debugging output in logs (sets DEBUG log level output)
debug = True

# Which backend store should Glance use by default if one is not specified
# in a request to add a new image to Glance? Default: 'file'
# Available choices are 'file', 'swift', and 's3'
default_store = file

# Address to bind the API server
bind_host = 0.0.0.0

# Port the API server binds to
bind_port = 9292

# Address to find the registry server
registry_host = 0.0.0.0

# Port the registry server is listening on
registry_port = 9191

# Log to this file. Make sure you do not set the same log
# file for both the API and registry servers!
log_file = /var/log/glance/api.log

# Send logs to syslog (/dev/log) instead of to file specified by `log_file`
use_syslog = False

# ============ Notification System Options =====================

# Notifications can be sent when images are created, updated or deleted.
# There are three methods of sending notifications: logging (via the
# log_file directive), rabbit (via a rabbitmq queue) or noop (no
# notifications sent, the default)
notifier_strategy = noop

# Configuration options if sending notifications via rabbitmq (these are
# the defaults)
rabbit_host = localhost
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = guest
rabbit_password = openstack
rabbit_virtual_host = /
rabbit_notification_topic = glance_notifications

# ============ Filesystem Store Options ========================

# Directory that the Filesystem backend store
# writes image data to
filesystem_store_datadir = /var/lib/glance/images/

# ============ Swift Store Options =============================

# Address where the Swift authentication service lives
swift_store_auth_address = 127.0.0.1:8080/v1.0/

# User to authenticate against the Swift authentication service
swift_store_user = jdoe

# Auth key for the user authenticating against the
# Swift authentication service
swift_store_key = a86850deb2742ec3cb41518e26aa2d89

# Container within the account that the account should use
# for storing images in Swift
swift_store_container = glance

# Do we create the container if it does not exist?
swift_store_create_container_on_put = False

# What size, in MB, should Glance start chunking image files
# and do a large object manifest in Swift? By default, this is
# the maximum object size in Swift, which is 5GB
swift_store_large_object_size = 5120

# When doing a large object manifest, what size, in MB, should
# Glance write chunks to Swift? This amount of data is written
# to a temporary disk buffer during the process of chunking
# the image file, and the default is 200MB
swift_store_large_object_chunk_size = 200

# Whether to use ServiceNET to communicate with the Swift storage servers.
# (If you aren't RACKSPACE, leave this False!)
#
# To use ServiceNET for authentication, prefix hostname of
# `swift_store_auth_address` with 'snet-'.
# Ex. https://example.com/v1.0/ -> https://snet-example.com/v1.0/
swift_enable_snet = False

# ============ S3 Store Options =============================

# Address where the S3 authentication service lives
s3_store_host = 127.0.0.1:8080/v1.0/

# User to authenticate against the S3 authentication service
s3_store_access_key = <20-char AWS access key>

# Auth key for the user authenticating against the
# S3 authentication service
s3_store_secret_key = <40-char AWS secret key>

# Container within the account that the account should use
# for storing images in S3. Note that S3 has a flat namespace,
# so you need a unique bucket name for your glance images. An
# easy way to do this is append your AWS access key to "glance".
# S3 buckets in AWS *must* be lowercased, so remember to lowercase
# your AWS access key if you use it in your bucket name below!
s3_store_bucket = <lowercased 20-char aws access key>glance

# Do we create the bucket if it does not exist?
s3_store_create_bucket_on_put = False

# ============ Image Cache Options ========================

image_cache_enabled = False

# Directory that the Image Cache writes data to
# Make sure this is also set in glance-pruner.conf
image_cache_datadir = /var/lib/glance/image-cache/

# Number of seconds after which we should consider an incomplete image to be
# stalled and eligible for reaping
image_cache_stall_timeout = 86400

# ============ Delayed Delete Options =============================

# Turn on/off delayed delete
delayed_delete = False

# Delayed delete time in seconds
scrub_time = 43200

# Directory that the scrubber will use to remind itself of what to delete
# Make sure this is also set in glance-scrubber.conf
scrubber_datadir = /var/lib/glance/scrubber

· In /etc/glance, create glance-api-paste.ini as the Glance API auth configuration file, with the following content:

[pipeline:glance-api]
# pipeline = versionnegotiation context apiv1app
# NOTE: use the following pipeline for keystone
pipeline = versionnegotiation authtoken context apiv1app

# To enable the Image Cache Management API, replace the pipeline with:
# pipeline = versionnegotiation context imagecache apiv1app
# NOTE: use the following pipeline for keystone auth (with caching)
# pipeline = versionnegotiation authtoken auth-context imagecache apiv1app

[app:apiv1app]
paste.app_factory = glance.common.wsgi:app_factory
glance.app_factory = glance.api.v1.router:API

[filter:versionnegotiation]
paste.filter_factory = glance.common.wsgi:filter_factory
glance.filter_factory = glance.api.middleware.version_negotiation:VersionNegotiationFilter

[filter:cache]
paste.filter_factory = glance.common.wsgi:filter_factory
glance.filter_factory = glance.api.middleware.cache:CacheFilter

[filter:cachemanage]
paste.filter_factory = glance.common.wsgi:filter_factory
glance.filter_factory = glance.api.middleware.cache_manage:CacheManageFilter

[filter:context]
paste.filter_factory = glance.common.wsgi:filter_factory
glance.filter_factory = glance.common.context:ContextMiddleware

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_host = 60.12.206.105
service_port = 5000
service_protocol = http
auth_host = 60.12.206.105
auth_port = 35357
auth_protocol = http
auth_uri = http://60.12.206.105:5000/
admin_tenant_name = tenant
admin_user = glance
admin_password = service

· In /etc/glance, create glance-registry.conf as the Glance registry configuration file, with the following content:

[DEFAULT]
# Show more verbose log output (sets INFO log level output)
verbose = True

# Show debugging output in logs (sets DEBUG log level output)
debug = True

# Address to bind the registry server
bind_host = 0.0.0.0

# Port the registry server binds to
bind_port = 9191

# Log to this file. Make sure you do not set the same log
# file for both the API and registry servers!
log_file = /var/log/glance/registry.log

# Where to store images
filesystem_store_datadir = /var/lib/glance/images

# Send logs to syslog (/dev/log) instead of to file specified by `log_file`
use_syslog = False

# SQLAlchemy connection string for the reference implementation
# registry server. Any valid SQLAlchemy connection string is fine.
# See: http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#sqlalchemy.create_engine
sql_connection = mysql://root:openstack@localhost/glance

# Period in seconds after which SQLAlchemy should reestablish its connection
# to the database.
#
# MySQL uses a default `wait_timeout` of 8 hours, after which it will drop
# idle connections. This can result in 'MySQL Gone Away' exceptions. If you
# notice this, you can lower this value to ensure that SQLAlchemy reconnects
# before MySQL can drop the connection.
sql_idle_timeout = 3600

# Limit the api to return `param_limit_max` items in a call to a container. If
# a larger `limit` query param is provided, it will be reduced to this value.
api_limit_max = 1000

# If a `limit` query param is not provided in an api request, it will
# default to `limit_param_default`
limit_param_default = 25

· In /etc/glance, create glance-registry-paste.ini as the Glance registry auth configuration file, with the following content:

[pipeline:glance-registry]
# pipeline = context registryapp
# NOTE: use the following pipeline for keystone
pipeline = authtoken context registryapp

[app:registryapp]
paste.app_factory = glance.common.wsgi:app_factory
glance.app_factory = glance.registry.api.v1:API

[filter:context]
context_class = glance.registry.context.RequestContext
paste.filter_factory = glance.common.wsgi:filter_factory
glance.filter_factory = glance.common.context:ContextMiddleware

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_host = 60.12.206.105
service_port = 5000
service_protocol = http
auth_host = 60.12.206.105
auth_port = 35357
auth_protocol = http
auth_uri = http://60.12.206.105:5000/
admin_tenant_name = tenant
admin_user = glance
admin_password = service

· In /etc/glance, create policy.json as the Glance policy file, with the following content:

{
    "default": [],
    "manage_image_cache": [["role:admin"]]
}

· In /etc/init.d/, create a Glance API startup script named glance-api, with the following content:

#!/bin/sh
#
# glance-api    OpenStack Image Service API server
#
# chkconfig: - 20 80
# description: OpenStack Image Service (code-named Glance) API server

### BEGIN INIT INFO
# Provides:
# Required-Start: $remote_fs $network $syslog
# Required-Stop: $remote_fs $syslog
# Default-Stop: 0 1 6
# Short-Description: Glance API server
# Description: OpenStack Image Service (code-named Glance) API server
### END INIT INFO

. /etc/rc.d/init.d/functions

suffix=api
prog=openstack-glance-$suffix
exec="/usr/bin/glance-$suffix"
config="/etc/glance/glance-$suffix.conf"
pidfile="/var/run/glance/glance-$suffix.pid"

[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog

lockfile=/var/lock/subsys/$prog

 

start() {
    [ -x $exec ] || exit 5
    [ -f $config ] || exit 6
    echo -n $"Starting $prog: "
    daemon --user glance --pidfile $pidfile "$exec --config-file=$config &>/dev/null & echo \$! > $pidfile"
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

 

stop() {
    echo -n $"Stopping $prog: "
    killproc -p $pidfile $prog
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

 

restart(){

    stop

    start

}

 

reload(){

    restart

}

 

force_reload(){

    restart

}

 

rh_status(){

    status -p $pidfile $prog

}

 

rh_status_q(){

    rh_status >/dev/null 2>&1

}

 

case"$1" in

    start)

        rh_status_q && exit 0

        $1

        ;;

    stop)

        rh_status_q || exit 0

        $1

        ;;

    restart)

        $1

        ;;

    reload)

        rh_status_q || exit 7

        $1

        ;;

    force-reload)

        force_reload

        ;;

    status)

        rh_status

        ;;

    condrestart|try-restart)

        rh_status_q || exit 0

        restart

        ;;

    *)

        echo $"Usage: $0{start|stop|status|restart|condrestart|try-restart|reload|force-reload}"

        exit 2

esac

exit $?

· In /etc/init.d/, create a Glance registry startup script named glance-registry, with the following content:

#!/bin/sh
#
# glance-registry    OpenStack Image Service Registry server
#
# chkconfig: - 20 80
# description: OpenStack Image Service (code-named Glance) Registry server

### BEGIN INIT INFO
# Provides:
# Required-Start: $remote_fs $network $syslog
# Required-Stop: $remote_fs $syslog
# Default-Stop: 0 1 6
# Short-Description: Glance Registry server
# Description: OpenStack Image Service (code-named Glance) Registry server
### END INIT INFO

. /etc/rc.d/init.d/functions

suffix=registry
prog=openstack-glance-$suffix
exec="/usr/bin/glance-$suffix"
config="/etc/glance/glance-$suffix.conf"
pidfile="/var/run/glance/glance-$suffix.pid"

[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog

lockfile=/var/lock/subsys/$prog

 

start() {
    [ -x $exec ] || exit 5
    [ -f $config ] || exit 6
    echo -n $"Starting $prog: "
    daemon --user glance --pidfile $pidfile "$exec --config-file=$config &>/dev/null & echo \$! > $pidfile"
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

 

stop() {
    echo -n $"Stopping $prog: "
    killproc -p $pidfile $prog
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

 

restart(){

    stop

    start

}

 

reload(){

    restart

}

 

force_reload(){

    restart

}

 

rh_status(){

    status -p $pidfile $prog

}

 

rh_status_q(){

    rh_status >/dev/null 2>&1

}

 

case"$1" in

    start)

        rh_status_q && exit 0

        $1

        ;;

    stop)

        rh_status_q || exit 0

        $1

        ;;

    restart)

        $1

        ;;

    reload)

        rh_status_q || exit 7

        $1

        ;;

    force-reload)

        force_reload

        ;;

    status)

        rh_status

        ;;

    condrestart|try-restart)

        rh_status_q || exit 0

        restart

        ;;

    *)

        echo $"Usage: $0{start|stop|status|restart|condrestart|try-restart|reload|force-reload}"

        exit 2

esac

exit $?

· Set up the startup scripts:

chmod 755 /etc/init.d/glance-api
chmod 755 /etc/init.d/glance-registry
mkdir /var/run/glance
mkdir /var/lock/glance
mkdir /var/lib/glance
chown glance:root /var/run/glance
chown glance:root /var/lock/glance
chown glance:glance /var/lib/glance

· Start the Glance API and Glance registry services

/etc/init.d/glance-api start

/etc/init.d/glance-registry start

· Verify that the services started correctly

Use netstat -ltunp to check that TCP ports 9292 and 9191 are listening.
If the services did not start correctly, check the files under /var/log/glance for errors.
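
As an extra smoke test (an assumption, not part of the original; the version negotiation filter sits before authtoken in the pipeline, so the version list should be served without a token):

curl -s http://60.12.206.105:9292/

This should return a JSON list of the API versions Glance supports.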

3.16 Nova Compute Service Configuration

· Create the Nova database

mysql -uroot -popenstack -e 'create database nova'

· Create the Nova configuration directory

mkdir /etc/nova

· Create the user the Nova service runs as

useradd -s /sbin/nologin -m -d /var/log/nova nova

· In /etc/nova, create nova.conf as the Nova configuration file, with the following content:

[DEFAULT]
debug=True
log-dir=/var/log/nova
pybasedir=/var/lib/nova
use_syslog=False
verbose=True
api_paste_config=/etc/nova/api-paste.ini
auth_strategy=keystone
bindir=/usr/bin
glance_host=$my_ip
glance_port=9292
glance_api_servers=$glance_host:$glance_port
image_service=nova.image.glance.GlanceImageService
lock_path=/var/lock/nova
my_ip=60.12.206.105
rabbit_host=localhost
rabbit_password=openstack
rabbit_port=5672
rabbit_userid=guest
root_helper=sudo
sql_connection=mysql://root:openstack@localhost/nova
keystone_ec2_url=http://$my_ip:5000/v2.0/ec2tokens
novncproxy_base_url=http://$my_ip:6080/vnc_auto.html
vnc_enabled=True
vnc_keymap=en-us
vncserver_listen=$my_ip
vncserver_proxyclient_address=$my_ip
dhcpbridge=$bindir/nova-dhcpbridge
dhcpbridge_flagfile=/etc/nova/nova.conf
public_interface=eth0
routing_source_ip=$my_ip
fixed_range=10.0.0.0/24
flat_interface=eth1
flat_network_bridge=br1
floating_range=60.12.206.115
force_dhcp_release=True
target_host=$my_ip
target_port=3260
console_token_ttl=600
iscsi_helper=ietadm
iscsi_ip_address=$my_ip
iscsi_num_targets=100
iscsi_port=3260
volume_group=nova-volumes
ec2_listen=0.0.0.0
ec2_listen_port=8773
metadata_listen=0.0.0.0
metadata_listen_port=8775
osapi_compute_listen=0.0.0.0
osapi_compute_listen_port=8774
osapi_volume_listen=0.0.0.0
osapi_volume_listen_port=8776

· In /etc/nova, create api-paste.ini as the Nova auth configuration file, with the following content:

############
# Metadata #
############
[composite:metadata]
use = egg:Paste#urlmap
/: metaversions
/latest: meta
/1.0: meta
/2007-01-19: meta
/2007-03-01: meta
/2007-08-29: meta
/2007-10-10: meta
/2007-12-15: meta
/2008-02-01: meta
/2008-09-01: meta
/2009-04-04: meta

[pipeline:metaversions]
pipeline = ec2faultwrap logrequest metaverapp

[pipeline:meta]
pipeline = ec2faultwrap logrequest metaapp

[app:metaverapp]
paste.app_factory = nova.api.metadata.handler:Versions.factory

[app:metaapp]
paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory

#######
# EC2 #
#######

[composite:ec2]
use = egg:Paste#urlmap
/services/Cloud: ec2cloud

[composite:ec2cloud]
use = call:nova.api.auth:pipeline_factory
noauth = ec2faultwrap logrequest ec2noauth cloudrequest validator ec2executor
deprecated = ec2faultwrap logrequest authenticate cloudrequest validator ec2executor
keystone = ec2faultwrap logrequest ec2keystoneauth cloudrequest validator ec2executor

[filter:ec2faultwrap]
paste.filter_factory = nova.api.ec2:FaultWrapper.factory

[filter:logrequest]
paste.filter_factory = nova.api.ec2:RequestLogging.factory

[filter:ec2lockout]
paste.filter_factory = nova.api.ec2:Lockout.factory

[filter:totoken]
paste.filter_factory = nova.api.ec2:EC2Token.factory

[filter:ec2keystoneauth]
paste.filter_factory = nova.api.ec2:EC2KeystoneAuth.factory

[filter:ec2noauth]
paste.filter_factory = nova.api.ec2:NoAuth.factory

[filter:authenticate]
paste.filter_factory = nova.api.ec2:Authenticate.factory

[filter:cloudrequest]
controller = nova.api.ec2.cloud.CloudController
paste.filter_factory = nova.api.ec2:Requestify.factory

[filter:authorizer]
paste.filter_factory = nova.api.ec2:Authorizer.factory

[filter:validator]
paste.filter_factory = nova.api.ec2:Validator.factory

[app:ec2executor]
paste.app_factory = nova.api.ec2:Executor.factory

#############
# Openstack #
#############

[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: oscomputeversions
/v1.1: openstack_compute_api_v2
/v2: openstack_compute_api_v2

[composite:osapi_volume]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: osvolumeversions
/v1: openstack_volume_api_v1

[composite:openstack_compute_api_v2]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap noauth ratelimit osapi_compute_app_v2
deprecated = faultwrap auth ratelimit osapi_compute_app_v2
keystone = faultwrap authtoken keystonecontext ratelimit osapi_compute_app_v2
keystone_nolimit = faultwrap authtoken keystonecontext osapi_compute_app_v2

[composite:openstack_volume_api_v1]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap noauth ratelimit osapi_volume_app_v1
deprecated = faultwrap auth ratelimit osapi_volume_app_v1
keystone = faultwrap authtoken keystonecontext ratelimit osapi_volume_app_v1
keystone_nolimit = faultwrap authtoken keystonecontext osapi_volume_app_v1

[filter:faultwrap]
paste.filter_factory = nova.api.openstack:FaultWrapper.factory

[filter:auth]
paste.filter_factory = nova.api.openstack.auth:AuthMiddleware.factory

[filter:noauth]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory

[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory

[app:osapi_compute_app_v2]
paste.app_factory = nova.api.openstack.compute:APIRouter.factory

[pipeline:oscomputeversions]
pipeline = faultwrap oscomputeversionapp

[app:osapi_volume_app_v1]
paste.app_factory = nova.api.openstack.volume:APIRouter.factory

[app:oscomputeversionapp]
paste.app_factory = nova.api.openstack.compute.versions:Versions.factory

[pipeline:osvolumeversions]
pipeline = faultwrap osvolumeversionapp

[app:osvolumeversionapp]
paste.app_factory = nova.api.openstack.volume.versions:Versions.factory

##########
# Shared #
##########

[filter:keystonecontext]
paste.filter_factory = nova.api.auth:NovaKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 60.12.206.105
service_port = 5000
auth_host = 60.12.206.105
auth_port = 35357
auth_protocol = http
auth_uri = http://60.12.206.105:5000/
admin_tenant_name = tenant
admin_user = nova
admin_password = service

· In /etc/nova, create policy.json as the Nova policy file, with the following content:

{
    "admin_or_owner": [["role:admin"], ["project_id:%(project_id)s"]],
    "default": [["rule:admin_or_owner"]],

    "compute:create": [],
    "compute:create:attach_network": [],
    "compute:create:attach_volume": [],
    "compute:get_all": [],

    "admin_api": [["role:admin"]],
    "compute_extension:accounts": [["rule:admin_api"]],
    "compute_extension:admin_actions": [["rule:admin_api"]],
    "compute_extension:admin_actions:pause": [["rule:admin_or_owner"]],
    "compute_extension:admin_actions:unpause": [["rule:admin_or_owner"]],
    "compute_extension:admin_actions:suspend": [["rule:admin_or_owner"]],
    "compute_extension:admin_actions:resume": [["rule:admin_or_owner"]],
    "compute_extension:admin_actions:lock": [["rule:admin_api"]],
    "compute_extension:admin_actions:unlock": [["rule:admin_api"]],
    "compute_extension:admin_actions:resetNetwork": [["rule:admin_api"]],
    "compute_extension:admin_actions:injectNetworkInfo": [["rule:admin_api"]],
    "compute_extension:admin_actions:createBackup": [["rule:admin_or_owner"]],
    "compute_extension:admin_actions:migrateLive": [["rule:admin_api"]],
    "compute_extension:admin_actions:migrate": [["rule:admin_api"]],
    "compute_extension:aggregates": [["rule:admin_api"]],
    "compute_extension:certificates": [],
    "compute_extension:cloudpipe": [["rule:admin_api"]],
    "compute_extension:console_output": [],
    "compute_extension:consoles": [],
    "compute_extension:createserverext": [],
    "compute_extension:deferred_delete": [],
    "compute_extension:disk_config": [],
    "compute_extension:extended_server_attributes": [["rule:admin_api"]],
    "compute_extension:extended_status": [],
    "compute_extension:flavorextradata": [],
    "compute_extension:flavorextraspecs": [],
    "compute_extension:flavormanage": [["rule:admin_api"]],
    "compute_extension:floating_ip_dns": [],
    "compute_extension:floating_ip_pools": [],
    "compute_extension:floating_ips": [],
    "compute_extension:hosts": [["rule:admin_api"]],
    "compute_extension:keypairs": [],
    "compute_extension:multinic": [],
    "compute_extension:networks": [["rule:admin_api"]],
    "compute_extension:quotas": [],
    "compute_extension:rescue": [],
    "compute_extension:security_groups": [],
    "compute_extension:server_action_list": [["rule:admin_api"]],
    "compute_extension:server_diagnostics": [["rule:admin_api"]],
    "compute_extension:simple_tenant_usage:show": [["rule:admin_or_owner"]],
    "compute_extension:simple_tenant_usage:list": [["rule:admin_api"]],
    "compute_extension:users": [["rule:admin_api"]],
    "compute_extension:virtual_interfaces": [],
    "compute_extension:virtual_storage_arrays": [],
    "compute_extension:volumes": [],
    "compute_extension:volumetypes": [],

    "volume:create": [],
    "volume:get_all": [],
    "volume:get_volume_metadata": [],
    "volume:get_snapshot": [],
    "volume:get_all_snapshots": [],

    "network:get_all_networks": [],
    "network:get_network": [],
    "network:delete_network": [],
    "network:disassociate_network": [],
    "network:get_vifs_by_instance": [],
    "network:allocate_for_instance": [],
    "network:deallocate_for_instance": [],
    "network:validate_networks": [],
    "network:get_instance_uuids_by_ip_filter": [],

    "network:get_floating_ip": [],
    "network:get_floating_ip_pools": [],
    "network:get_floating_ip_by_address": [],
    "network:get_floating_ips_by_project": [],
    "network:get_floating_ips_by_fixed_address": [],
    "network:allocate_floating_ip": [],
    "network:deallocate_floating_ip": [],
    "network:associate_floating_ip": [],
    "network:disassociate_floating_ip": [],

    "network:get_fixed_ip": [],
    "network:add_fixed_ip_to_instance": [],
    "network:remove_fixed_ip_from_instance": [],
    "network:add_network_to_project": [],
    "network:get_instance_nw_info": [],

    "network:get_dns_domains": [],
    "network:add_dns_entry": [],
    "network:modify_dns_entry": [],
    "network:delete_dns_entry": [],
    "network:get_dns_entries_by_address": [],
    "network:get_dns_entries_by_name": [],
    "network:create_private_dns_domain": [],
    "network:create_public_dns_domain": [],
    "network:delete_dns_domain": []
}

· In /etc/init.d/, create a Nova API startup script named nova-api, with the following content:

#!/bin/sh
#
# openstack-nova-api    OpenStack Nova API Server
#
# chkconfig: - 20 80
# description: At the heart of the cloud framework is an API Server. \
#              This API Server makes command and control of the      \
#              hypervisor, storage, and networking programmatically  \
#              available to users in realization of the definition   \
#              of cloud computing.

### BEGIN INIT INFO
# Provides:
# Required-Start: $remote_fs $network $syslog
# Required-Stop: $remote_fs $syslog
# Default-Stop: 0 1 6
# Short-Description: OpenStack Nova API Server
# Description: At the heart of the cloud framework is an API Server.
#              This API Server makes command and control of the
#              hypervisor, storage, and networking programmatically
#              available to users in realization of the definition
#              of cloud computing.
### END INIT INFO

. /etc/rc.d/init.d/functions

suffix=api
prog=openstack-nova-$suffix
exec="/usr/bin/nova-$suffix"
config="/etc/nova/nova.conf"
pidfile="/var/run/nova/nova-$suffix.pid"
logfile="/var/log/nova/$suffix.log"

[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog

lockfile=/var/lock/nova/$prog

 

start() {
    [ -x $exec ] || exit 5
    [ -f $config ] || exit 6
    echo -n $"Starting $prog: "
    daemon --user nova --pidfile $pidfile "$exec --config-file=$config --logfile=$logfile &>/dev/null & echo \$! > $pidfile"
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

 

stop() {
    echo -n $"Stopping $prog: "
    killproc -p $pidfile $prog
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

 

restart(){

    stop

    start

}

 

reload(){

    restart

}

 

force_reload(){

    restart

}

 

rh_status(){

    status -p $pidfile $prog

}

 

rh_status_q(){

    rh_status >/dev/null 2>&1

}

 

case"$1" in

    start)

        rh_status_q && exit 0

        $1

        ;;

    stop)

        rh_status_q || exit 0

        $1

        ;;

    restart)

        $1

        ;;

    reload)

        rh_status_q || exit 7

        $1

        ;;

    force-reload)

        force_reload

        ;;

    status)

        rh_status

        ;;

    condrestart|try-restart)

        rh_status_q || exit 0

        restart

        ;;

    *)

        echo $"Usage: $0{start|stop|status|restart|condrestart|try-restart|reload|force-reload}"

        exit 2

esac

exit $?

· In /etc/init.d/, create a Nova network startup script named nova-network, with the following content:

#!/bin/sh
#
# openstack-nova-network    OpenStack Nova Network Controller
#
# chkconfig: - 20 80
# description: The Network Controller manages the networking resources \
#              on host machines. The API server dispatches commands    \
#              through the message queue, which are subsequently       \
#              processed by Network Controllers.                       \
#              Specific operations include:                            \
#              * Allocate Fixed IP Addresses                           \
#              * Configuring VLANs for projects                        \
#              * Configuring networks for compute nodes                \

### BEGIN INIT INFO
# Provides:
# Required-Start: $remote_fs $network $syslog
# Required-Stop: $remote_fs $syslog
# Default-Stop: 0 1 6
# Short-Description: OpenStack Nova Network Controller
# Description: The Network Controller manages the networking resources
#              on host machines. The API server dispatches commands
#              through the message queue, which are subsequently
#              processed by Network Controllers.
#              Specific operations include:
#              * Allocate Fixed IP Addresses
#              * Configuring VLANs for projects
#              * Configuring networks for compute nodes
### END INIT INFO

. /etc/rc.d/init.d/functions

suffix=network
prog=openstack-nova-$suffix
exec="/usr/bin/nova-$suffix"
config="/etc/nova/nova.conf"
pidfile="/var/run/nova/nova-$suffix.pid"
logfile="/var/log/nova/$suffix.log"

[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog

lockfile=/var/lock/nova/$prog

 

start() {
    [ -x $exec ] || exit 5
    [ -f $config ] || exit 6
    echo -n $"Starting $prog: "
    daemon --user nova --pidfile $pidfile "$exec --config-file=$config --logfile=$logfile &>/dev/null & echo \$! > $pidfile"
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

 

stop() {
    echo -n $"Stopping $prog: "
    killproc -p $pidfile $prog
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

 

restart(){

    stop

    start

}

 

reload(){

    restart

}

 

force_reload(){

    restart

}

 

rh_status(){

    status -p $pidfile $prog

}

 

rh_status_q(){

    rh_status >/dev/null 2>&1

}

 

case"$1" in

    start)

        rh_status_q && exit 0

        $1

        ;;

    stop)

        rh_status_q || exit 0

        $1

        ;;

    restart)

        $1

        ;;

    reload)

        rh_status_q || exit 7

        $1

        ;;

    force-reload)

        force_reload

        ;;

    status)

        rh_status

        ;;

    condrestart|try-restart)

        rh_status_q || exit 0

        restart

        ;;

    *)

        echo $"Usage: $0{start|stop|status|restart|condrestart|try-restart|reload|force-reload}"

        exit 2

esac

exit $?

· In /etc/init.d/, create a Nova objectstore startup script named nova-objectstore, with the following content:

#!/bin/sh
#
# openstack-nova-objectstore    OpenStack Nova Object Storage
#
# chkconfig: - 20 80
# description: Implementation of an S3-like storage server based on local files.

### BEGIN INIT INFO
# Provides:
# Required-Start: $remote_fs $network $syslog
# Required-Stop: $remote_fs $syslog
# Default-Stop: 0 1 6
# Short-Description: OpenStack Nova Object Storage
# Description: Implementation of an S3-like storage server based on local files.
### END INIT INFO

. /etc/rc.d/init.d/functions

suffix=objectstore
prog=openstack-nova-$suffix
exec="/usr/bin/nova-$suffix"
config="/etc/nova/nova.conf"
pidfile="/var/run/nova/nova-$suffix.pid"
logfile="/var/log/nova/$suffix.log"

[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog

lockfile=/var/lock/nova/$prog

 

start() {
    [ -x $exec ] || exit 5
    [ -f $config ] || exit 6
    echo -n $"Starting $prog: "
    daemon --user nova --pidfile $pidfile "$exec --config-file=$config --logfile=$logfile &>/dev/null & echo \$! > $pidfile"
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

 

stop() {
    echo -n $"Stopping $prog: "
    killproc -p $pidfile $prog
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

 

restart(){

    stop

    start

}

 

reload(){

    restart

}

 

force_reload(){

    restart

}

 

rh_status(){

    status -p $pidfile $prog

}

 

rh_status_q(){

    rh_status >/dev/null 2>&1

}

 

case"$1" in

    start)

        rh_status_q && exit 0

        $1

        ;;

    stop)

        rh_status_q || exit 0

        $1

        ;;

    restart)

        $1

        ;;

    reload)

        rh_status_q || exit 7

        $1

        ;;

    force-reload)

        force_reload

        ;;

    status)

        rh_status

        ;;

    condrestart|try-restart)

        rh_status_q || exit 0

        restart

        ;;

    *)

        echo $"Usage: $0{start|stop|status|restart|condrestart|try-restart|reload|force-reload}"

        exit 2

esac

exit $?

· In /etc/init.d/, create a Nova scheduler startup script named nova-scheduler, with the following content:

#!/bin/sh
#
# openstack-nova-scheduler    OpenStack Nova Scheduler
#
# chkconfig: - 20 80
# description: Determines which physical hardware to allocate to a virtual resource

### BEGIN INIT INFO
# Provides:
# Required-Start: $remote_fs $network $syslog
# Required-Stop: $remote_fs $syslog
# Default-Stop: 0 1 6
# Short-Description: OpenStack Nova Scheduler
# Description: Determines which physical hardware to allocate to a virtual resource
### END INIT INFO

. /etc/rc.d/init.d/functions

suffix=scheduler
prog=openstack-nova-$suffix
exec="/usr/bin/nova-$suffix"
config="/etc/nova/nova.conf"
pidfile="/var/run/nova/nova-$suffix.pid"
logfile="/var/log/nova/$suffix.log"

[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog

lockfile=/var/lock/nova/$prog

 

start() {
    [ -x $exec ] || exit 5
    [ -f $config ] || exit 6
    echo -n $"Starting $prog: "
    daemon --user nova --pidfile $pidfile "$exec --config-file=$config --logfile=$logfile &>/dev/null & echo \$! > $pidfile"
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

 

stop() {
    echo -n $"Stopping $prog: "
    killproc -p $pidfile $prog
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

 

restart(){

    stop

    start

}

 

reload(){

    restart

}

 

force_reload(){

    restart

}

 

rh_status(){

    status -p $pidfile $prog

}

 

rh_status_q(){

    rh_status >/dev/null 2>&1

}

 

case"$1" in

    start)

        rh_status_q && exit 0

        $1

        ;;

    stop)

        rh_status_q || exit 0

        $1

        ;;

    restart)

        $1

        ;;

    reload)

        rh_status_q || exit 7

        $1

        ;;

    force-reload)

        force_reload

        ;;

    status)

        rh_status

        ;;

    condrestart|try-restart)

        rh_status_q || exit 0

        restart

        ;;

    *)

        echo $"Usage: $0{start|stop|status|restart|condrestart|try-restart|reload|force-reload}"

        exit 2

esac

exit $?

· In /etc/init.d/, create a Nova volume startup script named nova-volume, with the following content:

#!/bin/sh
#
# openstack-nova-volume    OpenStack Nova Volume Worker
#
# chkconfig: - 20 80
# description: Volume Workers interact with iSCSI storage to manage     \
#              LVM-based instance volumes. Specific functions include:  \
#              * Create Volumes                                         \
#              * Delete Volumes                                         \
#              * Establish Compute volumes

### BEGIN INIT INFO
# Provides:
# Required-Start: $remote_fs $network $syslog
# Required-Stop: $remote_fs $syslog
# Default-Stop: 0 1 6
# Short-Description: OpenStack Nova Volume Worker
# Description: Volume Workers interact with iSCSI storage to manage
#              LVM-based instance volumes. Specific functions include:
#              * Create Volumes
#              * Delete Volumes
#              * Establish Compute volumes
### END INIT INFO

. /etc/rc.d/init.d/functions

suffix=volume
prog=openstack-nova-$suffix
exec="/usr/bin/nova-$suffix"
config="/etc/nova/nova.conf"
pidfile="/var/run/nova/nova-$suffix.pid"
logfile="/var/log/nova/$suffix.log"

[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog

lockfile=/var/lock/nova/$prog

 

start() {
    [ -x $exec ] || exit 5
    [ -f $config ] || exit 6
    echo -n $"Starting $prog: "
    daemon --user nova --pidfile $pidfile "$exec --config-file=$config --logfile=$logfile &>/dev/null & echo \$! > $pidfile"
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

 

stop() {
    echo -n $"Stopping $prog: "
    killproc -p $pidfile $prog
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

 

restart(){

    stop

    start

}

 

reload(){

    restart

}

 

force_reload(){

    restart

}

 

rh_status(){

    status -p $pidfile $prog

}

 

rh_status_q(){

    rh_status >/dev/null 2>&1

}

 

case"$1" in

    start)

        rh_status_q && exit 0

        $1

        ;;

    stop)

        rh_status_q || exit 0

        $1

        ;;

    restart)

        $1

        ;;

    reload)

        rh_status_q || exit 7

        $1

        ;;

    force-reload)

        force_reload

        ;;

    status)

        rh_status

        ;;

    condrestart|try-restart)

        rh_status_q || exit 0

        restart

        ;;

    *)

        echo $"Usage: $0{start|stop|status|restart|condrestart|try-restart|reload|force-reload}"

        exit 2

esac

exit $?

· Set up the startup scripts:

chmod 755 /etc/init.d/nova-api
chmod 755 /etc/init.d/nova-network
chmod 755 /etc/init.d/nova-objectstore
chmod 755 /etc/init.d/nova-scheduler
chmod 755 /etc/init.d/nova-volume
mkdir /var/run/nova
mkdir -p /var/lib/nova/instances
mkdir /var/lock/nova
chown nova:root /var/run/nova
chown -R nova:nova /var/lib/nova
chown nova:root /var/lock/nova

·        配置sudo
在/etc/sudoers.d/建立nova文件,内容如下:

Defaults:nova!requiretty

Cmnd_AliasNOVACMDS = /bin/aoe-stat,                           \

                      /bin/chmod,                               \

                      /bin/chmod/var/lib/nova/tmp/*/root/.ssh, \

                      /bin/chown,                               \

                      /bin/chown /var/lib/nova/tmp/*/root/.ssh,\

                      /bin/dd,                                  \

                      /bin/kill,                                \

                      /bin/mkdir,                               \

                      /bin/mount,                               \

                      /bin/umount,                              \

                      /sbin/aoe-discover,                       \

                      /sbin/ifconfig,                           \

                      /sbin/ip,                                 \

                     /sbin/ip6tables-restore,                 \

                     /sbin/ip6tables-save,                    \

                      /sbin/iptables,                           \

                      /sbin/iptables-restore,                   \

                      /sbin/iptables-save,                      \

                      /sbin/iscsiadm,                           \

                      /sbin/kpartx,                             \

                      /sbin/losetup,                            \

                      /sbin/lvcreate,                           \

                      /sbin/lvdisplay,                          \

                      /sbin/lvremove,                           \

                      /sbin/pvcreate,                           \

                      /sbin/route,                              \

                      /sbin/tune2fs,                            \

                      /sbin/vconfig,                            \

                      /sbin/vgcreate,                           \

                      /sbin/vgs,                                \

                      /usr/bin/fusermount,                      \

                      /usr/bin/guestmount,                      \

                      /usr/bin/socat,                           \

                      /bin/cat,                           \

                      /usr/bin/tee,                             \

                      /usr/bin/qemu-nbd,                        \

                      /usr/bin/virsh,                           \

                      /usr/sbin/brctl,                          \

                      /usr/sbin/dnsmasq,                        \

                      /usr/sbin/ietadm,                         \

                      /usr/sbin/radvd,                          \

                      /usr/sbin/tgtadm,                         \

                      /usr/sbin/vblade-persist

 

nova ALL = (root) NOPASSWD: SETENV: NOVACMDS

chmod 0440 /etc/sudoers.d/nova
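The sudoers file can be syntax-checked before relying on it; visudo validates /etc/sudoers together with the files it includes:

visudo -c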

·        Configure the polkit policy

Create 50-nova.pkla under /etc/polkit-1/localauthority/50-local.d/ with the following content:

[Allow nova libvirt management permissions]

Identity=unix-user:nova

Action=org.libvirt.unix.manage

ResultAny=yes

ResultInactive=yes

ResultActive=yes

·        Create the NOVA service database schema

nova-manage db sync
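If the sync succeeded, the nova database created earlier should now contain tables; a quick check, using the database credentials from this guide:

mysql -uroot -popenstack -e 'use nova; show tables;' | head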

·        Install iscsitarget

wget http://sourceforge.net/projects/iscsitarget/files/iscsitarget/1.4.20.2/iscsitarget-1.4.20.2.tar.gz/download -P /opt
cd /opt
tar xf iscsitarget-1.4.20.2.tar.gz
cd iscsitarget-1.4.20.2
make
make install
/etc/init.d/iscsi-target start
Use netstat -ltnp to verify that tcp port 3260 is listening.

·        Create the nova-volumes volume group

fdisk /dev/sdb

n

p

1

(press Enter twice)

t

83

w

mkfs.ext4 /dev/sdb1

vgcreate nova-volumes /dev/sdb1
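For unattended installs, the same volume group can also be created without the interactive fdisk session; a minimal sketch, assuming parted and LVM2 are available:

parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary 1 100%
pvcreate /dev/sdb1
vgcreate nova-volumes /dev/sdb1
vgs nova-volumes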

·        Start the NOVA services

/etc/init.d/nova-api start

/etc/init.d/nova-network start

/etc/init.d/nova-objectstore start

/etc/init.d/nova-scheduler start

/etc/init.d/nova-volume start

·        Verify that the services started correctly

Use netstat -ltunp to verify that tcp ports 3333, 8773, 8774, 8775 and 8776 are listening.
If a service did not start correctly, check the log files under /var/log/nova.

3.17 SWIFT Object Storage Service Configuration

·        Create the SWIFT configuration directory

mkdir /etc/swift

·        Create the SWIFT service user

useradd -s /sbin/nologin -m -d /var/log/swift swift

·        Format and mount the disks

yum -y install xfsprogs

mkfs.xfs -f -i size=1024 /dev/sdc

mkfs.xfs -f -i size=1024 /dev/sdd

mkdir -p /swift/drivers/sd{c,d}

mount -t xfs -o noatime,nodiratime,nobarrier,logbufs=8 /dev/sdc /swift/drivers/sdc

mount -t xfs -o noatime,nodiratime,nobarrier,logbufs=8 /dev/sdd /swift/drivers/sdd

echo -e '/dev/sdc\t/swift/drivers/sdc\txfs\tnoatime,nodiratime,nobarrier,logbufs=8\t0 0' >> /etc/fstab

echo -e '/dev/sdd\t/swift/drivers/sdd\txfs\tnoatime,nodiratime,nobarrier,logbufs=8\t0 0' >> /etc/fstab

·        Configure swift replication (rsync)

mkdir -p /swift/node/sd{c,d}

ln -sv /swift/drivers/sdc /swift/node/sdc

ln -sv /swift/drivers/sdd /swift/node/sdd

Create rsyncd.conf under /etc with the following content:

uid = swift

gid = swift

log file = /var/log/rsyncd.log

pid file = /var/run/rsyncd.pid

address = 192.168.1.2

 

[account5000]

max connections = 50

path = /swift/node/sdc

read only = false

lock file = /var/lock/account5000.lock

 

[account5001]

max connections = 50

path = /swift/node/sdd

read only = false

lock file = /var/lock/account5001.lock

 

[container4000]

max connections = 50

path = /swift/node/sdc

read only = false

lock file = /var/lock/container4000.lock

 

[container4001]

max connections = 50

path = /swift/node/sdd

read only = false

lock file = /var/lock/container4001.lock

 

[object3000]

max connections = 50

path = /swift/node/sdc

read only = false

lock file = /var/lock/object3000.lock

 

[object3001]

max connections = 50

path = /swift/node/sdd

read only = false

lock file = /var/lock/object3001.lock

yum -y install xinetd

sed -i '/disable/s#yes#no#g' /etc/xinetd.d/rsync

/etc/init.d/xinetd start
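The rsync modules can now be listed from the control node itself; all six modules defined above should appear:

rsync rsync://192.168.1.2/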

mkdir -p /etc/swift/{object,container,account}-server

Create swift.conf under /etc/swift with the following content:

[swift-hash]

swift_hash_path_suffix = changeme
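The changeme suffix salts every ring hash, so it should be replaced with a site-specific random value before any ring is built (and never changed afterwards). One way to generate such a value:

od -t x8 -N 8 -A n < /dev/urandom | tr -d ' '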

Create proxy-server.conf under /etc/swift with the following content:

[DEFAULT]

bind_port = 8080

user = swift

swift_dir = /etc/swift

workers = 8

log_name = swift

log_facility = LOG_LOCAL1

log_level = DEBUG

 

[pipeline:main]

pipeline = healthcheck cache swift3 s3token authtoken keystone proxy-server

 

[app:proxy-server]

use = egg:swift#proxy

allow_account_management = true

account_autocreate = true

 

[filter:keystone]

paste.filter_factory = keystone.middleware.swift_auth:filter_factory

operator_roles = Member,admin,SwiftOperator

 

# NOTE(chmou): s3token middleware is not updated yet to use only

# username and password.

[filter:s3token]

paste.filter_factory = keystone.middleware.s3_token:filter_factory

service_host = 60.12.206.105

service_port = 5000

auth_host = 60.12.206.105

auth_port = 35357

auth_protocol = http

auth_token = ADMIN

admin_token = ADMIN

 

[filter:authtoken]

paste.filter_factory = keystone.middleware.auth_token:filter_factory

auth_host = 60.12.206.105

auth_port = 35357

auth_protocol = http

auth_uri = http://60.12.206.105:5000/

admin_tenant_name = tenant

admin_user = swift

admin_password = service

 

[filter:swift3]

use = egg:swift#swift3

 

[filter:healthcheck]

use = egg:swift#healthcheck

 

[filter:cache]

use = egg:swift#memcache

Create sdc.conf and sdd.conf under /etc/swift/account-server with the following content:
--------------------sdc.conf--------------------
[DEFAULT]
devices = /swift/node/sdc
mount_check = false
bind_port = 5000
user = swift
log_facility = LOG_LOCAL0
swift_dir = /etc/swift

[pipeline:main]
pipeline = account-server

[app:account-server]
use = egg:swift#account

[account-replicator]
vm_test_mode = yes

[account-auditor]

[account-reaper]

--------------------sdd.conf--------------------
[DEFAULT]
devices = /swift/node/sdd
mount_check = false
bind_port = 5001
user = swift
log_facility = LOG_LOCAL0
swift_dir = /etc/swift

[pipeline:main]
pipeline = account-server

[app:account-server]
use = egg:swift#account

[account-replicator]
vm_test_mode = yes

[account-auditor]

[account-reaper]

Create sdc.conf and sdd.conf under /etc/swift/container-server with the following content:

--------------------sdc.conf--------------------

[DEFAULT]

devices = /swift/node/sdc

mount_check = false

bind_port = 4000

user = swift

log_facility = LOG_LOCAL0

swift_dir = /etc/swift

 

[pipeline:main]

pipeline = container-server

 

[app:container-server]

use = egg:swift#container

 

[container-replicator]

vm_test_mode = yes

 

[container-updater]

 

[container-auditor]

 

[container-sync]

 

--------------------sdd.conf--------------------

[DEFAULT]

devices = /swift/node/sdd

mount_check = false

bind_port = 4001

user = swift

log_facility = LOG_LOCAL0

swift_dir = /etc/swift

 

[pipeline:main]

pipeline = container-server

 

[app:container-server]

use = egg:swift#container

 

[container-replicator]

vm_test_mode = yes

 

[container-updater]

 

[container-auditor]

 

[container-sync]

Create sdc.conf and sdd.conf under /etc/swift/object-server with the following content:

--------------------sdc.conf--------------------

[DEFAULT]

devices = /swift/node/sdc

mount_check = false

bind_port = 3000

user = swift

log_facility = LOG_LOCAL0

swift_dir = /etc/swift

 

[pipeline:main]

pipeline = object-server

 

[app:object-server]

use = egg:swift#object

 

[object-replicator]

vm_test_mode = yes

 

[object-updater]

 

[object-auditor]

 

[object-expirer]

 

--------------------sdd.conf--------------------

[DEFAULT]

devices = /swift/node/sdd

mount_check = false

bind_port = 3001

user = swift

log_facility = LOG_LOCAL0

swift_dir = /etc/swift

 

[pipeline:main]

pipeline = object-server

 

[app:object-server]

use = egg:swift#object

 

[object-replicator]

vm_test_mode = yes

 

[object-updater]

 

[object-auditor]

 

[object-expirer]

·        Build the rings

cd /etc/swift

swift-ring-builder account.builder create 8 2 1

swift-ring-builder account.builder add z1-60.12.206.105:5000/sdc 1

swift-ring-builder account.builder add z2-60.12.206.105:5001/sdd 1

swift-ring-builder account.builder rebalance

 

swift-ring-builder container.builder create 8 2 1

swift-ring-builder container.builder add z1-60.12.206.105:4000/sdc 1

swift-ring-builder container.builder add z2-60.12.206.105:4001/sdd 1

swift-ring-builder container.builder rebalance

 

swift-ring-builder object.builder create 8 2 1

swift-ring-builder object.builder add z1-60.12.206.105:3000/sdc 1

swift-ring-builder object.builder add z2-60.12.206.105:3001/sdd 1

swift-ring-builder object.builder rebalance
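Running swift-ring-builder with only the builder file prints the ring's devices and partition balance, which is a quick sanity check after each rebalance:

swift-ring-builder account.builder
swift-ring-builder container.builder
swift-ring-builder object.builder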

·        Set permissions

chown -R swift:swift /swift

·        Create init scripts for each service

Create a functions file under /etc/swift with the following content:

. /etc/rc.d/init.d/functions

 

swift_action(){

  retval=0

  server="$1"

  call="swift_$2"

 

  if [[ -f "/etc/swift/$server-server.conf" ]]; then

    $call "$server" \

         "/etc/swift/$server-server.conf" \

         "/var/run/swift/$server-server.pid"

    [ $? -ne 0 ] && retval=1

  elif [[ -d "/etc/swift/$server-server/" ]]; then

    declare -i count=0

    for name in $( ls "/etc/swift/$server-server/" ); do

      $call "$server" \

           "/etc/swift/$server-server/$name" \

            "/var/run/swift/$server-server/$count.pid"

      [ $? -ne 0 ] && retval=1

      count=$count+1

    done

  fi

  return $retval

}

 

swift_start(){

  name="$1"

  long_name="$name-server"

  conf_file="$2"

  pid_file="$3"

 

  ulimit -n ${SWIFT_MAX_FILES-32768}

  echo -n "Starting swift-$long_name:"

  daemon --pidfile $pid_file \

    "/usr/bin/swift-$long_name $conf_file&>/var/log/swift-startup.log & echo \$! > $pid_file"

  retval=$?

  echo

  return $retval

}

 

swift_stop(){

  name="$1"

  long_name="$name-server"

  conf_name="$2"

  pid_file="$3"

 

  echo -n "Stopping swift-$long_name:"

  killproc -p $pid_file -d${SWIFT_STOP_DELAY-15} $long_name

  retval=$?

  echo

  return $retval

}

 

swift_status(){

  name="$1"

  long_name="$name-server"

  conf_name="$2"

  pid_file="$3"

 

  status -p $pid_file $long_name

}

Create a swift-proxy file under /etc/init.d with the following content:

#!/bin/sh

 

### BEGIN INIT INFO

# Provides:          openstack-swift-proxy

# Required-Start:    $remote_fs

# Required-Stop:     $remote_fs

# Default-Stop:      0 1 6

# Short-Description: Swift proxy server

# Description:       Proxy server for swift.

### END INIT INFO

 

# openstack-swift-proxy: swift proxy server

#

# chkconfig: - 20 80

# description: Proxy server for swift.

 

. /etc/rc.d/init.d/functions

. /etc/swift/functions

 

name="proxy"

 

[ -e"/etc/sysconfig/openstack-swift-$name" ] && ."/etc/sysconfig/openstack-swift-$name"

 

lockfile="/var/lock/swift/openstack-swift-proxy"

 

start(){

    swift_action "$name" start

    retval=$?

    [ $retval -eq 0 ] && touch $lockfile

    return $retval

}

 

stop() {

    swift_action "$name" stop

    retval=$?

    [ $retval -eq 0 ] && rm -f $lockfile

    return $retval

}

 

restart(){

    stop

    start

}

 

rh_status(){

    swift_action "$name" status

}

 

rh_status_q(){

    rh_status &> /dev/null

}

 

case "$1" in

    start)

        rh_status_q && exit 0

        $1

        ;;

    stop)

        rh_status_q || exit 0

        $1

        ;;

    restart)

        $1

        ;;

    reload)

        ;;

    status)

        rh_status

        ;;

    condrestart|try-restart)

        rh_status_q || exit 0

        restart

        ;;

    *)

        echo $"Usage: $0{start|stop|status|restart|condrestart|try-restart}"

        exit 2

esac

exit $?

Create a swift-account file under /etc/init.d with the following content:

#!/bin/sh

 

### BEGIN INIT INFO

# Provides:          openstack-swift-account

# Required-Start:    $remote_fs

# Required-Stop:     $remote_fs

# Default-Stop:      0 1 6

# Short-Description: Swift account server

# Description:       Account server for swift.

### END INIT INFO

 

# openstack-swift-account: swift account server

#

# chkconfig: - 20 80

# description: Account server for swift.

 

. /etc/rc.d/init.d/functions

. /etc/swift/functions

 

name="account"

 

[ -e"/etc/sysconfig/openstack-swift-$name" ] && ."/etc/sysconfig/openstack-swift-$name"

 

lockfile="/var/lock/swift/openstack-swift-account"

 

start(){

    swift_action "$name" start

    retval=$?

    [ $retval -eq 0 ] && touch $lockfile

    return $retval

}

 

stop() {

    swift_action "$name" stop

    retval=$?

    [ $retval -eq 0 ] && rm -f $lockfile

    return $retval

}

 

restart(){

    stop

    start

}

 

rh_status(){

    swift_action "$name" status

}

 

rh_status_q(){

    rh_status &> /dev/null

}

 

case "$1" in

    start)

        rh_status_q && exit 0

        $1

        ;;

    stop)

        rh_status_q || exit 0

        $1

        ;;

    restart)

        $1

        ;;

    reload)

        ;;

    status)

        rh_status

        ;;

    condrestart|try-restart)

        rh_status_q || exit 0

        restart

        ;;

    *)

        echo $"Usage: $0{start|stop|status|restart|condrestart|try-restart}"

        exit 2

esac

exit $?

·        Create a swift-container file under /etc/init.d with the following content:

#!/bin/sh

 

### BEGIN INIT INFO

# Provides:          openstack-swift-container

# Required-Start:    $remote_fs

# Required-Stop:     $remote_fs

# Default-Stop:      0 1 6

# Short-Description: Swift container server

# Description:       Container server for swift.

### END INIT INFO

 

# openstack-swift-container: swift container server

#

# chkconfig: - 20 80

# description: Container server for swift.

 

. /etc/rc.d/init.d/functions

. /etc/swift/functions

 

name="container"

 

[ -e"/etc/sysconfig/openstack-swift-$name" ] && ."/etc/sysconfig/openstack-swift-$name"

 

lockfile="/var/lock/swift/openstack-swift-container"

 

start(){

    swift_action "$name" start

    retval=$?

    [ $retval -eq 0 ] && touch $lockfile

    return $retval

}

 

stop() {

    swift_action "$name" stop

    retval=$?

    [ $retval -eq 0 ] && rm -f $lockfile

    return $retval

}

 

restart(){

    stop

    start

}

 

rh_status(){

    swift_action "$name" status

}

 

rh_status_q(){

    rh_status &> /dev/null

}

 

case "$1" in

    start)

        rh_status_q && exit 0

        $1

        ;;

    stop)

        rh_status_q || exit 0

        $1

        ;;

    restart)

        $1

        ;;

    reload)

        ;;

    status)

        rh_status

        ;;

    condrestart|try-restart)

        rh_status_q || exit 0

        restart

        ;;

    *)

        echo $"Usage: $0{start|stop|status|restart|condrestart|try-restart}"

        exit 2

esac

exit $?

·        Create a swift-object file under /etc/init.d with the following content:

#!/bin/sh

 

### BEGIN INIT INFO

# Provides:          openstack-swift-object

# Required-Start:    $remote_fs

# Required-Stop:     $remote_fs

# Default-Stop:      0 1 6

# Short-Description: Swift object server

# Description:       Object server for swift.

### END INIT INFO

 

# openstack-swift-object: swift object server

#

# chkconfig: - 20 80

# description: Object server for swift.

 

. /etc/rc.d/init.d/functions

. /etc/swift/functions

 

name="object"

 

[ -e"/etc/sysconfig/openstack-swift-$name" ] && ."/etc/sysconfig/openstack-swift-$name"

 

lockfile="/var/lock/swift/openstack-swift-object"

 

start(){

    swift_action "$name" start

    retval=$?

    [ $retval -eq 0 ] && touch $lockfile

    return $retval

}

 

stop() {

    swift_action "$name" stop

    retval=$?

    [ $retval -eq 0 ] && rm -f $lockfile

    return $retval

}

 

restart(){

    stop

    start

}

 

rh_status(){

    swift_action "$name" status

}

 

rh_status_q(){

    rh_status &> /dev/null

}

 

case "$1" in

    start)

        rh_status_q && exit 0

       $1

        ;;

    stop)

        rh_status_q || exit 0

        $1

        ;;

    restart)

        $1

        ;;

    reload)

        ;;

    status)

        rh_status

        ;;

    condrestart|try-restart)

        rh_status_q || exit 0

        restart

        ;;

    *)

        echo $"Usage: $0{start|stop|status|restart|condrestart|try-restart}"

        exit 2

esac

exit $?

·        Configure the init scripts:

chmod 755 /etc/init.d/swift-proxy

chmod 755 /etc/init.d/swift-account

chmod 755 /etc/init.d/swift-container

chmod 755 /etc/init.d/swift-object

mkdir /var/run/swift

mkdir /var/lock/swift

chown swift:root /var/run/swift

chown swift:root /var/lock/swift

·        Start the services

/etc/init.d/swift-proxy start

/etc/init.d/swift-account start

/etc/init.d/swift-container start

/etc/init.d/swift-object start
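A quick smoke test of the proxy is the healthcheck middleware configured above, which should answer with OK:

curl http://60.12.206.105:8080/healthcheck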

3.18 HORIZON Dashboard Configuration

·        Create the HORIZON service database

mysql -uroot -popenstack -e 'create database dashboard'

·        Configure apache

Edit /etc/httpd/conf.d/django.conf to contain the following:

WSGISocketPrefix /tmp/horizon

<VirtualHost *:80>

    WSGIScriptAlias / /opt/horizon-2012.1/openstack_dashboard/wsgi/django.wsgi

    WSGIDaemonProcess horizon user=apache group=apache processes=3 threads=10

    SetEnv APACHE_RUN_USER apache

    SetEnv APACHE_RUN_GROUP apache

    WSGIProcessGroup horizon

 

    DocumentRoot /opt/horizon-2012.1/.blackhole/

    Alias /media /opt/horizon-2012.1/openstack_dashboard/static

 

    <Directory />

        Options FollowSymLinks

        AllowOverride None

    </Directory>

 

    <Directory /opt/horizon-2012.1/>

        Options Indexes FollowSymLinks MultiViews

        AllowOverride None

        Order allow,deny

        allow from all

    </Directory>

 

    ErrorLog /var/log/httpd/error.log

    LogLevel warn

    CustomLog /var/log/httpd/access.log combined

</VirtualHost>

mkdir /opt/horizon-2012.1/.blackhole

·        Configure HORIZON

Create a local_settings.py file under /opt/horizon-2012.1/openstack_dashboard/local with the following content:

import os

 

DEBUG = False

TEMPLATE_DEBUG = DEBUG

PROD = False

USE_SSL = False

 

LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))

 

# FIXME: We need to change this to mysql, instead of sqlite.

DATABASES = {

    'default': {

    'ENGINE': 'django.db.backends.mysql',

    'NAME': 'dashboard',

    'USER': 'root',

    'PASSWORD': 'openstack',

    'HOST': 'localhost',

    'PORT': '3306',

    },

}

 

# The default values for these two settings seem to cause issues with apache

CACHE_BACKEND = 'dummy://'

SESSION_ENGINE = 'django.contrib.sessions.backends.cached_db'

 

# Send email to the console by default

EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'

# Or send them to /dev/null

# EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'

 

# django-mailer uses a different settings attribute

MAILER_EMAIL_BACKEND = EMAIL_BACKEND

 

# Configure these for your outgoing email host

# EMAIL_HOST = 'smtp.my-company.com'

# EMAIL_PORT = 25

# EMAIL_HOST_USER = 'djangomail'

# EMAIL_HOST_PASSWORD = 'top-secret!'

 

HORIZON_CONFIG = {

    'dashboards': ('nova', 'syspanel','settings',),

    'default_dashboard': 'nova',

    'user_home':'openstack_dashboard.views.user_home',

}

 

# TODO(tres): Remove these once Keystone has an API to identify auth backend.

OPENSTACK_KEYSTONE_BACKEND = {

    'name': 'native',

    'can_edit_user': True

}

 

OPENSTACK_HOST = "60.12.206.105"

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST

# FIXME: this is only needed until keystone fixes its GET /tenants call

# so that it doesn't return everything for admins

OPENSTACK_KEYSTONE_ADMIN_URL = "http://%s:35357/v2.0" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"

 

SWIFT_PAGINATE_LIMIT = 100

 

# If you have external monitoring links, eg:

#EXTERNAL_MONITORING = [

#     ['Nagios','http://foo.com'],

#     ['Ganglia','http://bar.com'],

# ]

 

# LOGGING = {

#        'version': 1,

#        # When set to True this will disable all logging except

#        # for loggers specified in this configuration dictionary. Note that

#        # if nothing is specified here and disable_existing_loggers is True,

#        # django.db.backends will still log unless it is disabled explicitly.

#        'disable_existing_loggers': False,

#        'handlers': {

#            'null': {

#                'level': 'DEBUG',

#                'class':'django.utils.log.NullHandler',

#                },

#            'console': {

#                # Set the level to "DEBUG" for verbose output logging.

#                'level': 'INFO',

#                'class':'logging.StreamHandler',

#                },

#            },

#        'loggers': {

#            # Logging from django.db.backends is VERY verbose, send to null

#            # by default.

#            'django.db.backends': {

#                'handlers': ['null'],

#                'propagate': False,

#                },

#            'horizon': {

#                'handlers': ['console'],

#                'propagate': False,

#            },

#            'novaclient': {

#                'handlers': ['console'],

#                'propagate': False,

#            },

#            'keystoneclient': {

#                'handlers': ['console'],

#                'propagate': False,

#            },

#            'nose.plugins.manager': {

#                'handlers': ['console'],

#                'propagate': False,

#            }

#        }

#}

·        Serve django static files

Edit /opt/horizon-2012.1/openstack_dashboard/urls.py and append the following at the bottom:

if settings.DEBUG is False:

    urlpatterns += patterns('',

        url(r'^static/(?P<path>.*)$', 'django.views.static.serve', {

            'document_root': settings.STATIC_ROOT,

        }),

   )

python /opt/horizon-2012.1/manage.py collectstatic (answer yes when prompted)

·        Create the HORIZON database schema

python /opt/horizon-2012.1/manage.py syncdb

·        Restart the apache service

chown -R apache:apache /opt/horizon-2012.1

/etc/init.d/httpd restart
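The dashboard should now answer on port 80; a quick check of the response headers (the exact status may be a 200 or a redirect to the login page):

curl -I http://60.12.206.105/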

3.19 NOVNC WEB Access Configuration

·        Edit /etc/nova/nova.conf and add the following:

novncproxy_base_url=http://$my_ip:6080/vnc_auto.html

vnc_enabled=true

vnc_keymap=en-us

vncserver_listen=$my_ip

vncserver_proxyclient_address=$my_ip

·        Add the NOVNC executable to the PATH

ln -sv /opt/noVNC/utils/nova-novncproxy /usr/bin/

·        Create an init script named nova-novncproxy under /etc/init.d/ with the following content:

#!/bin/sh

#

# openstack-nova-novncproxy  OpenStack Nova VNC Web Console

#

# chkconfig:   - 20 80

# description: OpenStack Nova VNC Web Console

 

### BEGIN INIT INFO

# Provides:

# Required-Start: $remote_fs $network $syslog

# Required-Stop: $remote_fs $syslog

# Default-Stop: 0 1 6

# Short-Description: OpenStack Nova VNC Web Console

# Description: OpenStack Nova VNC Web Console

### END INIT INFO

 

. /etc/rc.d/init.d/functions

 

suffix=novncproxy

prog=openstack-nova-$suffix

web="/opt/noVNC"

exec="/usr/bin/nova-$suffix"

config="/etc/nova/nova.conf"

pidfile="/var/run/nova/nova-$suffix.pid"

logfile="/var/log/nova/$suffix.log"

 

[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog

 

lockfile=/var/lock/nova/$prog

 

start(){

    [ -x $exec ] || exit 5

    [ -f $config ] || exit 6

    echo -n $"Starting $prog: "

    daemon --user nova --pidfile $pidfile "$exec --config-file=$config --web $web --logfile=$logfile --daemon &>/dev/null & echo \$! > $pidfile"

    retval=$?

    echo

    [ $retval -eq 0 ] && touch $lockfile

    return $retval

}

 

stop() {

    echo -n $"Stopping $prog: "

    killproc -p $pidfile $prog

    retval=$?

    echo

    [ $retval -eq 0 ] && rm -f $lockfile

    return $retval

}

 

restart(){

    stop

    start

}

 

reload(){

    restart

}

 

force_reload(){

    restart

}

 

rh_status(){

    status -p $pidfile $prog

}

 

rh_status_q(){

    rh_status >/dev/null 2>&1

}

 

case "$1" in

    start)

        rh_status_q && exit 0

        $1

        ;;

    stop)

        rh_status_q || exit 0

        $1

        ;;

    restart)

        $1

        ;;

    reload)

        rh_status_q || exit 7

        $1

        ;;

    force-reload)

        force_reload

        ;;

    status)

        rh_status

        ;;

    condrestart|try-restart)

        rh_status_q || exit 0

        restart

        ;;

    *)

        echo $"Usage: $0{start|stop|status|restart|condrestart|try-restart|reload|force-reload}"

       exit 2

esac

exit $?

·        Create an init script named nova-consoleauth under /etc/init.d/ with the following content:

#!/bin/sh

#

# openstack-nova-consoleauth  OpenStack Nova Console Auth

#

# chkconfig:   - 20 80

# description: OpenStack Nova Console Auth

 

### BEGIN INIT INFO

# Provides:

# Required-Start: $remote_fs $network $syslog

# Required-Stop: $remote_fs $syslog

# Default-Stop: 0 1 6

# Short-Description: OpenStack Nova Console Auth

# Description: OpenStack Nova Console Auth

### END INIT INFO

 

. /etc/rc.d/init.d/functions

 

suffix=consoleauth

prog=openstack-nova-$suffix

exec="/usr/bin/nova-$suffix"

config="/etc/nova/nova.conf"

pidfile="/var/run/nova/nova-$suffix.pid"

logfile="/var/log/nova/$suffix.log"

 

[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog

 

lockfile=/var/lock/nova/$prog

 

start(){

    [ -x $exec ] || exit 5

    [ -f $config ] || exit 6

    echo -n $"Starting $prog: "

    daemon --user nova --pidfile $pidfile "$exec --config-file=$config --logfile=$logfile &>/dev/null & echo \$! > $pidfile"

    retval=$?

    echo

    [ $retval -eq 0 ] && touch $lockfile

    return $retval

}

 

stop() {

    echo -n $"Stopping $prog: "

    killproc -p $pidfile $prog

    retval=$?

    echo

    [ $retval -eq 0 ] && rm -f $lockfile

    return $retval

}

 

restart(){

    stop

    start

}

 

reload(){

    restart

}

 

force_reload(){

    restart

}

 

rh_status(){

    status -p $pidfile $prog

}

 

rh_status_q(){

    rh_status >/dev/null 2>&1

}

 

case "$1" in

    start)

        rh_status_q && exit 0

        $1

        ;;

    stop)

        rh_status_q || exit 0

        $1

        ;;

    restart)

        $1

        ;;

    reload)

        rh_status_q || exit 7

        $1

        ;;

    force-reload)

        force_reload

        ;;

    status)

        rh_status

        ;;

    condrestart|try-restart)

        rh_status_q || exit 0

        restart

        ;;

    *)

        echo $"Usage: $0{start|stop|status|restart|condrestart|try-restart|reload|force-reload}"

        exit 2

esac

exit $?

·        Configure the init scripts

chmod 755 /etc/init.d/nova-novncproxy

chmod 755 /etc/init.d/nova-consoleauth

·        Start the NOVA-NOVNCPROXY and NOVA-CONSOLEAUTH services

/etc/init.d/nova-novncproxy start

/etc/init.d/nova-consoleauth start

·        Verify that the services started correctly

Use netstat -ltunp to verify that tcp port 6080 is listening.
If the services did not start correctly, check the log files under /var/log/nova.

4 Compute Node Installation

4.1 Prerequisites

·        Import third-party package repositories

rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-5.noarch.rpm

rpm -Uvh http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm

·        Install dependency packages

yum -y install swig libvirt-python libvirt qemu-kvm gcc make gcc-c++ patch m4 python-devel libxml2-devel libxslt-devel libgsasl-devel openldap-devel sqlite-devel openssl-devel wget telnet gpxe-bootimgs gpxe-roms gpxe-roms-qemu dmidecode git scsi-target-utils kpartx socat vconfig aoetools python-pip

rpm -Uvh http://veillard.com/libvirt/6.3/x86_64/dnsmasq-utils-2.48-6.el6.x86_64.rpm

ln -sv /usr/bin/pip-python /usr/bin/pip

4.2 NTP Time Synchronization Configuration

·        Install the NTP client package

yum -y install ntpdate

·        Sync time with the control node and write it to the hardware clock

ntpdate 192.168.1.2

hwclock -w

·        Add the time sync to cron

echo '30 8 * * * root /usr/sbin/ntpdate 192.168.1.2; hwclock -w' >> /etc/crontab

4.3 PYTHON-NOVACLIENT Library Installation

·        Download the source package

wget https://launchpad.net/nova/essex/2012.1/+download/python-novaclient-2012.1.tar.gz -P /opt

·        Install dependency packages

yum -y install python-simplejson python-prettytable python-argparse python-nose1.1 python-httplib2 python-virtualenv MySQL-python

·        Extract and install the PYTHON-NOVACLIENT library

cd /opt

tar xf python-novaclient-2012.1.tar.gz

cd python-novaclient-2012.1

python setup.py install

rm -f ../python-novaclient-2012.1.tar.gz

4.4 GLANCE Image Service Installation

·        Download the source package

wget https://launchpad.net/glance/essex/2012.1/+download/glance-2012.1.tar.gz -P /opt

·        Install dependency packages

yum install -y python-anyjson python-kombu

pip install xattr==0.6.0 iso8601==0.1.4 pysendfile==2.0.0 pycrypto==2.3 wsgiref boto==2.1.1

·        Extract and install the GLANCE image service

cd /opt

tar xf glance-2012.1.tar.gz

cd glance-2012.1

python setup.py install

rm -f ../glance-2012.1.tar.gz

4.5 NOVA Compute Service Installation

·        Download the source package

wget https://launchpad.net/nova/essex/2012.1/+download/nova-2012.1.tar.gz -P /opt

·        Install dependency packages

yum install -y python-amqplib python-carrot python-lockfile python-gflags python-netaddr python-suds python-paramiko python-feedparser python-eventlet python-greenlet python-paste

pip install Cheetah==2.4.4 python-daemon==1.5.5 Babel==0.9.6 routes==1.12.3 lxml==2.3 PasteDeploy==1.5.0 sqlalchemy-migrate==0.7.2 SQLAlchemy==0.7.3 WebOb==1.0.8

·        Extract and install the NOVA compute service

cd /opt

tar xf nova-2012.1.tar.gz

cd nova-2012.1

python setup.py install

rm -f ../nova-2012.1.tar.gz

4.6 NOVA Compute Service Configuration

·        Create the NOVA configuration directory

mkdir /etc/nova

·        Create the NOVA service user

useradd -s /sbin/nologin -m -d /var/log/nova nova

·        Create nova.conf under /etc/nova as the NOVA configuration file, with the following content (note that my_ip here is the compute node's own address, 60.12.206.99, while glance, the EC2/S3 APIs and the novnc proxy still point at the control node, 60.12.206.105):

[DEFAULT]

auth_strategy=keystone

bindir=/usr/bin

pybasedir=/var/lib/nova

connection_type=libvirt

debug=True

lock_path=/var/lock/nova

log-dir=/var/log/nova

my_ip=60.12.206.99

ec2_host=60.12.206.105

ec2_path=/services/Cloud

ec2_port=8773

ec2_scheme=http

glance_host=60.12.206.105

glance_port=9292

glance_api_servers=$glance_host:$glance_port

image_service=nova.image.glance.GlanceImageService

metadata_host=60.12.206.105

metadata_port=8775

network_manager=nova.network.manager.FlatDHCPManager

osapi_path=/v1.1/

osapi_scheme=http

rabbit_host=192.168.1.2

rabbit_password=openstack

rabbit_port=5672

rabbit_userid=guest

root_helper=sudo

s3_host=60.12.206.105

s3_port=3333

sql_connection=mysql://root:openstack@192.168.1.2/nova

state_path=/var/lib/nova

use_ipv6=False

use-syslog=False

verbose=True

ec2_listen=$my_ip

ec2_listen_port=8773

metadata_listen=$my_ip

metadata_listen_port=8775

osapi_compute_listen=$my_ip

osapi_compute_listen_port=8774

osapi_volume_listen=$my_ip

osapi_volume_listen_port=8776

keystone_ec2_url=http://60.12.206.105:5000/v2.0/ec2tokens

dhcpbridge=$bindir/nova-dhcpbridge

dhcpbridge_flagfile=/etc/nova/nova.conf

public_interface=eth0

routing_source_ip=60.12.206.99

fixed_range=10.0.0.0/24

flat_interface=eth1

flat_network_bridge=b41

force_dhcp_release=True

libvirt_type=kvm

libvirt_use_virtio_for_bridges=True

iscsi_helper=ietadm

iscsi_ip_address=$my_ip

novncproxy_base_url=http://60.12.206.105:6080/vnc_auto.html

·        Create an init script named nova-compute under /etc/init.d/ with the following content:

#!/bin/sh

#

# openstack-nova-compute  OpenStack Nova Compute Worker

#

# chkconfig:   - 20 80

# description: Compute workers manage computing instances on host  \

#               machines. Through the API, commands are dispatched \

#               to compute workers to:                             \

#               * Run instances                                    \

#               * Terminate instances                              \

#               * Reboot instances                                 \

#               * Attach volumes                                   \

#               * Detach volumes                                   \

#               * Get console output

 

### BEGIN INIT INFO

# Provides:

# Required-Start: $remote_fs $network $syslog

# Required-Stop: $remote_fs $syslog

# Default-Stop: 0 1 6

# Short-Description: OpenStack Nova Compute Worker

# Description: Compute workers manage computing instances on host

#               machines. Through the API, commands are dispatched

#               to compute workers to:

#               * Run instances

#               * Terminate instances

#               * Reboot instances

#               * Attach volumes

#               * Detach volumes

#               * Get console output

### END INIT INFO

 

. /etc/rc.d/init.d/functions

 

suffix=compute

prog=openstack-nova-$suffix

exec="/usr/bin/nova-$suffix"

config="/etc/nova/nova.conf"

pidfile="/var/run/nova/nova-$suffix.pid"

logfile="/var/log/nova/$suffix.log"

 

[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog

 

lockfile=/var/lock/nova/$prog

 

start(){

    [ -x $exec ] || exit 5

    [ -f $config ] || exit 6

    echo -n $"Starting $prog: "

    daemon --user nova --pidfile $pidfile "$exec --config-file=$config --logfile=$logfile &>/dev/null & echo \$! > $pidfile"

    retval=$?

    echo

    [ $retval -eq 0 ] && touch $lockfile

    return $retval

}

 

stop() {

    echo -n $"Stopping $prog: "

    killproc -p $pidfile $prog

    retval=$?

    echo

    [ $retval -eq 0 ] && rm -f $lockfile

    return $retval

}

 

restart(){

    stop

    start

}

 

reload(){

    restart

}

 

force_reload(){

    restart

}

 

rh_status(){

    status -p $pidfile $prog

}

 

rh_status_q(){

    rh_status >/dev/null 2>&1

}

 

case "$1" in

    start)

        rh_status_q && exit 0

        $1

        ;;

    stop)

        rh_status_q || exit 0

        $1

        ;;

    restart)

        $1

        ;;

    reload)

        rh_status_q || exit 7

        $1

        ;;

    force-reload)

        force_reload

        ;;

    status)

        rh_status

        ;;

    condrestart|try-restart)

        rh_status_q || exit 0

        restart

        ;;

    *)

        echo $"Usage: $0{start|stop|status|restart|condrestart|try-restart|reload|force-reload}"

        exit 2

esac

exit $?

·        Create an init script named nova-network under /etc/init.d/ with the following content:

#!/bin/sh

#

# openstack-nova-network  OpenStack Nova Network Controller

#

# chkconfig:   - 20 80

# description: The Network Controller manages the networking resources \

#              on host machines. The API server dispatches commands    \

#              through the message queue, which are subsequently       \

#              processed by Network Controllers.                       \

#              Specific operations include:                            \

#              * Allocate Fixed IP Addresses                           \

#              * Configuring VLANs for projects                        \

#              * Configuring networks for compute nodes

 

### BEGIN INIT INFO

# Provides:

# Required-Start: $remote_fs $network $syslog

# Required-Stop: $remote_fs $syslog

# Default-Stop: 0 1 6

# Short-Description: OpenStack Nova Network Controller

# Description: The Network Controller manages the networking resources

#              on host machines. The API server dispatches commands

#              through the message queue, which are subsequently

#              processed by Network Controllers.

#              Specific operations include:

#              * Allocate Fixed IP Addresses

#              * Configuring VLANs for projects

#              * Configuring networks for compute nodes

### END INIT INFO

 

. /etc/rc.d/init.d/functions

 

suffix=network

prog=openstack-nova-$suffix

exec="/usr/bin/nova-$suffix"

config="/etc/nova/nova.conf"

pidfile="/var/run/nova/nova-$suffix.pid"

logfile="/var/log/nova/$suffix.log"

 

[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog

 

lockfile=/var/lock/nova/$prog

 

start(){

    [ -x $exec ] || exit 5

    [ -f $config ] || exit 6

    echo -n $"Starting $prog: "

    daemon --user nova --pidfile $pidfile "$exec --config-file=$config --logfile=$logfile &>/dev/null & echo \$! > $pidfile"

    retval=$?

    echo

    [ $retval -eq 0 ] && touch $lockfile

    return $retval

}

 

stop() {

    echo -n $"Stopping $prog: "

    killproc -p $pidfile $prog

    retval=$?

    echo

    [ $retval -eq 0 ] && rm -f $lockfile

    return $retval

}

 

restart(){

    stop

    start

}

 

reload(){

    restart

}

 

force_reload(){

    restart

}

 

rh_status(){

    status -p $pidfile $prog

}

 

rh_status_q(){

    rh_status >/dev/null 2>&1

}

 

case "$1" in

    start)

        rh_status_q && exit 0

        $1

        ;;

    stop)

        rh_status_q || exit 0

        $1

        ;;

    restart)

        $1

        ;;

    reload)

        rh_status_q || exit 7

        $1

        ;;

    force-reload)

        force_reload

        ;;

    status)

        rh_status

        ;;

    condrestart|try-restart)

        rh_status_q || exit 0

        restart

        ;;

    *)

        echo $"Usage: $0{start|stop|status|restart|condrestart|try-restart|reload|force-reload}"

        exit 2

esac

exit $?

·        Configure sudo

Create a file named nova under /etc/sudoers.d/ with the following content:

Defaults:nova !requiretty

 

Cmnd_Alias NOVACMDS = /bin/aoe-stat,                          \

                      /bin/chmod,                               \

                      /bin/chmod /var/lib/nova/tmp/*/root/.ssh, \

                      /bin/chown,                               \

                      /bin/chown /var/lib/nova/tmp/*/root/.ssh, \

                      /bin/dd,                                  \

                      /bin/kill,                                \

                      /bin/mkdir,                               \

                      /bin/mount,                               \

                      /bin/umount,                              \

                      /sbin/aoe-discover,                       \

                      /sbin/ifconfig,                           \

                      /sbin/ip,                                 \

                     /sbin/ip6tables-restore,                 \

                     /sbin/ip6tables-save,                    \

                      /sbin/iptables,                           \

                      /sbin/iptables-restore,                   \

                      /sbin/iptables-save,                      \

                      /sbin/iscsiadm,                           \

                      /sbin/kpartx,                             \

                      /sbin/losetup,                            \

                      /sbin/lvcreate,                           \

                      /sbin/lvdisplay,                          \

                      /sbin/lvremove,                           \

                      /sbin/pvcreate,                           \

                      /sbin/route,                              \

                      /sbin/tune2fs,                            \

                      /sbin/vconfig,                            \

                      /sbin/vgcreate,                           \

                      /sbin/vgs,                                \

                      /usr/bin/fusermount,                      \

                      /usr/bin/guestmount,                      \

                      /usr/bin/socat,                           \

                      /bin/cat,                           \

                      /usr/bin/tee,                             \

                      /usr/bin/qemu-nbd,                        \

                      /usr/bin/virsh,                           \

                      /usr/sbin/brctl,                          \

                      /usr/sbin/dnsmasq,                        \

                      /usr/sbin/ietadm,                         \

                      /usr/sbin/radvd,                          \

                      /usr/sbin/tgtadm,                         \

                      /usr/sbin/vblade-persist

 

nova ALL = (root) NOPASSWD: SETENV: NOVACMDS

chmod 0440 /etc/sudoers.d/nova

·        Configure the polkit policy

Create 50-nova.pkla under /etc/polkit-1/localauthority/50-local.d/ with the following content:

[Allow nova libvirt management permissions]

Identity=unix-user:nova

Action=org.libvirt.unix.manage

ResultAny=yes

ResultInactive=yes

ResultActive=yes

·        Configure the init scripts:

chmod 755 /etc/init.d/nova-compute

chmod 755 /etc/init.d/nova-network

mkdir /var/run/nova

mkdir -p /var/lib/nova/instances

mkdir /var/lock/nova

chown nova:root /var/run/nova

chown -R nova:nova /var/lib/nova

chown nova:root /var/lock/nova

·        Configure the MYSQL database

Run the following statement in mysql on the control node:

grant all on nova.* to root@'192.168.1.%' identified by 'openstack';
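The grant can be verified from the compute node before starting any services; the nova tables created on the control node should be listed:

mysql -h 192.168.1.2 -uroot -popenstack -e 'use nova; show tables;' | head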

·        Start the NOVA services

/etc/init.d/nova-compute start

/etc/init.d/nova-network start
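Once both daemons are running, the compute node should register itself with the control node. On the control node, nova-manage lists all services and marks healthy ones with :-) :

nova-manage service list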

·        Adjust iptables to allow vnc connections

iptables -I INPUT -d 60.12.206.99 -p tcp -m multiport --dports 5900:6000 -j ACCEPT
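On CentOS 6 this rule is lost on reboot unless it is saved to /etc/sysconfig/iptables, for example with:

/etc/init.d/iptables save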

Author:趣云团队-yz