OpenStack Grizzly Install Guide - VLAN Mode (3 Nodes)





Controller Node: iDataPlex M2 | eth0: 10.0.1.100 | eth1: 9.186.91.128 | eth2: x

Network Node: System x3550   | eth0: 10.0.1.101 | eth1: 10.20.20.52  | eth2: 9.186.91.130

Compute Node: System x3950   | eth0: 10.0.1.111 | eth1: 10.20.20.53  | eth2: x


OS: Ubuntu 12.04 server

-------------------------------------------------------------------------------------------------------------------------------------------------------------------

1.Controller Node

1.1.Preparing Ubuntu

1) Add Grizzly repositories [Only for Ubuntu 12.04]:

apt-get install -y ubuntu-cloud-keyring
echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main >> /etc/apt/sources.list.d/grizzly.list

2) Update system:

apt-get update -y
apt-get upgrade -y
apt-get dist-upgrade -y
Note: you should reboot the OS here, because 'dist-upgrade' may install a new kernel.
1.2. Networking

<span style="FONT-SIZE: 15px">#for OpenStack managementauto eth0iface eth0 inet staticaddress 10.0.1.100netmask 255.255.255.0# </span><span style="font-size:14px;">For Exposing OpenStack API over the internet</span><span style="FONT-SIZE: 15px">auto eth1iface eth1 inet staticaddress 9.186.91.128netmask 255.255.252.0network 9.186.88.0broadcast 9.186.91.255gateway 9.186.88.1dns-nameservers 9.0.***dns-search crl.***.com</span>
<span style="FONT-SIZE: 15px">service networking restart</span>
1.3. MySQL
1) Install MySQL:

apt-get install -y mysql-server python-mysqldb
2) Configure mysql to accept all incoming requests:
sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
service mysql restart
1.4.RabbitMQ
1) Install RabbitMQ:

apt-get install -y rabbitmq-server

2) Install NTP service:

apt-get install -y ntp

3) Create these databases:

mysql -u root -p
#Keystone
CREATE DATABASE keystone;
GRANT ALL ON keystone.* TO 'keystoneUser'@'%' IDENTIFIED BY 'keystonePass';
#Glance
CREATE DATABASE glance;
GRANT ALL ON glance.* TO 'glanceUser'@'%' IDENTIFIED BY 'glancePass';
#Quantum
CREATE DATABASE quantum;
GRANT ALL ON quantum.* TO 'quantumUser'@'%' IDENTIFIED BY 'quantumPass';
#Nova
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'novaUser'@'%' IDENTIFIED BY 'novaPass';
#Cinder
CREATE DATABASE cinder;
GRANT ALL ON cinder.* TO 'cinderUser'@'%' IDENTIFIED BY 'cinderPass';
quit;
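As a quick sanity check that the remote grants work, you can connect to one of the new databases from another node (a minimal check, assuming the mysql client is installed there and the controller's management IP is reachable):

mysql -h 10.0.1.100 -u keystoneUser -pkeystonePass -e "SHOW DATABASES;"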

1.5.Others
1) Install other services:

apt-get install -y vlan bridge-utils
2) Enable IP_Forwarding:

sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
# To save you from rebooting, perform the following
sysctl net.ipv4.ip_forward=1

1.6.Keystone
1) Install Keystone:

apt-get install -y keystone
2) Adapt the connection attribute in the /etc/keystone/keystone.conf to the new database:
vi /etc/keystone/keystone.conf
connection = mysql://keystoneUser:keystonePass@10.0.1.100/keystone
3) Restart the identity service then synchronize the database:

service keystone restart
keystone-manage db_sync

4) Fill up the keystone database using the two scripts:
#Modify the **HOST_IP** and **EXT_HOST_IP** variables before executing the scripts
wget https://raw.github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/OVS_MultiNode/KeystoneScripts/keystone_basic.sh
wget https://raw.github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/OVS_MultiNode/KeystoneScripts/keystone_endpoints_basic.sh
chmod +x keystone_basic.sh
chmod +x keystone_endpoints_basic.sh
./keystone_basic.sh
./keystone_endpoints_basic.sh

5) Create a simple credential file and load it so you won't be bothered later:

vi creds
#Paste the following:
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin_pass
export OS_AUTH_URL="http://9.186.91.128:5000/v2.0/"
# Load it:
source creds

6) Test Keystone:

keystone user-list
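If this returns an authentication error, re-check the creds file. Requesting a token is another quick check that Keystone and its endpoints respond (assuming the same creds file is loaded):

keystone token-get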

1.7. Glance
1) Install Glance:

apt-get install -y glance

2) Update /etc/glance/glance-api-paste.ini with:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
delay_auth_decision = true
auth_host = 10.0.1.100
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = service_pass

3) Update the /etc/glance/glance-registry-paste.ini with:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.0.1.100
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = service_pass

4) Update /etc/glance/glance-api.conf with:

sql_connection = mysql://glanceUser:glancePass@10.0.1.100/glance
[paste_deploy]
flavor = keystone

5) Update the /etc/glance/glance-registry.conf with:

sql_connection = mysql://glanceUser:glancePass@10.0.1.100/glance
[paste_deploy]
flavor = keystone

6) Restart the glance-api and glance-registry services:

service glance-api restart; service glance-registry restart

7) Synchronize the glance database:
glance-manage db_sync

8) To test Glance, upload the cirros cloud image directly from the internet:

glance image-create --name myFirstImage --is-public true --container-format bare --disk-format qcow2 < cirros-0.3.0-x86_64-disk.img

OR:

glance image-create --name myFirstImage --is-public true --container-format bare --disk-format qcow2 --location https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
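If you use the first form, the cirros image file must already be on disk; it can be fetched beforehand from the same URL used in the second form:

wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img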

9) Now list the image to see what you have just uploaded:
glance image-list

1.8.Quantum
1) Install the Quantum server:
apt-get install -y quantum-server

2)  Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:

#Under the database section
[DATABASE]
sql_connection = mysql://quantumUser:quantumPass@10.0.1.100/quantum
#Under the OVS section
[OVS]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094

3) Edit /etc/quantum/api-paste.ini :

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.0.1.100
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass

4) Update the /etc/quantum/quantum.conf:

[keystone_authtoken]
auth_host = 10.0.1.100
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass
signing_dir = /var/lib/quantum/keystone-signing

5) Restart the quantum server:
service quantum-server restart

1.9. Nova
1) Install the Nova services:

apt-get install -y nova-api nova-cert novnc nova-consoleauth nova-scheduler nova-novncproxy nova-doc nova-conductor

2) Now modify authtoken section in the /etc/nova/api-paste.ini file to this:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.0.1.100
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = service_pass
signing_dirname = /tmp/keystone-signing-nova
# Workaround for https://bugs.launchpad.net/nova/+bug/1154809
auth_version = v2.0

3) Modify the /etc/nova/nova.conf like this:

[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
api_paste_config=/etc/nova/api-paste.ini
compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
rabbit_host=10.0.1.100
nova_url=http://10.0.1.100:8774/v1.1/
sql_connection=mysql://novaUser:novaPass@10.0.1.100/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf

# Auth
use_deprecated_auth=false
auth_strategy=keystone

# Imaging service
glance_api_servers=10.0.1.100:9292
image_service=nova.image.glance.GlanceImageService

# Vnc configuration
novnc_enabled=true
novncproxy_base_url=http://9.186.91.128:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=10.0.1.100
vncserver_listen=0.0.0.0

# Network settings
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://10.0.1.100:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=service
quantum_admin_username=quantum
quantum_admin_password=service_pass
quantum_admin_auth_url=http://10.0.1.100:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
#If you want Quantum + Nova Security groups
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=quantum
#If you want Nova Security groups only, comment the two lines above and uncomment line -1-.
#-1-firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

# Metadata
service_quantum_metadata_proxy = True
quantum_metadata_proxy_shared_secret = helloOpenStack

# Compute #
compute_driver=libvirt.LibvirtDriver

# Cinder #
volume_api_class=nova.volume.cinder.API
osapi_volume_listen_port=5900

4) Synchronize database:

nova-manage db sync

5) Restart nova-* services:

cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done

6) Check for the smiling faces on nova-* services to confirm your installation:
nova-manage service list

1.10.Cinder

1) Install the Cinder packages:

apt-get install -y cinder-api cinder-scheduler cinder-volume iscsitarget open-iscsi iscsitarget-dkms

2) Configure the iSCSI services and restart them:

sed -i 's/false/true/g' /etc/default/iscsitarget
service iscsitarget start
service open-iscsi start

3) Configure /etc/cinder/api-paste.ini like the following:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host = 9.186.91.128
service_port = 5000
auth_host = 10.0.1.100
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = service_pass
signing_dir = /var/lib/cinder

4) Edit the /etc/cinder/cinder.conf to:

[DEFAULT]
rootwrap_config=/etc/cinder/rootwrap.conf
sql_connection = mysql://cinderUser:cinderPass@10.0.1.100/cinder
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper=ietadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
iscsi_ip_address=10.0.1.100
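Note that volume_group = cinder-volumes refers to an LVM volume group that cinder-volume expects to exist. If you have not created it yet, a minimal sketch, assuming a spare disk at /dev/sdb (adjust to your own hardware):

pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb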

5) Then, synchronize database:

cinder-manage db sync

6) Restart the cinder services:

cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i restart; done

7) Verify if cinder services are running:

cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i status; done

1.11. Horizon

1) Install Horizon (the OpenStack dashboard) and memcached:

apt-get install -y openstack-dashboard memcached

2) If you don't like the OpenStack Ubuntu theme, you can remove the package to disable it (optional):

dpkg --purge openstack-dashboard-ubuntu-theme

3) Reload Apache and memcached:

service apache2 restart; service memcached restart

2. Network Node

2.1. Preparing the Node:

1) Add Grizzly repositories [Only for Ubuntu 12.04]:

apt-get install -y ubuntu-cloud-keyring
echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main >> /etc/apt/sources.list.d/grizzly.list

2) Update OS:

apt-get update -y
apt-get upgrade -y
apt-get dist-upgrade -y

3) Install ntp service:

apt-get install -y ntp

4) Configure the NTP server to follow the controller node:

#Comment out the Ubuntu NTP servers
sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 2.ubuntu.pool.ntp.org/#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 3.ubuntu.pool.ntp.org/#server 3.ubuntu.pool.ntp.org/g' /etc/ntp.conf
#Set the network node to follow your controller node
sed -i 's/server ntp.ubuntu.com/server 10.0.1.100/g' /etc/ntp.conf
service ntp restart
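After the restart you can verify that the node is syncing against the controller (ntpq ships with the ntp package; the controller should appear in the peer list):

ntpq -p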

5) Install other services:

apt-get install -y vlan bridge-utils

6)  Enable IP_Forwarding:

sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
# To save you from rebooting, perform the following
sysctl net.ipv4.ip_forward=1

2.2.Networking

Edit /etc/network/interfaces:

auto eth0
iface eth0 inet static
address 10.0.1.101
netmask 255.255.255.0

auto eth1
iface eth1 inet static
address 10.20.20.52
netmask 255.255.255.0

auto eth2
iface eth2 inet static
address 9.186.91.130
netmask 255.255.252.0
network 9.186.88.0
broadcast 9.186.91.255
gateway 9.186.88.1
dns-nameservers 9.0.****
dns-search crl.***.com

2.3. OpenVSwitch (Part1)
1) Install OpenVSwitch:

apt-get install -y openvswitch-switch openvswitch-datapath-dkms

2) Create the bridges:

#br-int will be used for VM integration
ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
ovs-vsctl add-br br-eth1

2.4. Quantum

1)  Install the Quantum openvswitch agent, l3 agent and dhcp agent:

apt-get -y install quantum-plugin-openvswitch-agent quantum-dhcp-agent quantum-l3-agent quantum-metadata-agent

2)  Edit /etc/quantum/api-paste.ini:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.0.1.100
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass

3) Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:

#Under the database section
[DATABASE]
sql_connection = mysql://quantumUser:quantumPass@10.0.1.100/quantum
#Under the OVS section
[OVS]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:br-eth1
#Firewall driver for realizing quantum security group function
[SECURITYGROUP]
firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

4) Update /etc/quantum/metadata_agent.ini:

# The Quantum user information for accessing the Quantum API.
auth_url = http://10.0.1.100:35357/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass
# IP address used by Nova metadata server
nova_metadata_ip = 10.0.1.100
# TCP Port used by Nova metadata server
nova_metadata_port = 8775
metadata_proxy_shared_secret = helloOpenStack

5) Make sure that your rabbitMQ IP in /etc/quantum/quantum.conf is set to the controller node:

rabbit_host = 10.0.1.100

#And update the keystone_authtoken section
[keystone_authtoken]
auth_host = 10.0.1.100
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass
signing_dir = /var/lib/quantum/keystone-signing

6) Edit /etc/sudoers.d/quantum_sudoers to give the quantum user full access like this (this is unfortunately mandatory; afterwards, change the permissions back with chmod 0440 /etc/sudoers.d/quantum_sudoers):

vi /etc/sudoers.d/quantum_sudoers
#Modify the quantum user
quantum ALL=NOPASSWD: ALL
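Before moving on, it is worth validating the syntax of the edited sudoers fragment, since a broken file can lock out sudo (visudo's check mode):

visudo -c -f /etc/sudoers.d/quantum_sudoers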

7) Restart all the services:

cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i restart; done

2.5. OpenVSwitch (Part2)
1) Edit eth1 and eth2 in /etc/network/interfaces to look like this:

auto eth1
iface eth1 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down

auto br-eth1
iface br-eth1 inet static
address 10.20.20.52
netmask 255.255.255.0

auto eth2
iface eth2 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down

auto br-ex
iface br-ex inet static
address 9.186.91.130
netmask 255.255.252.0
network 9.186.88.0
broadcast 9.186.91.255
gateway 9.186.88.1
dns-nameservers 9.0.***
dns-search crl.***.com

2) Add eth1 to br-eth1 and eth2 to br-ex:

#br-eth1 will be used for VM configuration
ovs-vsctl add-port br-eth1 eth1
#br-ex is used to make the VMs accessible from the internet
ovs-vsctl add-port br-ex eth2
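You can confirm that the ports landed on the right bridges (ovs-vsctl comes with openvswitch-switch):

ovs-vsctl list-ports br-eth1
ovs-vsctl list-ports br-ex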

3. Compute Node

3.1. Preparing the Node

1) Add Grizzly repositories [Only for Ubuntu 12.04]:

apt-get install -y ubuntu-cloud-keyring
echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main >> /etc/apt/sources.list.d/grizzly.list

2) Update your system:

apt-get update -y
apt-get upgrade -y
apt-get dist-upgrade -y

3) Reboot (you might have a new kernel).
4) Install ntp service:

apt-get install -y ntp

5) Configure the NTP server to follow the controller node:

#Comment out the Ubuntu NTP servers
sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 2.ubuntu.pool.ntp.org/#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 3.ubuntu.pool.ntp.org/#server 3.ubuntu.pool.ntp.org/g' /etc/ntp.conf
#Set the compute node to follow your controller node
sed -i 's/server ntp.ubuntu.com/server 10.0.1.100/g' /etc/ntp.conf
service ntp restart

6) Install other services:

apt-get install -y vlan bridge-utils

7) Enable IP_Forwarding:

sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
# To save you from rebooting, perform the following
sysctl net.ipv4.ip_forward=1


3.2 Networking

Edit /etc/network/interfaces:

auto eth0
iface eth0 inet static
address 10.0.1.111
netmask 255.255.255.0
dns-nameservers 9.0.146.50

auto eth1
iface eth1 inet static
address 10.20.20.53
netmask 255.255.255.0

3.3. KVM
1) Make sure that your hardware supports virtualization:
apt-get install -y cpu-checker
kvm-ok
2) Normally you should get a positive response. Now install KVM and configure it:
 
apt-get install -y kvm libvirt-bin pm-utils

3) Edit the cgroup_device_acl array in the /etc/libvirt/qemu.conf file to:
 
 
cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc", "/dev/hpet", "/dev/net/tun"
]

4) Delete default virtual bridge:
virsh net-destroy default
virsh net-undefine default
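To confirm the default network is really gone, list the libvirt networks; the output should be empty:

virsh net-list --all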

5) Enable live migration by updating /etc/libvirt/libvirtd.conf file:
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

6) Edit libvirtd_opts variable in /etc/init/libvirt-bin.conf file:
env libvirtd_opts="-d -l"

7) Edit /etc/default/libvirt-bin file:
 
libvirtd_opts="-d -l"

8) Restart the libvirt service and dbus to load the new values:
 
service dbus restart && service libvirt-bin restart


3.4. OpenVSwitch

1) Install OpenVSwitch:
 
apt-get install -y openvswitch-switch openvswitch-datapath-dkms

2) Create the bridges:
 
#br-int will be used for VM integration
ovs-vsctl add-br br-int
#br-eth1 will be used for VM configuration
ovs-vsctl add-br br-eth1

3.5. Quantum
1) Install the Quantum openvswitch agent:
apt-get -y install quantum-plugin-openvswitch-agent

2) Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:
 
#Under the database section
[DATABASE]
sql_connection = mysql://quantumUser:quantumPass@10.0.1.100/quantum
#Under the OVS section
[OVS]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:br-eth1
#Firewall driver for realizing quantum security group function
[SECURITYGROUP]
firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

3) Make sure that your rabbitMQ IP in /etc/quantum/quantum.conf is set to the controller node:
rabbit_host = 10.0.1.100

#And update the keystone_authtoken section
[keystone_authtoken]
auth_host = 10.0.1.100
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass
signing_dir = /var/lib/quantum/keystone-signing

4) Edit eth1 in /etc/network/interfaces to look like this:
auto br-eth1
iface br-eth1 inet static
address 10.20.20.53
netmask 255.255.255.0

auto eth1
iface eth1 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down

5)  Add the eth1 to the br-eth1:
ovs-vsctl add-port br-eth1 eth1

6) Restart the quantum openvswitch agent:
service quantum-plugin-openvswitch-agent restart

3.6. Nova

1) Install nova-compute for KVM:
apt-get install -y nova-compute-kvm

2) Now modify authtoken section in the /etc/nova/api-paste.ini file to this:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.0.1.100
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = service_pass
signing_dirname = /tmp/keystone-signing-nova
# Workaround for https://bugs.launchpad.net/nova/+bug/1154809
auth_version = v2.0

3) Edit the /etc/nova/nova-compute.conf file:
[DEFAULT]
libvirt_type=kvm
libvirt_ovs_bridge=br-int
libvirt_vif_type=ethernet
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
libvirt_use_virtio_for_bridges=True

4) Modify the /etc/nova/nova.conf like this:
[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
api_paste_config=/etc/nova/api-paste.ini
compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
rabbit_host=10.0.1.100
nova_url=http://10.0.1.100:8774/v1.1/
sql_connection=mysql://novaUser:novaPass@10.0.1.100/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf

# Auth
use_deprecated_auth=false
auth_strategy=keystone

# Imaging service
glance_api_servers=10.0.1.100:9292
image_service=nova.image.glance.GlanceImageService

# Vnc configuration
novnc_enabled=true
novncproxy_base_url=http://9.186.91.128:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=10.0.1.111   #!!!!!! this compute node's own management IP
vncserver_listen=0.0.0.0

# Network settings
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://10.0.1.100:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=service
quantum_admin_username=quantum
quantum_admin_password=service_pass
quantum_admin_auth_url=http://10.0.1.100:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
#If you want Quantum + Nova Security groups
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=quantum
#If you want Nova Security groups only, comment the two lines above and uncomment line -1-.
#-1-firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

# Metadata
service_quantum_metadata_proxy = True
quantum_metadata_proxy_shared_secret = helloOpenStack

# Compute #
compute_driver=libvirt.LibvirtDriver

# Cinder #
volume_api_class=nova.volume.cinder.API
osapi_volume_listen_port=5900
cinder_catalog_info=volume:cinder:internalURL

5) Restart nova-* services:
cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done

6) Check for the smiling faces on nova-* services to confirm your installation:
nova-manage service list


4. Launch your first VM

To start your first VM, we first need to create a new tenant, a user, and internal and external networks. SSH to your controller node and perform the following.
1) Create a new tenant:
keystone tenant-create --name project_one
keystone role-list

2) Create a new user and assign the member role to it in the new tenant (keystone role-list to get the appropriate id):
keystone user-create --name=user_one --pass=user_one --tenant-id $put_id_of_project_one --email=user_one@domain.com
keystone user-role-add --tenant-id $put_id_of_project_one --user-id $put_id_of_user_one --role-id $put_id_of_member_role

3) Create a new network for the tenant:
quantum net-create --tenant-id $put_id_of_project_one net_proj_one --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 1024

4) Create a new subnet inside the new tenant network:
quantum subnet-create --tenant-id $put_id_of_project_one net_proj_one 192.168.1.0/24

5) Create a router for the new tenant:
quantum router-create --tenant-id $put_id_of_project_one router_proj_one
6) Add the router to the subnet:
 
quantum router-interface-add $put_router_proj_one_id_here $put_subnet_id_here

7) Create your external network with the tenant id belonging to the service tenant (keystone tenant-list to get the appropriate id)
keystone tenant-list
quantum net-create --tenant-id $put_id_of_service_tenant ext_net --router:external=True

8) Create a subnet containing your floating IPs:
quantum subnet-create --tenant-id $put_id_of_service_tenant --allocation-pool start=9.186.91.131,end=9.186.91.191 --gateway 9.186.88.1 ext_net 9.186.88.100/22 --enable_dhcp=False

9) Set the router for the external network:
quantum router-gateway-set $put_router_proj_one_id_here $put_id_of_ext_net_here

VMs gain access to the metadata server running on the controller node via the external network. To create that connection, perform the following:
10) Get the IP address of router proj one:
 
quantum port-list -- --device_id <router_proj_one_id> --device_owner network:router_gateway

11) Add the following route on controller node only:
route add -net 192.168.1.0/24 gw $router_proj_one_IP
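
12) With the networking in place you can now boot an instance. A minimal sketch, assuming a creds file for user_one/project_one (analogous to the admin creds file above, here called user_one_creds) and the ID of net_proj_one taken from quantum net-list; the flavor ID 1 (m1.tiny) is only an example:

source user_one_creds
quantum net-list
nova boot --flavor 1 --image myFirstImage --nic net-id=$put_id_of_net_proj_one my_first_vm
nova list
#Optionally attach a floating IP from ext_net (get the VM's port ID from quantum port-list):
quantum floatingip-create ext_net
quantum floatingip-associate $put_floatingip_id_here $put_vm_port_id_here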