Setting up a CloudStack + GlusterFS test environment
This is an example of how to configure an environment where you can test CloudStack and Gluster. It uses two machines on the same LAN: one acts as a KVM hypervisor and the other as storage and management server. Because the (virtual) networking in the hypervisor is a little more complex than the networking on the management server, the hypervisor will be set up with an OpenVPN connection so that the local LAN is not affected by 'foreign' network traffic.
I am not a CloudStack specialist, so this configuration may not be optimal for real-world usage. The intention is to be able to test CloudStack and its Gluster integration in existing networks. The CloudStack installation and configuration described here is suitable for testing and development systems; for production environments it is highly recommended to follow the CloudStack documentation instead.
.----------------.                       .-------------------.
|                |                       |                   |
| KVM Hypervisor | <------- LAN -------> | Management Server |
|                |   ^-- OpenVPN --^     |                   |
'----------------'                       '-------------------'
agent.cloudstack.tld                    storage.cloudstack.tld
Both systems have one network interface with a static IP-address. No other IP-addresses can be used in the LAN. This makes it difficult to access the virtual machines, but that does not matter much for this testing.
Both systems need a basic installation:
- Red Hat Enterprise Linux 6.5 (CentOS 6.5 should work too)
- Fedora EPEL enabled (howto install epel-release)
- enable ssh access
- SELinux in permissive mode (or disabled)
- firewall enabled, but not restricting anything
- Java 1.7 from the standard java-1.7.0-openjdk packages (not Java 1.6)
On the hypervisor, an additional (internal only) bridge needs to be setup. This bridge will be used for providing IP-addresses to the virtual machines. Each virtual machine seems to need at least 3 IP-addresses. This is a default in CloudStack. This example uses virtual networks 192.168.N.0/24, where N is 0 to 4.
Configuration for the main cloudbr0 device:
#file: /etc/sysconfig/network-scripts/ifcfg-cloudbr0
DEVICE=cloudbr0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.0.1
NETMASK=255.255.255.0
NM_CONTROLLED=no
And the additional IP-addresses on the cloudbr0 bridge (create 4 files, replace N by 1, 2, 3 and 4):
#file: /etc/sysconfig/network-scripts/ifcfg-cloudbr0:N
DEVICE=cloudbr0:N
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.N.1
NETMASK=255.255.255.0
NM_CONTROLLED=no
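Since the four alias files differ only in N, a small loop can generate them. This is a sketch: it writes into the current directory by default so it can be dry-run anywhere; set OUT=/etc/sysconfig/network-scripts on the hypervisor to create the real files.

```shell
# target directory; override with OUT=/etc/sysconfig/network-scripts
OUT=${OUT:-.}
for N in 1 2 3 4; do
cat << EOF > "$OUT/ifcfg-cloudbr0:$N"
DEVICE=cloudbr0:$N
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.$N.1
NETMASK=255.255.255.0
NM_CONTROLLED=no
EOF
done
```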
Enable the new cloudbr0 bridge with all its IP-addresses:
# ifup cloudbr0
Any VM with a 192.168.*.* address should be able to reach the real LAN, and ultimately also the internet. Enabling NAT for the internal virtual networks is the easiest way:
# iptables -t nat -A POSTROUTING -o eth0 -s 192.168.0.0/24 -j MASQUERADE
# iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.0/24 -j MASQUERADE
# iptables -t nat -A POSTROUTING -o eth0 -s 192.168.2.0/24 -j MASQUERADE
# iptables -t nat -A POSTROUTING -o eth0 -s 192.168.3.0/24 -j MASQUERADE
# iptables -t nat -A POSTROUTING -o eth0 -s 192.168.4.0/24 -j MASQUERADE
# service iptables save
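Note that masquerading only works when the kernel is allowed to forward packets between interfaces, which is disabled on a stock RHEL 6 install. A minimal sketch to enable it (run as root; the sed assumes the default ip_forward line in /etc/sysctl.conf):

```shell
# enable IPv4 forwarding for the running kernel
sysctl -w net.ipv4.ip_forward=1
# make the setting persistent across reboots
sed -i 's/^net.ipv4.ip_forward = 0$/net.ipv4.ip_forward = 1/' /etc/sysctl.conf
```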
The hypervisor will need to be set up to act as a gateway for the virtual machines on the cloudbr0 bridge. In order to do so, a very basic OpenVPN service does the trick:
# yum install openvpn
# openvpn --genkey --secret /etc/openvpn/static.key
# cat << EOF > /etc/openvpn/server.conf
dev tun
ifconfig 192.168.200.1 192.168.200.2
secret static.key
EOF
# chkconfig openvpn on
# service openvpn start
On the management server, OpenVPN needs to be configured as a client, so that routing to the virtual networks is possible:
# yum install openvpn
# cat << EOF > /etc/openvpn/client.conf
remote real-hostname-of-hypervisor.example.net
dev tun
ifconfig 192.168.200.2 192.168.200.1
secret static.key
EOF
# scp real-hostname-of-hypervisor.example.net:/etc/openvpn/static.key /etc/openvpn
# chkconfig openvpn on
# service openvpn start
In /etc/hosts (on both the hypervisor and management server) the internal hostnames for the environment should be added:
#file: /etc/hosts
192.168.200.1 agent.cloudstack.tld
192.168.200.2 storage.cloudstack.tld
The hypervisor will also function as a DNS server for the virtual machines. The easiest option is dnsmasq, which uses /etc/hosts and /etc/resolv.conf for resolving:
# yum install dnsmasq
# chkconfig dnsmasq on
# service dnsmasq start
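A quick way to check the resolver from the management server (dig comes from the bind-utils package; 192.168.200.1 is the hypervisor's tunnel address from the OpenVPN setup above):

```shell
# ask dnsmasq on the hypervisor for one of the internal names;
# it should answer with the /etc/hosts entry, 192.168.200.1
dig +short agent.cloudstack.tld @192.168.200.1
```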
The management server is also used as a Gluster Storage Server. Therefore it needs some Gluster packages:
# wget -O /etc/yum.repos.d/glusterfs-epel.repo \
    http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/RHEL/glusterfs-epel.repo
# yum install glusterfs-server
# vim /etc/glusterfs/glusterd.vol
# service glusterd restart
Create two volumes where CloudStack will store disk images. Before starting the volumes, apply the required settings too. Note that the hostname that holds the bricks should be resolvable by the hypervisor and the Secondary Storage VMs. This example does not show how to create volumes for production usage; do not create volumes like this for anything other than testing and scratch data.
# mkdir -p /bricks/primary/data
# mkdir -p /bricks/secondary/data
# gluster volume create primary storage.cloudstack.tld:/bricks/primary/data
# gluster volume set primary storage.owner-uid 36
# gluster volume set primary storage.owner-gid 36
# gluster volume set primary server.allow-insecure on
# gluster volume set primary nfs.disable true
# gluster volume start primary
# gluster volume create secondary storage.cloudstack.tld:/bricks/secondary/data
# gluster volume set secondary storage.owner-uid 36
# gluster volume set secondary storage.owner-gid 36
# gluster volume start secondary
When the preparation is all done, it is time to install Apache CloudStack. Support for Gluster is planned for CloudStack 4.4. At the moment not all required changes are included in the CloudStack git repository, so the RPM packages need to be built from the Gluster Forge repository where the development is happening. On a system running RHEL-6.5, check out the sources and build the packages (this needs a standard CloudStack development environment, including java-1.7.0-openjdk-devel, Apache Maven and others):
$ git clone git://forge.gluster.org/cloudstack-gluster/cloudstack.git
$ cd cloudstack
$ git checkout -t -b wip/master/gluster
$ cd packaging/centos63
$ ./package.sh
In the end, these packages should have been built:
- cloudstack-management-4.4.0-SNAPSHOT.el6.x86_64.rpm
- cloudstack-common-4.4.0-SNAPSHOT.el6.x86_64.rpm
- cloudstack-agent-4.4.0-SNAPSHOT.el6.x86_64.rpm
- cloudstack-usage-4.4.0-SNAPSHOT.el6.x86_64.rpm
- cloudstack-cli-4.4.0-SNAPSHOT.el6.x86_64.rpm
- cloudstack-awsapi-4.4.0-SNAPSHOT.el6.x86_64.rpm
On the management server, install the following packages:
# yum localinstall cloudstack-management-4.4.0-SNAPSHOT.el6.x86_64.rpm \
    cloudstack-common-4.4.0-SNAPSHOT.el6.x86_64.rpm \
    cloudstack-awsapi-4.4.0-SNAPSHOT.el6.x86_64.rpm
Install and configure the database:
# yum install mysql-server
# chkconfig mysqld on
# service mysqld start
# vim /etc/cloudstack/management/classpath.conf
# cloudstack-setup-databases cloud:secret --deploy-as=root:
Install the systemvm templates:
# mount -t nfs storage.cloudstack.tld:/secondary /mnt
# /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
    -m /mnt \
    -h kvm \
    -u http://jenkins.buildacloud.org/view/master/job/build-systemvm-master/lastSuccessfulBuild/artifact/tools/appliance/dist/systemvmtemplate-master-kvm.qcow2.bz2
# umount /mnt
The management server is now prepared, and the web UI can be configured:
# cloudstack-setup-management
On the hypervisor, install the following additional packages:
# yum install qemu-kvm libvirt glusterfs-fuse
# yum localinstall cloudstack-common-4.4.0-SNAPSHOT.el6.x86_64.rpm \
    cloudstack-agent-4.4.0-SNAPSHOT.el6.x86_64.rpm
# cloudstack-setup-agent
Make sure that in /etc/cloudstack/agent/agent.properties the right NICs are being used:
guest.network.device=cloudbr0
private.bridge.name=cloudbr0
private.network.device=cloudbr0
network.direct.device=cloudbr0
public.network.device=cloudbr0
Go to the CloudStack web interface, which should now be running on the management server: http://real-hostname-of-mgmt.example.net:8080/client. The default username/password is admin / password.
It is easiest to skip the configuration wizard (it is not certain that it supports Gluster already). When the normal interface is shown, a new 'Zone' can be added under 'Infrastructure'. The Zone wizard will need the following input:
- DNS 1: 192.168.0.1
- Internal DNS 1: 192.168.0.1
- Hypervisor: KVM
Under POD, use these options:
- Reserved system gateway: 192.168.0.1
- Reserved system netmask: 255.255.255.0
- Start reserved system IP: 192.168.0.10
- End reserved system IP: 192.168.0.250
Next the network config for the virtual machines:
- Guest gateway: 192.168.1.1
- Guest system netmask: 255.255.255.0
- Guest start IP: 192.168.1.10
- Guest end IP: 192.168.1.250
Primary storage:
- Type: Gluster
- Server: storage.cloudstack.tld
- Volume: primary
Secondary Storage:
- Type: nfs
- Server: storage.cloudstack.tld
- path: /secondary
Hypervisor agent:
- hostname: agent.cloudstack.tld
- username: root
- password: password
If this all succeeded, the newly created Zone can be enabled. After a while, there should be two system VMs listed under Infrastructure. It is possible to log in to these system VMs and check that everything is working. To do so, log in over SSH on the hypervisor and connect to the VMs through libvirt:
# virsh list
 Id    Name                           State
----------------------------------------------------
 1     s-1-VM                         running
 2     v-2-VM                         running

# virsh console 1
Connected to domain s-1-VM
Escape character is ^]

Debian GNU/Linux 7 s-1-VM ttyS0

s-1-VM login: root
Password: password
...
root@s-1-VM:~#
Log out from the shell, and press CTRL+] to disconnect from the console.
To verify that this VM indeed runs with the QEMU+libgfapi integration, check the log file that libvirt writes and confirm that there is a -drive with a gluster+tcp:// URL in /var/log/libvirt/qemu/s-1-VM.log:
... /usr/libexec/qemu-kvm -name s-1-VM ... -drive file=gluster+tcp://storage.cloudstack.tld:24007/primary/d69
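To pull just those URLs out of the log, a grep one-liner works (a sketch; the log name matches the s-1-VM example above, and the exact disk path will differ per deployment):

```shell
# print every gluster+tcp:// disk URL that QEMU was started with
grep -o 'gluster+tcp://[^,]*' /var/log/libvirt/qemu/s-1-VM.log
```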