How to Set Up OpenStack to Use Local Disks for Instances


Application requirements quite often demand using locally attached disks (also called direct-attached disks) for OpenStack compute instances. One such example is running virtual Hadoop clusters via OpenStack.

A virtual hadoop cluster on OpenStack leveraging local disks will look something like this:

[Diagram: local_disk_arch - a virtual Hadoop cluster on OpenStack, with compute nodes serving local disks to instances]

As you can see from the diagram above, a compute node performs the additional role of a storage node, serving local disks to the instances (virtual machines).

The OpenStack services are split across the nodes as follows:

Compute + Storage Node

  1. Nova-compute

  2. Cinder-volume

Controller Node

  1. Nova-api

  2. Cinder-scheduler

  3. Cinder-api

  4. Glance

  5. Neutron

In summary, we need to set up the compute node as a cinder volume node as well, so that the local disks can be exposed as cinder volumes. Two cinder drivers can be used for this: LVMVolumeDriver and BlockDeviceDriver.

I’ll show the usage of BlockDeviceDriver, which allows plain block devices to be used by OpenStack instances.

Additionally, the following should be kept in mind:

  1. The default cinder quotas might not be sufficient for real usage, so you may need to raise them accordingly (see the example after the scenario diagrams below).

  2. As of this writing, OpenStack doesn’t have a way to automatically ensure that an instance and the cinder volume attached to it end up on the same node. In other words, the cinder volume being attached to an instance could come from a remote node, i.e. a node other than the one running the instance. This is depicted in Scenario-1, where a disk is served to an instance over iSCSI. For applications like Hadoop we would ideally like to avoid Scenario-1. Network bandwidth has improved significantly and iSCSI is fine in most cases, but for Hadoop workloads this could turn into a scalability issue as the number of compute nodes and data volumes grows. Instead, we want Scenario-2, where the volume is local to the node running the instance. There is a way to achieve this manually, and I’ll describe the details in the subsequent sections.

 

[Diagram: scen-1 - Scenario-1, the cinder volume is served to the instance over iSCSI from a remote node]

 

[Diagram: scen-2 - Scenario-2, the cinder volume is attached from a disk local to the node running the instance]

 

 
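Regarding point 1 above, the default cinder quotas can be raised per tenant. A minimal sketch is shown below; the tenant ID and the limit values are placeholders, not taken from this setup, so adjust them for your environment.

[root@icmnode1 ~]# cinder quota-update --volumes 50 --gigabytes 10000 <tenant-id>
[root@icmnode1 ~]# cinder quota-show <tenant-id>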

In the following sections, I’ll show the configuration details using an IBM PowerKVM server as the compute node. The same configuration also applies to an Intel/KVM compute node.

My test setup has the following disk layout.

  1. /dev/sda : hosts the PowerKVM OS as well as the nova directory (/var/lib/nova)
  2. /dev/sdb : local disk to be provided to OpenStack instances.

The disks can be backed by hardware RAID or come from direct-attached storage enclosures. The point to note is that the disks are local to the host OS.

Further, my setup has the following controller and compute nodes:

1. OpenStack controller node – icmnode1

2. OpenStack compute nodes (PowerKVM) – icmhost1 and icmhost2

 

Step-1. Create a separate nova availability zone per compute node

Create a host aggregate and an availability zone.

[root@icmnode1 ~]# nova aggregate-create icmhost2 icmhost2
+----+----------+-------------------+-------+------------------------------+
| Id | Name     | Availability Zone | Hosts | Metadata                     |
+----+----------+-------------------+-------+------------------------------+
| 1  | icmhost2 | icmhost2          |       | 'availability_zone=icmhost2' |
+----+----------+-------------------+-------+------------------------------+
[root@icmnode1 ~]# nova host-list
+------------------+-------------+----------+
| host_name        | service     | zone     |
+------------------+-------------+----------+
| icmnode1.ibm.com | conductor   | internal |
| icmnode1.ibm.com | scheduler   | internal |
| icmnode1.ibm.com | consoleauth | internal |
| icmhost1.ibm.com | compute     | nova     |
| icmhost2.ibm.com | compute     | nova     |
+------------------+-------------+----------+

Add the compute node to the host aggregate

[root@icmnode1 ~]# nova aggregate-add-host 1 icmhost2.ibm.com
Host icmhost2.ibm.com has been successfully added for aggregate 1
+----+----------+-------------------+--------------------+------------------------------+
| Id | Name     | Availability Zone | Hosts              | Metadata                     |
+----+----------+-------------------+--------------------+------------------------------+
| 1  | icmhost2 | icmhost2          | 'icmhost2.ibm.com' | 'availability_zone=icmhost2' |
+----+----------+-------------------+--------------------+------------------------------+

Check whether the availability zones have been created.

[root@icmnode1 ~]# nova-manage service list
Binary           Host             Zone     Status  State Updated_At
nova-conductor   icmnode1.ibm.com internal enabled :-)   2014-11-02 19:18:25
nova-scheduler   icmnode1.ibm.com internal enabled :-)   2014-11-02 19:18:21
nova-consoleauth icmnode1.ibm.com internal enabled :-)   2014-11-02 19:18:23
nova-compute     icmhost1.ibm.com nova     enabled :-)   2014-11-02 19:18:25
nova-compute     icmhost2.ibm.com icmhost2 enabled :-)   2014-11-02 19:18:24
[root@icmnode1 ~]# nova aggregate-create icmhost1 icmhost1
+----+----------+-------------------+-------+------------------------------+
| Id | Name     | Availability Zone | Hosts | Metadata                     |
+----+----------+-------------------+-------+------------------------------+
| 2  | icmhost1 | icmhost1          |       | 'availability_zone=icmhost1' |
+----+----------+-------------------+-------+------------------------------+
[root@icmnode1 ~]# nova aggregate-add-host 2 icmhost1.ibm.com
Host icmhost1.ibm.com has been successfully added for aggregate 2
+----+----------+-------------------+--------------------+------------------------------+
| Id | Name     | Availability Zone | Hosts              | Metadata                     |
+----+----------+-------------------+--------------------+------------------------------+
| 2  | icmhost1 | icmhost1          | 'icmhost1.ibm.com' | 'availability_zone=icmhost1' |
+----+----------+-------------------+--------------------+------------------------------+
[root@icmnode1 ~]# nova-manage service list
Binary           Host             Zone     Status  State Updated_At
nova-conductor   icmnode1.ibm.com internal enabled :-)   2014-11-02 19:18:55
nova-scheduler   icmnode1.ibm.com internal enabled :-)   2014-11-02 19:18:51
nova-consoleauth icmnode1.ibm.com internal enabled :-)   2014-11-02 19:18:53
nova-compute     icmhost1.ibm.com icmhost1 enabled :-)   2014-11-02 19:18:55
nova-compute     icmhost2.ibm.com icmhost2 enabled :-)   2014-11-02 19:18:54
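The per-host zones can also be cross-checked from the nova client; icmhost1 and icmhost2 should each appear as a separate availability zone:

[root@icmnode1 ~]# nova availability-zone-list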

Step-2. Partition the local disk on compute node into smaller chunks

[root@icmhost1 ~]# gdisk /dev/sdb

Command (? for help): n
Partition number (1-128, default 1): 1
First sector (34-6694453214, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-6694453214, default = 6694453214) or {+-}size{KMGTP}: +300G
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'

Command (? for help): n
Partition number (2-128, default 2):
First sector (34-6694453214, default = 629147648) or {+-}size{KMGTP}:
Last sector (629147648-6694453214, default = 6694453214) or {+-}size{KMGTP}: +300G
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'

… and so on depending on how many chunks you want.

Command (? for help): p
Disk /dev/sdb: 6694453248 sectors, 3.1 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): A697C0D9-11DB-42D6-B7BC-6B5C65665049
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 6694453214
Partitions will be aligned on 2048-sector boundaries
Total free space is 402997181 sectors (192.2 GiB)

Number  Start (sector)  End (sector)  Size       Code  Name
   1    2048            629147647     300.0 GiB  8300  Linux filesystem
   2    629147648       1258293247    300.0 GiB  8300  Linux filesystem
   3    1258293248      1887438847    300.0 GiB  8300  Linux filesystem
   4    1887438848      2516584447    300.0 GiB  8300  Linux filesystem
   5    2516584448      3145730047    300.0 GiB  8300  Linux filesystem
   6    3145730048      3774875647    300.0 GiB  8300  Linux filesystem
   7    3774875648      4404021247    300.0 GiB  8300  Linux filesystem
   8    4404021248      5033166847    300.0 GiB  8300  Linux filesystem
   9    5033166848      5662312447    300.0 GiB  8300  Linux filesystem
  10    5662312448      6291458047    300.0 GiB  8300  Linux filesystem

Command (? for help): w
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING PARTITIONS!!
Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sdb.
The operation has completed successfully.
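If you prefer to avoid the interactive gdisk session, the same partitioning can be scripted. Here is a minimal sketch using sgdisk; the partition count and the +300G size are assumptions chosen to match the layout above, so adjust them for your disk:

[root@icmhost1 ~]# for i in $(seq 1 10); do sgdisk --new=${i}:0:+300G --typecode=${i}:8300 /dev/sdb; done
[root@icmhost1 ~]# partprobe /dev/sdb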

Step-3. Configure the cinder volume service. Ensure the cinder availability zone matches the nova availability zone for the node

**** /etc/cinder/cinder.conf ****

volume_driver=cinder.volume.drivers.block_device.BlockDeviceDriver
available_devices='/dev/sdb1,/dev/sdb2,/dev/sdb3,/dev/sdb4,/dev/sdb5,/dev/sdb6,/dev/sdb7,/dev/sdb8,/dev/sdb9,/dev/sdb10'
storage_availability_zone=icmhost1
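After editing cinder.conf, the cinder-volume service needs to be restarted to pick up the new driver settings. The exact service name depends on your distribution; the name below is the common RHEL/CentOS-style unit and is only a sketch:

[root@icmhost1 ~]# systemctl restart openstack-cinder-volume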

The storage availability zone and the nova availability zone are the same: ‘icmhost1’.

[root@icmnode1 ~]# cinder availability-zone-list
+----------+-----------+
| Name     | Status    |
+----------+-----------+
| icmhost1 | available |
| icmhost2 | available |
| nova     | available |
+----------+-----------+

Step-4. Create cinder volumes

[root@icmhost2 ~]# cinder create 300 --display-name icmhost2-vol1 --availability-zone icmhost2
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | icmhost2                             |
| bootable            | false                                |
| created_at          | 2014-11-03T03:46:58.052414           |
| display_description | None                                 |
| display_name        | icmhost2-vol1                        |
| encrypted           | False                                |
| id                  | b30a6f9b-a76a-4fb2-b979-82c74352f938 |
| metadata            | {}                                   |
| size                | 300                                  |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| volume_type         | None                                 |
+---------------------+--------------------------------------+
[root@icmhost2 ~]# cinder list
+--------------------------------------+-----------+----------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name   | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+----------------+------+-------------+----------+-------------+
| 225514af-ee99-4ddc-b691-84ec69b34f0d | available | icmhost2-vol3  | 300  | None        | false    |             |
| 234fa4db-41fa-4c22-8768-cce32af9f70b | available | icmhost1-vol7  | 300  | None        | false    |             |
| 241f2d5d-3783-42f3-8f30-eb8aa5d930d9 | available | icmhost2-vol5  | 300  | None        | false    |             |
| 25dae4c8-e786-4586-ac0e-f33e264143cf | available | icmhost2-vol8  | 300  | None        | false    |             |
| 3292e09a-4c13-4f22-bd19-c96fb8c02d35 | available | icmhost1-vol8  | 300  | None        | false    |             |
| 3a198cb1-e5a1-401f-9f48-7f0f704d1d89 | available | icmhost1-vol9  | 300  | None        | false    |             |
| 675b2c51-e4ed-4b7e-9b58-f122a080998d | available | icmhost1-vol4  | 300  | None        | false    |             |
| 68a19e4e-c152-4b44-9039-e2c2a2eb637a | available | icmhost2-vol7  | 300  | None        | false    |             |
| 6b033e5c-5b43-48cc-9f2c-2f9f9ce923df | available | icmhost2-vol6  | 300  | None        | false    |             |
| 71f3d676-c443-4b7c-8a67-be83dbeed652 | available | icmhost2-vol9  | 300  | None        | false    |             |
| 7e6561fa-bc4f-4378-b630-0ff18a71dbf3 | available | icmhost1-vol3  | 300  | None        | false    |             |
| 7fdeb349-5452-4f63-84f1-57af8e870e16 | available | icmhost1-vol5  | 300  | None        | false    |             |
| 813b62ad-3549-47d4-a7e7-27402c30408f | available | icmhost1-vol10 | 300  | None        | false    |             |
| 91ebd346-312b-42a0-866d-5154a0ac1f29 | available | icmhost2-vol4  | 300  | None        | false    |             |
| b30a6f9b-a76a-4fb2-b979-82c74352f938 | available | icmhost2-vol1  | 300  | None        | false    |             |
| b647aa56-ddf8-42bb-ad66-0f1a62541b3f | available | icmhost2-vol10 | 300  | None        | false    |             |
| cd673599-5d08-4e7a-ae90-c5b7d5e0ac24 | available | icmhost1-vol2  | 300  | None        | false    |             |
| e0df0f74-595b-4c2e-9915-cd2eb0204f85 | available | icmhost1-vol1  | 300  | None        | false    |             |
| e570651c-fc32-4ef4-ab96-82a647197ff2 | available | icmhost2-vol2  | 300  | None        | false    |             |
| eda9cd40-9a8a-47bc-8bbc-7309970c846a | available | icmhost1-vol6  | 300  | None        | false    |             |
+--------------------------------------+-----------+----------------+------+-------------+----------+-------------+
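Creating one volume per local partition on each host can be scripted instead of running cinder create twenty times by hand. Here is a sketch that reproduces the naming used above; the 300 GB size and the per-host count of ten match this setup and should be adjusted for yours:

[root@icmnode1 ~]# for host in icmhost1 icmhost2; do for i in $(seq 1 10); do cinder create 300 --display-name ${host}-vol${i} --availability-zone ${host}; done; done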

Step-5. Boot a compute instance and attach cinder volume

[root@icmnode1 cinder]# nova boot --flavor 2 --image 2c8bf73f-2fd9-472f-8c0d-6b67a5fffe71 --block-device source=volume,id=e0df0f74-595b-4c2e-9915-cd2eb0204f85,dest=volume,shutdown=preserve --nic net-id=43b6c73b-f977-49ac-ad71-a7af6b2f05e6 --availability-zone icmhost1 hadoop1

[Screenshot: step-5 - the hadoop1 instance booted with the cinder volume attached]
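Additional volumes from the same availability zone can be attached to the running instance afterwards. As a sketch, this attaches icmhost1-vol2 (its ID is taken from the cinder list output above) to hadoop1, letting nova pick the device name:

[root@icmnode1 cinder]# nova volume-attach hadoop1 cd673599-5d08-4e7a-ae90-c5b7d5e0ac24 auto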
