from nova to ironic(6)


According to the Ironic configuration documentation, we need to configure Nova so that provisioning a physical machine works just like provisioning a VM. The Ironic-related settings in nova.conf are:

[default]
compute_driver=nova.virt.ironic.IronicDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
scheduler_host_manager=nova.scheduler.ironic_host_manager.IronicHostManager
ram_allocation_ratio=1.0
reserved_host_memory_mb=0
compute_manager=ironic.nova.compute.manager.ClusteredComputeManager

firewall_driver is not Ironic-specific; ram_allocation_ratio and reserved_host_memory_mb are merely parameters used by the scheduler filters, and other options can be configured as well. IronicDriver is what actually provisions the physical machine, IronicHostManager schedules the available physical machines, and ClusteredComputeManager, if you read its source, essentially reuses Nova's ComputeManager. According to the Ironic documentation, the differences from an ordinary VM are:

1. The node's information must be entered into the DB manually (effectively so; of course the user does not operate on the DB directly), including driver info, instance info, properties, etc.

2. The network in Ironic is Flat, and currently only Flat is supported (https://software.intel.com/en-us/articles/physical-server-provisioning-with-openstack explains why; we will come back to this when discussing Neutron).

3. The flavor configured for Ironic matches the target node itself, i.e. the CPU, memory, etc. of the node registered in step 1 match what the flavor defines; in my view it is enough that the node's registered info <= the actual HW's real specs (see the sketch after this list).
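To make point 3 concrete, here is a minimal sketch of the idea. The node properties and flavor values below are hypothetical examples, not taken from any real deployment:

# Hypothetical example: the node registered in step 1 reports these
# properties (they may be <= the real HW specs) ...
node_properties = {'cpus': 32, 'memory_mb': 65536, 'local_gb': 400,
                   'cpu_arch': 'x86_64'}

# ... and the baremetal flavor should define the same values, so that
# this flavor maps onto exactly this class of node.
flavor = {'vcpus': 32, 'ram_mb': 65536, 'disk_gb': 400}

assert flavor['vcpus'] <= node_properties['cpus']
assert flavor['ram_mb'] <= node_properties['memory_mb']
assert flavor['disk_gb'] <= node_properties['local_gb']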

Starting from the conductor manager:

hosts = self.scheduler_client.select_destinations(context, request_spec, filter_properties)

Tracing upward to see what request_spec and filter_properties are, all the way up to nova.compute.api:

if image_href:
    image_id, boot_meta = self._get_image(context, image_href)
else:
    image_id = None
    boot_meta = self._get_bdm_image_metadata(context, block_device_mapping, legacy_bdm)

If image_href is given, boot_meta is the same information you get from glance image-show <id>:

root@controller:~# glance image-show ad811205-d62a-43ab-bae0-28e4000edeb1
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 64d7c1cd2b6f60c92c14662941cb7913     |
| container_format | bare                                 |
| created_at       | 2014-07-17T13:51:33                  |
| deleted          | False                                |
| disk_format      | qcow2                                |
| id               | ad811205-d62a-43ab-bae0-28e4000edeb1 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | myos                                 |
| owner            | 70402632bd944179a2d44474d5b96fb7     |
| protected        | False                                |
| size             | 13167616                             |
| status           | active                               |
| updated_at       | 2014-07-17T13:51:33                  |
+------------------+--------------------------------------+

If image_href is None, then boot_meta = self._get_bdm_image_metadata(context, block_device_mapping, legacy_bdm), where block_device_mapping looks roughly like:

"block_device_mapping_v2": [
            {
                "device_name": "/dev/sdb1",
                "source_type": "blank",
                "destination_type": "local",
                "delete_on_termination": "True",
                "guest_format": "swap",
                "boot_index": "-1"
            },
            {
                "device_name": "/dev/sda1",
                "source_type": "volume",
                "destination_type": "volume",
                "uuid": "fake-volume-id-1",
                "boot_index": "0"
            }
        ]

There are other attributes as well, such as the image id, volume id, etc. boot_meta is derived from these too: it is fetched from Glance by image_id and then augmented with a few extra attributes.
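As a rough illustration (an assumption pieced together from the glance output above, not a dump of the real structure), boot_meta ends up looking something like an image-metadata dict:

# Hypothetical sketch of boot_meta, assembled from the Glance image
# metadata shown above plus a few attributes added by Nova.
boot_meta = {
    'id': 'ad811205-d62a-43ab-bae0-28e4000edeb1',
    'name': 'myos',
    'container_format': 'bare',
    'disk_format': 'qcow2',
    'size': 13167616,
    'min_disk': 0,
    'min_ram': 0,
    'status': 'active',
    'properties': {},   # extra image properties, if any
}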


filter_properties = self._build_filter_properties(
    context, scheduler_hints, forced_host, forced_node,
    instance_type, base_options.get('pci_requests'))

forced_host and forced_node come from handle_az(context, availability_zone), i.e. they are specified via the availability_zone.

base_options.get('pci_requests') holds data that looks like:

pci_requests = [{'count': 2, 'specs': [{'vendor_id': '8086', 'device_id': '1502'}], 'alias_name': 'alias_1'}]

i.e. the information describing the PCI devices the instance requires.

filter_properties is then a dict that aggregates all of the above: scheduler_hints, forced_host, forced_node, instance_type and pci_requests (a sketch follows).
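Roughly speaking, the aggregated dict looks like the following sketch; the key names such as force_hosts and force_nodes are assumptions from memory and should be checked against _build_filter_properties, and the values are placeholders:

# Sketch of the aggregated filter_properties dict (key names assumed).
filter_properties = {
    'scheduler_hints': {'same_host': ['some-instance-uuid']},
    'instance_type': {'name': 'baremetal', 'vcpus': 32,
                      'memory_mb': 65536, 'root_gb': 400},   # the flavor
    'force_hosts': ['compute-1'],       # present only if forced_host was set
    'force_nodes': ['node-uuid-1'],     # present only if forced_node was set
    'pci_requests': [{'count': 2,
                      'specs': [{'vendor_id': '8086', 'device_id': '1502'}],
                      'alias_name': 'alias_1'}],  # only if PCI devices requested
}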


There is also instances, noted here as well since it comes up frequently:

instances = self._provision_instances(
    context, instance_type, min_count, max_count, base_options,
    boot_meta, security_groups, block_device_mapping,
    shutdown_terminate, instance_group, check_server_group_quota)

To boot n instances, the corresponding records are initialized in the DB and the objects.Instance fields are populated:

fields = {
    'id': fields.IntegerField(),

    'user_id': fields.StringField(nullable=True),
    'project_id': fields.StringField(nullable=True),

    'image_ref': fields.StringField(nullable=True),
    'kernel_id': fields.StringField(nullable=True),
    'ramdisk_id': fields.StringField(nullable=True),
    'hostname': fields.StringField(nullable=True),
    # ... see nova/objects/instance.py; there are many more fields,
    # and you can also add your own
}
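Conceptually, _provision_instances does something like the following for each requested instance. This is a simplified, self-contained sketch, not the actual Nova implementation; base_options, num_instances and save_to_db are stand-ins for the real arguments and DB layer:

# Simplified sketch of what _provision_instances conceptually does:
# create and persist one instance record per requested instance
# before scheduling.
import uuid

def provision_instances_sketch(base_options, num_instances, save_to_db):
    instances = []
    for _ in range(num_instances):
        instance = dict(base_options)        # user_id, project_id, image_ref, ...
        instance['uuid'] = str(uuid.uuid4())
        save_to_db(instance)                 # stands in for instance.create()
        instances.append(instance)
    return instances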

Now let's look at the select_destinations method, which takes three arguments: context, request_spec and filter_properties.

request_spec = scheduler_utils.build_request_spec(context, image, instances)  # puts the image and instance info together

filter_properties was discussed above.
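For reference, a rough sketch of the request_spec built here; the key names are assumptions from memory of this era of Nova, and the values are placeholders:

# Sketch of the request_spec produced by scheduler_utils.build_request_spec
# (key names assumed; values are placeholders).
request_spec = {
    'image': {'id': 'ad811205-d62a-43ab-bae0-28e4000edeb1',
              'disk_format': 'qcow2', 'min_ram': 0, 'min_disk': 0},
    'instance_properties': {'uuid': 'instance-uuid-1',
                            'memory_mb': 65536, 'vcpus': 32},
    'instance_type': {'name': 'baremetal', 'vcpus': 32,
                      'memory_mb': 65536, 'root_gb': 400},
    'num_instances': 1,
}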

The call then goes over the scheduler's RPC to the scheduler manager:

selected_hosts = self._schedule(context, request_spec, filter_properties)

This uses the host_manager, i.e. the nova.scheduler.ironic_host_manager.IronicHostManager mentioned earlier.

1. hosts = self.host_manager.get_all_host_states(context)

Connect to the OpenStack database with mysql, using the tee command to redirect the output to a file for easier reading (the table is long; only part of it is shown):

+-------+-----------+----------+------------+----------------+---------------+-----------------+
| vcpus | memory_mb | local_gb | vcpus_used | memory_mb_used | local_gb_used | hypervisor_type |
+-------+-----------+----------+------------+----------------+---------------+-----------------+
|    32 |    128744 |       49 |          0 |            512 |             0 | QEMU            |
|     0 |         0 |        0 |          0 |              0 |             0 | ironic          |
|     0 |         0 |        0 |          0 |              0 |             0 | ironic          |
...

IronicHostManager overrides the host_state_cls method:

if compute and compute.get('cpu_info') == 'baremetal cpu':
    return IronicNodeState(host, node, **kwargs)

The overridden update_from_compute_node method is then called, and the CPU, disk, hypervisor and stats information (e.g. {"cpu_arch": "i686", "boot_mode": "uefi"}) is assigned to the hosts.
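A rough sketch of what the overridden update_from_compute_node conceptually does; this is a simplification for illustration, not the actual Nova source, and the attribute names are assumptions:

# Sketch: copy the bare-metal node's resources and capability stats
# from the compute_nodes record onto the host state used by the scheduler.
class IronicNodeStateSketch(object):
    def update_from_compute_node(self, compute):
        self.vcpus_total = compute['vcpus']
        self.vcpus_used = compute['vcpus_used']
        self.free_ram_mb = compute['memory_mb'] - compute['memory_mb_used']
        self.free_disk_mb = (compute['local_gb'] - compute['local_gb_used']) * 1024
        self.hypervisor_type = compute['hypervisor_type']
        # stats hold capability key/value pairs reported by the driver,
        # e.g. {"cpu_arch": "i686", "boot_mode": "uefi"}
        self.stats = compute.get('stats') or {}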

The hosts are then filtered using the host info and the filter_properties described earlier, and one of the remaining nodes is chosen at random:

hosts = self.host_manager.get_filtered_hosts(hosts,
    filter_properties, index=num)

weighed_hosts = self.host_manager.get_weighed_hosts(hosts,
    filter_properties)

chosen_host = random.choice(
    weighed_hosts[0:scheduler_host_subset_size])

Filtering is done according to the configured filters, or those specified in the default conf; the individual filters involve quite a lot on their own (a minimal filter sketch follows).
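As a rough illustration of what a scheduler filter looks like, here is a minimal sketch assuming the host_passes(host_state, filter_properties) interface of this era of Nova; it is not one of the actual shipped filters:

# Minimal sketch of a scheduler filter: keep only hosts with enough
# free RAM for the requested flavor.
class EnoughRamFilterSketch(object):
    def host_passes(self, host_state, filter_properties):
        instance_type = filter_properties['instance_type']
        requested_ram = instance_type['memory_mb']
        return host_state.free_ram_mb >= requested_ram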

