from nova to ironic(5)


The previous post covered how an incoming REST request gets routed. The REST call for creating a server is POST /v2/{tenant_id}/servers; let's look at the corresponding create method in nova.api.openstack.compute.servers.py:


1. Parse and validate the incoming data: check that it is well-formed; whether a name is included; whether an admin password for the server was passed in (generate one if not); whether imageRef is set and, if not, whether a block_device_mapping is provided (i.e. boot from volume); check config_drive and whether the extension manager has loaded "os-config-drive"; parse the file-injection contents, security_group, networks and IPv4/IPv6 addresses; get the flavor id, user_data and availability_zone; check quotas, and so on. In short, all of the passed-in data is validated (a condensed illustrative sketch follows);
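
A heavily condensed sketch of this kind of validation (illustrative only, not the actual nova.api.openstack.compute.servers code; the helper name _validate_create_body is made up):

import base64
import os

def _validate_create_body(body):
    # The request body must contain a 'server' dict with at least a name.
    server = body.get('server')
    if not server or 'name' not in server:
        raise ValueError("server name is required")

    # Generate an admin password if the caller did not supply one.
    password = server.get('adminPass') or base64.b64encode(os.urandom(12)).decode()

    # Either an image or a bootable volume (block device mapping) must be given.
    image_ref = server.get('imageRef')
    bdm = server.get('block_device_mapping_v2') or server.get('block_device_mapping')
    if not image_ref and not bdm:
        raise ValueError("either imageRef or a block device mapping is required")

    return {
        'name': server['name'],
        'admin_password': password,
        'image_ref': image_ref,
        'block_device_mapping': bdm,
        'flavor_ref': server.get('flavorRef'),
        'networks': server.get('networks', []),
        'user_data': server.get('user_data'),
        'availability_zone': server.get('availability_zone'),
    }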

2. Call self.compute_api.create (see http://blog.csdn.net/qiuhan0314/article/details/42919143; that series is fairly easy to follow). compute_api comes from nova.compute.API():

CELL_TYPE_TO_CLS_NAME = {'api': 'nova.compute.cells_api.ComputeCellsAPI',
                         'compute': 'nova.compute.api.API',
                         None: 'nova.compute.api.API',
                        }
def _get_compute_api_class_name():
    """Returns the name of compute API class."""
    cell_type = nova.cells.opts.get_cell_type()
    return CELL_TYPE_TO_CLS_NAME[cell_type]



def API(*args, **kwargs):#nova.compute.API()
    class_name = _get_compute_api_class_name()
    return importutils.import_object(class_name, *args, **kwargs)

By default cells are disabled, so get_cell_type() returns None and nova.compute.api.API is used; if cells are enabled, a different API class is chosen depending on the cell type (a simplified sketch of get_cell_type follows the option below):

 cfg.BoolOpt('enable',
                default=False,
                help='Enable cell functionality'),
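
What get_cell_type does is roughly the following (a simplified sketch; the real implementation lives in nova.cells.opts and assumes the cells option group has been registered):

from oslo_config import cfg   # packaged as "oslo.config" in older releases

CONF = cfg.CONF

def get_cell_type():
    # When cells are disabled there is no cell type, so None is returned
    # and the plain nova.compute.api.API gets used.
    if not CONF.cells.enable:
        return None
    return CONF.cells.cell_type   # 'api' or 'compute'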

3. When the Controller for servers is created, it sets: _view_builder_class = views_servers.ViewBuilder

This view builder is used after step 2 in "server = self._view_builder.create(req, instances[0])",

which calls _add_location(robj) to add the URL at which the newly created server can be accessed.
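
Roughly, _add_location copies the server's "self" link into the response's Location header (a simplified sketch; robj is the wrapped response object, with the serialized server available as robj.obj):

def _add_location(robj):
    # Nothing to do if the response does not contain a server document.
    if 'server' not in robj.obj:
        return robj
    # Pick the 'self' link generated by the view builder and expose it as
    # the HTTP Location header of the response.
    links = [link for link in robj.obj['server']['links'] if link['rel'] == 'self']
    if links:
        robj['Location'] = links[0]['href']
    return robj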

4. Execute nova.compute.api.API().create(), which carries the decorator @hooks.add_hook("create_instance"). However, looking through setup.cfg and pkg_resources, there is no entry point registered for create_instance; perhaps it simply has not been added yet. Leaving that as an open question for now.
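
For reference, the general idea behind such a hook decorator is roughly as follows (a generic sketch, not the real nova.hooks module): if no hook is registered under the name, the decorator is effectively a no-op, which would explain why the missing entry point does not break anything.

import functools

_HOOKS = {}   # hook name -> list of objects with pre()/post() methods

def add_hook(name):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for hook in _HOOKS.get(name, []):   # no-op when nothing is registered
                hook.pre(*args, **kwargs)
            result = fn(*args, **kwargs)
            for hook in _HOOKS.get(name, []):
                hook.post(result, *args, **kwargs)
            return result
        return wrapper
    return decorator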

5. Check policy: verify that the action is allowed for the target (looking at /etc/nova/policy.json makes this clear), then check whether multiple instances are being asked to share the same fixed IP or port, and call self._create_instance, which processes the input parameters once more, setting defaults and parsing (a condensed sketch follows the list below):

_handle_availability_zone

_validate_and_build_base_options  # various checks

_provision_instances  # does three things: _check_num_instances_quota, create_db_entry_for_new_instance and send_update_with_states; the names make their purposes obvious

After the instance's state has been recorded in the database, it calls:

self.compute_task_api.build_instances  # i.e. the build_instances method of conductor.ComputeTaskAPI(); at this point the flow finally moves from compute -> conductor
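
Putting those helpers together, _create_instance reads roughly like this (a condensed sketch with simplified signatures, not the real nova.compute.api.API code):

def _create_instance(self, context, **kwargs):
    # Split the requested availability zone into zone/host/node parts.
    az, host, node = self._handle_availability_zone(
        context, kwargs.get('availability_zone'))

    # Validate the request and build the base options / request spec.
    base_options = self._validate_and_build_base_options(context, **kwargs)

    # Check quota, create the DB entries, send the "create" state update.
    instances = self._provision_instances(context, base_options, **kwargs)

    # Hand everything over to the conductor: the compute -> conductor step.
    self.compute_task_api.build_instances(context, instances=instances, **kwargs)
    return instances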


6. Inside conductor.ComputeTaskAPI(), since use_local defaults to False, nova.conductor.api.ComputeTaskAPI is used:

def ComputeTaskAPI(*args, **kwargs):
    use_local = kwargs.pop('use_local', False)
    if oslo.config.cfg.CONF.conductor.use_local or use_local:
        api = conductor_api.LocalComputeTaskAPI
    else:
        api = conductor_api.ComputeTaskAPI
    return api(*args, **kwargs)

7. nova.conductor.api.ComputeTaskAPI merely forwards to nova.conductor.rpcapi.ComputeTaskAPI, which packs up the method name and arguments and casts them out over RPC:

cctxt = self.client.prepare(version=version)
cctxt.cast(context, 'build_instances', **kw)

The target the client connects to is target = messaging.Target(topic=CONF.conductor.topic, version='2.0'), i.e. topic "conductor" with namespace "compute_task". Recalling the compute service startup process covered earlier, the conductor service must have been started the same way, with the RPC server side also using the "conductor" target and dispatching to nova.conductor.manager.ConductorManager, whose __init__ contains:

self.compute_task_mgr = ComputeTaskManager()  # trace it
self.cells_rpcapi = cells_rpcapi.CellsAPI()
self.additional_endpoints.append(self.compute_task_mgr)  # incoming RPC calls are also dispatched to methods on the additional endpoints
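
To see why methods on additional_endpoints are reachable over RPC, here is a minimal oslo.messaging sketch (an illustration only, not the actual nova.service code; start_conductor_rpc_server is a made-up helper):

from oslo_config import cfg          # packaged as "oslo.config" in older releases
import oslo_messaging as messaging   # packaged as "oslo.messaging" in older releases

def start_conductor_rpc_server(manager, host):
    # The RPC server is built from the manager itself plus everything in
    # manager.additional_endpoints, so a cast of 'build_instances' can be
    # dispatched to ComputeTaskManager even though it is not a method of
    # ConductorManager.
    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='conductor', server=host)
    endpoints = [manager] + list(manager.additional_endpoints)
    server = messaging.get_rpc_server(transport, target, endpoints)
    server.start()
    return server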

8. The flow therefore reaches nova.conductor.manager.ComputeTaskManager, whose build_instances contains:

hosts = self.scheduler_client.select_destinations(context, request_spec, filter_properties)

(I pulled the latest code, and this seems to have changed from what I saw before.) Let's walk through scheduler_client.select_destinations: here scheduler_client is nova.scheduler.client.SchedulerClient, and in SchedulerClient's __init__:

def __init__(self):
    self.queryclient = LazyLoader(importutils.import_class(
        'nova.scheduler.client.query.SchedulerQueryClient'))
    self.reportclient = LazyLoader(importutils.import_class(
        'nova.scheduler.client.report.SchedulerReportClient'))
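
LazyLoader defers instantiating the wrapped class until a method is first called on it, so building a SchedulerClient stays cheap. A simplified sketch of the idea (not necessarily the exact nova code):

import functools

class LazyLoader(object):
    def __init__(self, klass, *args, **kwargs):
        self.klass = klass
        self.args = args
        self.kwargs = kwargs
        self.instance = None

    def __getattr__(self, name):
        # Called for attributes not found normally, i.e. the proxied methods.
        return functools.partial(self.__run_method, name)

    def __run_method(self, name, *args, **kwargs):
        # Instantiate the real client on first use, then delegate the call.
        if self.instance is None:
            self.instance = self.klass(*self.args, **self.kwargs)
        return getattr(self.instance, name)(*args, **kwargs)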

The select_destinations function calls queryclient.select_destinations, at which point control passes over to the scheduler via nova.scheduler.rpcapi.SchedulerAPI.select_destinations().

That sends an RPC call which is received and handled by the Manager in nova.scheduler (this involves the nova-scheduler startup process). In SchedulerManager:

dests = self.driver.select_destinations(context, request_spec, filter_properties). The driver here is the scheduler driver; the default is:

 cfg.StrOpt('scheduler_driver',
               default='nova.scheduler.filter_scheduler.FilterScheduler',
               help='Default driver to use for the scheduler'),

You can try tracing through the FilterScheduler here; this part will be covered in detail later (FilterScheduler uses the scheduler_host_manager, which is set in the ironic configuration file).
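
For orientation, the core of a filter-and-weigh scheduler looks roughly like this (an illustrative sketch only; the real FilterScheduler/HostManager code is considerably more involved):

def select_destinations_sketch(hosts, filters, weighers, request_spec, num_instances=1):
    # Keep only the hosts that pass every enabled filter.
    candidates = [h for h in hosts
                  if all(f.host_passes(h, request_spec) for f in filters)]
    if not candidates:
        raise RuntimeError("No valid host was found")
    # Sort the surviving hosts by their combined weight, best first.
    weighed = sorted(candidates,
                     key=lambda h: sum(w.weigh(h, request_spec) for w in weighers),
                     reverse=True)
    return weighed[:num_instances]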


9. When the scheduler call returns, it yields the destination host(s) on which to build the instance. Next self.compute_rpcapi.build_and_run_instance is called, i.e. nova.compute.rpcapi.ComputeAPI.build_and_run_instance, which brings us back into nova.compute. This simply makes an RPC call on the compute topic (see the sketch after the config snippet below):

rpcapi_opts = [
    cfg.StrOpt('compute_topic',
               default='compute',
               help='The topic compute nodes listen on'),
]
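
The cast itself looks roughly like this (a simplified sketch of nova.compute.rpcapi.ComputeAPI.build_and_run_instance; the version pinning and the full argument list are omitted):

def build_and_run_instance(self, ctxt, instance, host, **kwargs):
    # Address the compute service running on the host the scheduler picked,
    # then fire-and-forget the build request.
    cctxt = self.client.prepare(server=host)
    cctxt.cast(ctxt, 'build_and_run_instance', instance=instance, **kwargs)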

The cast lands in nova.compute.manager.ComputeManager.build_and_run_instance, which is where the instance is finally created:

node = self.driver.get_available_nodes(refresh=True)[0]  # driver here is the compute_driver: nova.virt.ironic.IronicDriver, as mentioned earlier when discussing the ironic configuration, or the default nova.virt.libvirt.LibvirtDriver; for LibvirtDriver the node is simply the hypervisor's hostname (you can trace into it):


with self._build_resources(context, instance,
        requested_networks, security_groups, image,
        block_device_mapping) as resources:
    instance.vm_state = vm_states.BUILDING
    instance.task_state = task_states.SPAWNING
    instance.save(expected_task_state=task_states.BLOCK_DEVICE_MAPPING)
    block_device_info = resources['block_device_info']
    network_info = resources['network_info']
    instance_type = None
    if filter_properties is not None:
        instance_type = filter_properties.get('instance_type')
    self.driver.spawn(context, instance, image,
                      injected_files, admin_password,
                      network_info=network_info,
                      block_device_info=block_device_info,
                      instance_type=instance_type)

In LibvirtDriver's spawn:

_create_image, _get_guest_xml and _create_domain_and_network are called; all three involve low-level libvirt calls and are fairly complex (to be covered later). After that it waits for the instance to boot, and finally _build_and_run_instance sends a notification and the create flow is done.


With all of this switching between conductor, manager, api and scheduler, here is a summary of the overall call sequence:

compute api(controller) -> conductor api -> conductor rpc -> conductor manager -> scheduler rpc -> scheduler manager -> compute rpc -> compute manager

