[nova] nova resize: modifying the flavor in the database to bypass migration

Resize adjusts an instance's vCPU, memory, and disk resources. How many resources an instance gets is defined by its flavor; the resize operation reallocates resources by selecting a new flavor for the instance.

When nova resize is used to change an instance's configuration, what actually runs is the migrate code path.

     Why does a resize need a migration at all? Bear in mind that the migration path touches the underlying network, disk copying, and the scheduling algorithm, so the process is fairly complex.
     My take is that the migration exists because the typical scenario is a user scaling up (disk and so on); OpenStack's scheduling policy then runs nova-scheduler to pick the compute node best able to satisfy the new flavor (which may well turn out to be the current host).

By default, a resize first merges the instance's disk image with its backing image into a raw file, converts the result back to qcow2, and then performs the migration. The whole process is quite time-consuming, yet usually we only want to change the CPU and memory size, which requires neither the image merge nor the migration, so the migration step often feels unnecessary.

resize and migrate share the same low-level interface: if the frontend passes in a new flavor, the call is a resize, and that new flavor is handed down.

     For a migration, the flavor handed down is the instance's own current flavor; the lower layers follow the same logic either way. The only difference between resize and migrate is whether the instance's flavor changes during the move.

     Because the resize path shares its implementation with cold migration, the instance is shut down during the move (note: it is not shut down during the confirm_resize step). In effect, once the resize completes, the new instance has already been migrated and can serve traffic again. The flow can be traced through the source code below.
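Distilled from that source, the core decision can be sketched as a small standalone function. This is a hypothetical simplification, not Nova's actual API: the flavor dicts and the FLAVORS lookup table stand in for Nova's Flavor objects and the flavor table.

```python
# Hypothetical sketch of the decision nova.compute.api.API.resize() makes:
# with flavor_id=None the request is a plain migration that keeps the
# current flavor; with a flavor_id it becomes a resize, subject to the
# zero-disk and same-flavor checks. FLAVORS stands in for the flavor table.

FLAVORS = {
    "1": {"id": 2, "flavorid": "1", "name": "m1.tiny", "root_gb": 1},
    "3": {"id": 5, "flavorid": "3", "name": "m1.medium", "root_gb": 40},
}

def choose_action(current_flavor, flavor_id=None):
    """Return (action, target_flavor) for a resize/migrate request."""
    if flavor_id is None:
        # No new flavor requested: treat the call as a migration.
        return "migrate", current_flavor
    new_flavor = FLAVORS[flavor_id]
    if new_flavor["root_gb"] == 0 and current_flavor["root_gb"] != 0:
        raise ValueError("Resize to zero disk flavor is not allowed.")
    if new_flavor["id"] == current_flavor["id"]:
        raise ValueError("Cannot resize to the same flavor.")
    return "resize", new_flavor
```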

@wrap_check_policy
@check_instance_lock
@check_instance_cell
@check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED])
def resize(self, context, instance, flavor_id=None, clean_shutdown=True,
           **extra_instance_updates):
    """Resize (ie, migrate) a running instance.

    If flavor_id is None, the process is considered a migration, keeping
    the original flavor_id. If flavor_id is not None, the instance should
    be migrated to a new host and resized to the new flavor_id.
    """
    self._check_auto_disk_config(instance, **extra_instance_updates)

    current_instance_type = instance.get_flavor()
    # If flavor_id is not provided, only migrate the instance.
    # Look up the flavor by flavor_id; if it is empty, the new type is
    # the current flavor.
    if not flavor_id:
        LOG.debug("flavor_id is None. Assuming migration.",
                  instance=instance)
        new_instance_type = current_instance_type
    else:
        new_instance_type = flavors.get_flavor_by_flavor_id(
                flavor_id, read_deleted="no")
        if (new_instance_type.get('root_gb') == 0 and
            current_instance_type.get('root_gb') != 0):
            reason = _('Resize to zero disk flavor is not allowed.')
            raise exception.CannotResizeDisk(reason=reason)

    if not new_instance_type:
        raise exception.FlavorNotFound(flavor_id=flavor_id)

    current_instance_type_name = current_instance_type['name']
    new_instance_type_name = new_instance_type['name']
    LOG.debug("Old instance type %(current_instance_type_name)s, "
              "new instance type %(new_instance_type_name)s",
              {'current_instance_type_name': current_instance_type_name,
               'new_instance_type_name': new_instance_type_name},
              instance=instance)

    same_instance_type = (current_instance_type['id'] ==
                          new_instance_type['id'])

    # NOTE(sirp): We don't want to force a customer to change their flavor
    # when Ops is migrating off of a failed host.
    if not same_instance_type and new_instance_type.get('disabled'):
        raise exception.FlavorNotFound(flavor_id=flavor_id)

    if same_instance_type and flavor_id and self.cell_type != 'compute':
        raise exception.CannotResizeToSameFlavor()

    # ensure there is sufficient headroom for upsizes
    if flavor_id:
        # Compute the quota the resize needs, mainly vCPUs and memory.
        deltas = self._upsize_quota_delta(context, new_instance_type,
                                          current_instance_type)
        try:
            quotas = self._reserve_quota_delta(context, deltas, instance)
        except exception.OverQuota as exc:
            quotas = exc.kwargs['quotas']
            overs = exc.kwargs['overs']
            usages = exc.kwargs['usages']
            headroom = self._get_headroom(quotas, usages, deltas)

            resource = overs[0]
            used = quotas[resource] - headroom[resource]
            total_allowed = used + headroom[resource]
            overs = ','.join(overs)
            LOG.warning(_LW("%(overs)s quota exceeded for %(pid)s,"
                            " tried to resize instance."),
                        {'overs': overs, 'pid': context.project_id})
            raise exception.TooManyInstances(overs=overs,
                                             req=deltas[resource],
                                             used=used,
                                             allowed=total_allowed,
                                             resource=resource)
    else:
        quotas = objects.Quotas(context=context)

    instance.task_state = task_states.RESIZE_PREP
    instance.progress = 0
    instance.update(extra_instance_updates)
    instance.save(expected_task_state=[None])

    filter_properties = {'ignore_hosts': []}

    # Decide whether a resize to the same host is allowed.
    if not CONF.allow_resize_to_same_host:
        filter_properties['ignore_hosts'].append(instance.host)

    # Here when flavor_id is None, the process is considered as migrate.
    if (not flavor_id and not CONF.allow_migrate_to_same_host):
        filter_properties['ignore_hosts'].append(instance.host)

    if self.cell_type == 'api':
        # Commit reservations early and create migration record.
        self._resize_cells_support(context, quotas, instance,
                                   current_instance_type,
                                   new_instance_type)

    if not flavor_id:
        self._record_action_start(context, instance,
                                  instance_actions.MIGRATE)
    else:
        self._record_action_start(context, instance,
                                  instance_actions.RESIZE)

    scheduler_hint = {'filter_properties': filter_properties}
    # Hand off to nova-conductor.
    self.compute_task_api.resize_instance(context, instance,
            extra_instance_updates, scheduler_hint=scheduler_hint,
            flavor=new_instance_type,
            reservations=quotas.reservations or [],
            clean_shutdown=clean_shutdown)
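One step in the code above worth isolating is _upsize_quota_delta: before scheduling, Nova reserves quota for the growth in vCPUs and RAM. A minimal sketch of that arithmetic follows; it is my own simplified helper (working on plain dicts and counting only growth), not Nova's actual implementation, which operates on Flavor objects and a request context.

```python
# Sketch of the quota arithmetic behind _upsize_quota_delta: a resize
# must reserve the extra cores and RAM the new flavor needs beyond the
# old one. Assumption-level simplification, not Nova's real code.

def upsize_quota_delta(new_flavor, old_flavor):
    """Return the resource growth a resize has to reserve quota for."""
    return {
        "cores": max(0, new_flavor["vcpus"] - old_flavor["vcpus"]),
        "ram": max(0, new_flavor["memory_mb"] - old_flavor["memory_mb"]),
    }
```

If the reservation exceeds the tenant's remaining headroom, the real code raises TooManyInstances, as seen in the OverQuota handler above.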


To allow a VM to be resized onto its current host, add the following to /etc/nova/nova.conf on every control node and compute node:

allow_resize_to_same_host=True
scheduler_default_filters=AllHostsFilter

(Changing the default nova scheduler filter to AllHostsFilter makes all compute hosts available and unfiltered. It is not a good idea to keep this setting in a multi-compute environment.)

After editing the config, restart all nova-* services on the control nodes and the nova-compute service on the compute nodes.
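Since nova.conf is INI-formatted, a quick sanity check that the option actually landed can be scripted with Python's stdlib configparser. The helper below is my own convenience, not part of any official tooling:

```python
# Read nova.conf and report whether allow_resize_to_same_host is enabled.
# nova.conf is INI-style, so configparser handles it; this helper is a
# hypothetical convenience for checking the edit took effect.
import configparser

def resize_to_same_host_enabled(conf_path):
    """Return True if [DEFAULT] allow_resize_to_same_host is true."""
    cp = configparser.ConfigParser()
    cp.read(conf_path)
    value = cp.get("DEFAULT", "allow_resize_to_same_host", fallback="False")
    return value.strip().lower() == "true"
```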

     In fact, when an instance is rebooted, its libvirt.xml (considering only the KVM case) is regenerated from the database. So modifying the database is enough. That raises the question: can we make a resize take effect simply by editing the database directly?

The answer is yes.

Start by looking at the tables in the nova database. The intuitive target is the instances table, since it records the basic information of every instance:

UPDATE instances
SET instance_type_id='3', vcpus='4', memory_mb='8192', root_gb='40'
WHERE hostname='cinder-cirros' AND vm_state != 'deleted' AND vm_state != 'error';

     After making this change, refreshing the Dashboard or running nova --debug list shows no effect at all, which means modifying the instances table alone is far from enough. Closer observation reveals that the table that actually takes effect is instance_extra.

The flavor column (JSON data):
{
  "new": null,
  "old": null,
  "cur": {
    "nova_object.version": "1.1",
    "nova_object.changes": [
      "deleted", "ephemeral_gb", "updated_at", "disabled", "extra_specs",
      "rxtx_factor", "is_public", "deleted_at", "id", "root_gb", "name",
      "flavorid", "created_at", "memory_mb", "vcpus", "swap", "vcpu_weight"
    ],
    "nova_object.name": "Flavor",
    "nova_object.data": {
      "disabled": false,
      "root_gb": 1,
      "name": "m1.tiny",
      "flavorid": "1",
      "deleted": false,
      "created_at": null,
      "ephemeral_gb": 0,
      "updated_at": null,
      "memory_mb": 512,
      "vcpus": 1,
      "extra_specs": {},
      "swap": 0,
      "rxtx_factor": 1.0,
      "is_public": true,
      "deleted_at": null,
      "vcpu_weight": 0,
      "id": 2
    },
    "nova_object.namespace": "nova"
  }
}


     In this JSON, modify root_gb, name, flavorid, memory_mb, vcpus, and id (the primary key of the corresponding row in the flavor table) under nova_object.data so that they match the target flavor.
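As a concrete sketch, the edit to the flavor blob can be scripted rather than done by hand. update_flavor_blob below is a hypothetical helper: it rewrites the sizing fields under cur / nova_object.data, and deliberately leaves writing the result back to instance_extra (and keeping the instances table in sync) to the operator.

```python
# Rewrite selected fields in the "cur" flavor of an instance_extra.flavor
# blob. Hedged, standalone sketch; field names follow the JSON shown
# above, and persisting the result to MySQL is out of scope here.
import json

def update_flavor_blob(blob, **fields):
    """Return a new flavor blob with nova_object.data fields replaced."""
    doc = json.loads(blob)
    doc["cur"]["nova_object.data"].update(fields)
    return json.dumps(doc)
```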

     After changing the JSON, also update the corresponding flavor-related fields in the instances table to keep the two in sync. Running nova list or refreshing the Dashboard will then show the new values. However, logging into the VM and running
     cat /proc/meminfo
shows that the VM's actual configuration has not changed yet. At this point run
     nova reboot --hard <instance>
so that libvirt.xml is regenerated from the updated metadata. Done: the change has taken effect.

     What still needs testing is whether this approach damages system-wide bookkeeping, for example whether per-tenant resource usage statistics are updated in step. The method is therefore debatable and should be verified carefully, but it does achieve the goal of changing a VM's configuration.