Tracing the cinder-backup startup process


== Based on Kilo ==

This is a rough pass over the startup process; many details are still unclear to me. A lot of code is pasted below, purely as a record.

Startup command

Under devstack, the service is started with:

/usr/local/bin/cinder-backup --config-file /etc/cinder/cinder.conf

The script's contents:

#!/usr/bin/python
# PBR Generated from u'console_scripts'

import sys

from cinder.cmd.backup import main


if __name__ == "__main__":
    sys.exit(main())
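This wrapper is generated by PBR from the console_scripts entry point declared in cinder's setup.cfg. The mapping looks roughly like this (abridged; the exact stanza is an assumption based on the standard PBR layout):

# setup.cfg (cinder, abridged)
[entry_points]
console_scripts =
    cinder-backup = cinder.cmd.backup:main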

It simply runs the main() function in cinder/cmd/backup.py,
which is also short:

"""Starter script for Cinder Volume Backup."""import sysimport warningswarnings.simplefilter('once', DeprecationWarning)import eventletfrom oslo_config import cfgfrom oslo_log import log as loggingeventlet.monkey_patch()from cinder import i18ni18n.enable_lazy()# Need to register global_optsfrom cinder.common import config  # noqafrom cinder import servicefrom cinder import utilsfrom cinder import versionCONF = cfg.CONFdef main():    CONF(sys.argv[1:], project='cinder',         version=version.version_string())    logging.setup(CONF, "cinder")    utils.monkey_patch()    server = service.Service.create(binary='cinder-backup')    service.serve(server)    service.wait()

The key part is these three lines:

1. server = service.Service.create(binary='cinder-backup')
2. service.serve(server)
3. service.wait()

This is the so-called Service-Manager framework. Three good blog posts analyze it: hackerain's blog, bingotree's blog, and sammyliu's blog.

Below I walk through the flow myself, with those write-ups as reference.

  • Line 1 calls Service's create method to build a server. Only binary is passed explicitly; in reality many more parameters are involved, most of them read from the config file via the CONF module or falling back to defaults. I'll come back to this part (the host value is determined here). [To-Do]
  • Line 2 calls service.serve() on the server just created.
    It initializes the RPC machinery and puts the server into an eventlet greenthread. By the end of this step, the cinder-backup exchanges, queues, and consumers have all been created, and can be seen with the rabbitmqctl command.
  • The launched services all run as eventlet greenthreads; line 3 starts the service (i.e., the RabbitMQ consumer) listening for messages.

Let's look at the code piece by piece.

Startup flow

1. Creating the server

from cinder import service

server = service.Service.create(binary='cinder-backup')

The so-called Service is the RabbitMQ consumer; according to its docstring, it enables RPC by "listening to queues based on topic".

Starting from the Service class: it defines methods such as start, create, kill, stop, wait, periodic_tasks, report_state, and basic_config_check.

The create method is decorated with @classmethod, so it can be called without instantiating the class first (which is exactly how it is used above); create then instantiates the Service object internally.
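The factory pattern at work here, in miniature (a simplified sketch, not cinder code):

class Service(object):
    def __init__(self, host, binary):
        self.host = host
        self.binary = binary

    @classmethod
    def create(cls, binary=None):
        # In cinder the defaults come from CONF; a literal stands in here.
        host = 'default-host'
        return cls(host, binary)


server = Service.create(binary='cinder-backup')   # no instance needed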

Now the real create method:

# cinder/service.py
@classmethod
def create(cls, host=None, binary=None, topic=None, manager=None,
           report_interval=None, periodic_interval=None,
           periodic_fuzzy_delay=None, service_name=None):
    """Instantiates class and passes back application object.

    :param host: defaults to CONF.host
    :param binary: defaults to basename of executable
    :param topic: defaults to bin_name - 'cinder-' part
    :param manager: defaults to CONF.<topic>_manager
    :param report_interval: defaults to CONF.report_interval
    :param periodic_interval: defaults to CONF.periodic_interval
    :param periodic_fuzzy_delay: defaults to CONF.periodic_fuzzy_delay
    """
    if not host:
        host = CONF.host    # if unset, CONF.host falls back to the machine's hostname
    if not binary:
        binary = os.path.basename(inspect.stack()[-1][1])
    if not topic:
        topic = binary      # How to understand "topic"? Like a RabbitMQ topic
                            # exchange? Note: a service is essentially a consumer;
                            # at most it creates a queue and names an exchange.
                            # In Kombu the consumer must declare the exchange too,
                            # otherwise it cannot reference it.
    if not manager:
        subtopic = topic.rpartition('cinder-')[2]          # 'backup' here
        manager = CONF.get('%s_manager' % subtopic, None)  # 'cinder.backup.manager.BackupManager' here
    # What the next three parameters do is unclear; to be studied later
    if report_interval is None:
        report_interval = CONF.report_interval
    if periodic_interval is None:
        periodic_interval = CONF.periodic_interval
    if periodic_fuzzy_delay is None:
        periodic_fuzzy_delay = CONF.periodic_fuzzy_delay
    # Call Service.__init__ to build the Service object
    service_obj = cls(host, binary, topic, manager,
                      report_interval=report_interval,
                      periodic_interval=periodic_interval,
                      periodic_fuzzy_delay=periodic_fuzzy_delay,
                      service_name=service_name)

    return service_obj
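The topic-to-manager derivation above is plain string manipulation; a standalone illustration:

topic = 'cinder-backup'
subtopic = topic.rpartition('cinder-')[2]   # -> 'backup'
opt_name = '%s_manager' % subtopic          # -> 'backup_manager', looked up via CONF.get()
print(subtopic, opt_name)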

The parameters of create determine on which host the service starts and which topic it serves; once running, the real work (handling messages) is done by the manager (a Python class). There are also some periodic-task parameters I don't fully understand yet.

Specifically:

  • host - can be set in cinder.conf; if not set, the hostname is used.
    This host is the "Host" column shown by cinder service-list, and it is the host later passed to self.client.prepare in rpcapi.py.
  • binary - specified by the caller, e.g. cinder-backup, cinder-volume, cinder-scheduler
  • topic - can be set in cinder.conf; if not set, it equals binary
  • manager
    The class that does the real work. Here (with nothing set in cinder.conf) the manager classes are as follows; see the config sketch after this list for where these options come from:

    ipdb> manager
    'cinder.backup.manager.BackupManager'
    ipdb> CONF.get("volume_manager")
    'cinder.volume.manager.VolumeManager'
    ipdb> CONF.get("back_manager")
    *** NoSuchOptError: no such option: back_manager
    ipdb> CONF.get("backup_manager")
    'cinder.backup.manager.BackupManager'
    ipdb> CONF.get("api_manager")
    *** NoSuchOptError: no such option: api_manager
    ipdb> CONF.get("scheduler_manager")
    'cinder.scheduler.manager.SchedulerManager'
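Where do options like backup_manager come from? A simplified sketch of the registration, modeled on cinder/common/config.py (the default matches the ipdb output above; the help text is illustrative):

from oslo_config import cfg

CONF = cfg.CONF

global_opts = [
    cfg.StrOpt('backup_manager',
               default='cinder.backup.manager.BackupManager',
               help='Full class name for the Manager for volume backup'),
]

CONF.register_opts(global_opts)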

The hardest part to understand is topic:


The topic here is oslo_messaging's topic, not the AMQP exchange type named "topic".

oslo_messaging wiki:

a topic is an identifier for an RPC interface; servers listen for method invocations on a topic; clients invoke methods on a topic

Going by this description, a topic is just an identifier. On the server side, the topic identifies the queue <-> consumer relationship; on the client side, it identifies the publisher <-> exchange relationship.

The Nova RPC documentation has a classic diagram (along with several use cases worth a careful read):
[Figure: Nova RPC message flow diagram]
The diagram also shows that the topic identifies the entire path a message travels.

To-do: at the code level, how is this topic implemented?

  1. The publisher, e.g. cinder-api, calls a method in cinder/backup/rpcapi.py to construct a msg. The msg is sent to the "openstack" exchange (this step is not certain; it might go to the default exchange — printing the msg would settle it), with routing_key=cinder-backup.maqi-kilo (because the prepare method specified server=host). The method invoked is named "create_backup". In this step, the "topic" is "cinder-backup". (A client-side sketch follows this list.)
  2. The "openstack" exchange is of type topic, so it inspects the routing_key. With no wildcard here, it matches exactly, landing on the queue named cinder-backup.maqi-kilo.
  3. The consumer on the "cinder-backup.maqi-kilo" queue is also named "cinder-backup.maqi-kilo". It exposes several methods (its endpoints), one of which is "create_backup".
  4. The consumer "cinder-backup.maqi-kilo" receives the message and processes it.
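For reference, a hedged sketch of that publisher side, modeled on cinder/backup/rpcapi.py in Kilo (the real method takes more arguments; this is trimmed to show the routing):

import oslo_messaging as messaging
from oslo_config import cfg


class BackupAPI(object):
    def __init__(self, transport):
        target = messaging.Target(topic='cinder-backup', version='1.0')
        self.client = messaging.RPCClient(transport, target)

    def create_backup(self, ctxt, host, backup_id):
        # prepare(server=host) narrows the target, so the rabbit driver
        # publishes with routing key "<topic>.<server>",
        # e.g. "cinder-backup.maqi-kilo".
        cctxt = self.client.prepare(server=host)
        cctxt.cast(ctxt, 'create_backup', backup_id=backup_id)


transport = messaging.get_transport(cfg.CONF)
api = BackupAPI(transport)
# api.create_backup({}, 'maqi-kilo', 'some-backup-uuid')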

At the end of create, cls(...) is called to instantiate and return the Service object. Its __init__ is:

# cinder/service.py
from cinder.objects import base as objects_base
from cinder.openstack.common import loopingcall
from cinder.openstack.common import service
from cinder import rpc
from cinder import version


class Service(service.Service):
    """Service object for binaries running on hosts.

    A service takes a manager and enables rpc by listening to queues based
    on topic. It also periodically runs tasks on the manager and reports
    it state to the database services table.
    """

    def __init__(self, host, binary, topic, manager, report_interval=None,
                 periodic_interval=None, periodic_fuzzy_delay=None,
                 service_name=None, *args, **kwargs):
        super(Service, self).__init__()

        # Initialize rpc: derive TRANSPORT, serializer, and NOTIFIER
        # from the configuration
        if not rpc.initialized():
            rpc.init(CONF)

        self.host = host              # defaults to the hostname
        self.binary = binary          # cinder-backup
        self.topic = topic            # defaults to binary: cinder-backup
        self.manager_class_name = manager
        manager_class = importutils.import_class(self.manager_class_name)  # dynamically import the manager class
        manager_class = profiler.trace_cls("rpc")(manager_class)           # osprofiler-related
        self.manager = manager_class(host=self.host,
                                     service_name=service_name,
                                     *args, **kwargs)
        self.report_interval = report_interval
        self.periodic_interval = periodic_interval
        self.periodic_fuzzy_delay = periodic_fuzzy_delay
        self.basic_config_check()     # perform basic config checks before starting the service
        self.saved_args, self.saved_kwargs = args, kwargs
        self.timers = []

        setup_profiler(binary, host)

Its main work is:

  • Initialize rpc:
    Derive TRANSPORT ('rabbit', 'qpid', 'zmq'), serializer, and NOTIFIER from the configuration. These are all oslo_messaging concepts; the transport can be read as "which message-queue backend to use".

  • Instantiate the manager class

The key lines are:

self.manager_class_name = manager       # 'cinder.backup.manager.BackupManager'
manager_class = importutils.import_class(self.manager_class_name)
manager_class = profiler.trace_cls("rpc")(manager_class)
self.manager = manager_class(host=self.host,
                             service_name=service_name,
                             *args, **kwargs)
# ipdb> manager_class
# <class 'cinder.backup.manager.BackupManager'>
# ipdb> self.host
# 'maqi-kilo'
# ipdb> service_name
# ipdb> args
# self = <cinder.service.Service object at 0x7faed8d38ad0>
# host = maqi-kilo
# binary = cinder-backup
# topic = cinder-backup
# manager = cinder.backup.manager.BackupManager
# report_interval = 10
# periodic_interval = 60
# periodic_fuzzy_delay = 60
# service_name = None
# args = ()
# kwargs = {}
# ipdb> kwargs
# {}
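importutils.import_class simply turns a dotted string into a class object. Roughly equivalent logic (a simplified sketch, not the oslo.utils implementation):

def import_class(import_str):
    """Return the class that the dotted path 'pkg.module.Class' points to."""
    module_name, _, class_name = import_str.rpartition('.')
    module = __import__(module_name, fromlist=[class_name])
    return getattr(module, class_name)


BackupManager = import_class('cinder.backup.manager.BackupManager')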

Time to take stock:
This part creates a Service object (i.e., cinder-backup), and that Service object is the consumer that does the work. OpenStack calls the class that actually does the work the manager, hence the self.manager = manager_class(....) above.

Instantiating manager_class

# cinder/backup/manager.py
# Inheriting from SchedulerDependentManager means the backup service
# reports its capabilities to the scheduler (why?)
class BackupManager(manager.SchedulerDependentManager):
    """Manages backup of block storage devices."""

    RPC_API_VERSION = '1.0'

    target = messaging.Target(version=RPC_API_VERSION)

    def __init__(self, service_name=None, *args, **kwargs):
        # ipdb> type(self)
        # <class 'cinder.backup.manager.BackupManager'>
        # Why does self have so many attributes? Is this CONF at work?
        # ipdb> self.
        # self.RPC_API_VERSION              self.export_record                self.run_periodic_tasks
        # self.add_periodic_task            self.import_record                self.service_config
        # self.create_backup                self.init_host                    self.service_version
        # self.create_instance_backup       self.init_host_with_rpc           self.target
        # self.delete_backup                self.periodic_tasks               self.update_service_capabilities
        # self.driver                       self.reset_status
        # self.driver_name                  self.restore_backup
        # ipdb> self.driver_name            # where is driver_name read from? the config file?
        # 'cinder.backup.drivers.ceph'
        # ipdb> type(self.service_config)
        # <type 'instancemethod'>
        # ipdb> self.target
        # <Target version=1.0>
        # ipdb> type(self.target)
        # <class 'oslo_messaging.target.Target'>
        self.service = importutils.import_module(self.driver_name)
        # ipdb> self.service
        # <module 'cinder.backup.drivers.ceph' from '/home/openstack/workspace/cinder/cinder/backup/drivers/ceph.pyc'>
        self.az = CONF.storage_availability_zone
        self.volume_managers = {}
        self._setup_volume_drivers()
        self.backup_rpcapi = backup_rpcapi.BackupAPI()
        # ipdb> type(self.backup_rpcapi)
        # <class 'cinder.backup.rpcapi.BackupAPI'>
        super(BackupManager, self).__init__(service_name='backup',
                                            *args, **kwargs)

To-do: BackupManager itself does not define that many attributes — where do they come from?

Its main work:

  1. Import the backup driver module and assign it to self.service
  2. Set up the volume drivers (so that backup can read the volume data?)
  3. Obtain the rpcapi (so that after handling a message sent via rpc.call, a response can be sent back?)

Let's look at the last two:

  1. self._setup_volume_drivers()

    # cinder/backup/manager.py
    def _setup_volume_drivers(self):
        if CONF.enabled_backends:
            for backend in CONF.enabled_backends:
                host = "%s@%s" % (CONF.host, backend)                 # 'hostname@backend', the host format cinder-volume uses
                mgr = importutils.import_object(CONF.volume_manager,  # import_object imports a class and returns an instance of it
                                                host=host,
                                                service_name=backend)
                config = mgr.configuration
                backend_name = config.safe_get('volume_backend_name')
                LOG.debug("Registering backend %(backend)s (host=%(host)s "
                          "backend_name=%(backend_name)s).",
                          {'backend': backend, 'host': host,
                           'backend_name': backend_name})
                self.volume_managers[backend] = mgr
        else:
            default = importutils.import_object(CONF.volume_manager)
            LOG.debug("Registering default backend %s.", default)
            self.volume_managers['default'] = default

    It iterates over enabled_backends from cinder.conf; each backend represents a volume storage backend and has a corresponding cinder-volume service. import_object then instantiates the matching volume_manager, and the instances end up in the self.volume_managers dict.

    With these volume_managers at hand, their methods can be called, e.g. create_volume, create_snapshot, copy_volume_to_image. (Is that the purpose?? Yes — at least init_host later calls detach_volume.)

  2. self.backup_rpcapi = backup_rpcapi.BackupAPI()

    This one matters more.

    It initializes an RPC client. (Why would the server side need an RPC client?? -> Because rpc.call needs to send a response back to the publisher??)

    # cinder/backup/rpcapi.py
    import oslo_messaging as messaging

    from cinder import rpc


    class BackupAPI(object):
        """Client side of the volume rpc API.

        API version history:

            1.0 - Initial version.
        """

        BASE_RPC_API_VERSION = '1.0'

        def __init__(self):
            super(BackupAPI, self).__init__()
            target = messaging.Target(topic=CONF.backup_topic,   # topic=cinder-backup here
                                      version=self.BASE_RPC_API_VERSION)
            # ipdb> target
            # <Target topic=cinder-backup, version=1.0>
            # ipdb> type(target)
            # <class 'oslo_messaging.target.Target'>
            self.client = rpc.get_client(target, '1.0')

    rpc.get_client just initializes an RPC client; see the blog analysis referenced earlier for details.

That completes the flow of creating the server object.
At this point the object merely has its attributes and methods, most importantly start and stop. The next two steps put it into eventlet and call start to bring the service up.

Here are the server's type and methods:

# cinder/service.py
ipdb> server
<cinder.service.Service object at 0x7f4b5547fad0>
ipdb> server.
server.basic_config_check    server.manager_class_name    server.reset                 server.timers
server.binary                server.periodic_fuzzy_delay  server.saved_args            server.topic
server.create                server.periodic_interval     server.saved_kwargs          server.wait
server.host                  server.periodic_tasks        server.start
server.kill                  server.report_interval       server.stop
server.manager               server.report_state          server.tg
ipdb> server.host
'maqi-kilo'
ipdb> server.manager
<cinder.backup.manager.BackupManager object at 0x7f4b547fa850>
ipdb> server.tg
<cinder.openstack.common.threadgroup.ThreadGroup object at 0x7f4b55488450>
ipdb> server.topic
'cinder-backup'

2. service.serve(server)

This is the crucial step that starts the RPC consumer.
It creates the exchanges (3), the queues (3), and the consumers (3).

bingotree's blog is a good reference here.

It executes:

# cinder/cmd/backup.py
service.serve(server)

that is:

# cinder/service.py
from cinder.openstack.common import service


def serve(server, workers=None):
    # ipdb> a
    # server = <cinder.service.Service object at 0x7fb08b6b4550>
    # workers = None
    global _launcher
    if _launcher:
        raise RuntimeError(_('serve() can only be called once'))

    _launcher = service.launch(server, workers=workers)

which in turn runs:

# cinder/openstack/common/service.py
def launch(service, workers=1):
    if workers is None or workers == 1:
        launcher = ServiceLauncher()
        launcher.launch_service(service)
    else:
        launcher = ProcessLauncher()
        launcher.launch_service(service, workers=workers)
    return launcher

Since workers is None, we take the first branch, which runs:

# cinder/openstack/common/service.py
class Launcher(object):
    """Launch one or more services and wait for them to complete."""

    def __init__(self):
        """Initialize the service launcher.

        :returns: None
        """
        self.services = Services()
        self.backdoor_port = eventlet_backdoor.initialize_if_enabled()

Services() and eventlet_backdoor are both eventlet-based things.
Services() essentially spins up a group of greenthreads and assigns it to self.services:

# cinder/openstack/common/service.py
class Services(object):

    def __init__(self):
        self.services = []
        self.tg = threadgroup.ThreadGroup()
        self.done = event.Event()

    def add(self, service):                   # used just below
        self.services.append(service)
        self.tg.add_thread(self.run_service, service, self.done)

Straight to launcher.launch_service:

# cinder/openstack/common/service.py
def launch_service(self, service):
    """Load and start the given service.

    :param service: The service you would like to start.
    :returns: None
    """
    service.backdoor_port = self.backdoor_port     # unclear what this is for
    self.services.add(service)

This actually calls the add method above, placing the service into the thread group.
Now self.run_service:

# cinder/openstack/common/service.py
@staticmethod
def run_service(service, done):
    """Service start wrapper.

    :param service: service to run
    :param done: event to wait on until a shutdown is triggered
    :returns: None
    """
    service.start()
    done.wait()
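This start-then-block pattern is plain eventlet; a minimal standalone illustration (nothing cinder-specific is assumed):

import eventlet
from eventlet import event

done = event.Event()


def start_fn():
    print('started')


def run_service(start, done):
    start()           # returns quickly; work continues in the background
    done.wait()       # block this greenthread until someone calls done.send()


gt = eventlet.spawn(run_service, start_fn, done)
eventlet.sleep(0)     # yield so the greenthread gets to run
done.send()           # trigger shutdown
gt.wait()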

At last, the start method — the most important piece.
Back to the Service object:

# cinder/service.py
def start(self):
    # ipdb> type(self)
    # <class 'cinder.service.Service'>
    version_string = version.version_string()
    LOG.info(_LI('Starting %(topic)s node (version %(version_string)s)'),
             {'topic': self.topic, 'version_string': version_string})
    self.model_disconnected = False
    # Mainly cleans up incomplete backup operations
    self.manager.init_host()
    ctxt = context.get_admin_context()
    try:
        # ipdb> self.host
        # 'maqi-kilo'
        # ipdb> self.binary
        # 'cinder-backup'
        # ipdb> self.topic
        # 'cinder-backup'
        # Fetch the record from the DB by host and binary
        service_ref = db.service_get_by_args(ctxt,
                                             self.host,
                                             self.binary)
        self.service_id = service_ref['id']
    except exception.NotFound:
        self._create_service_ref(ctxt)

    LOG.debug("Creating RPC server for service %s", self.topic)

    # For a consumer, what does the target represent?
    # Only topic and server are given, no exchange. Why?
    # Because the topic alone already identifies the messages?
    target = messaging.Target(topic=self.topic, server=self.host)
    # ipdb> type(target)
    # <class 'oslo_messaging.target.Target'>
    # ipdb> target
    # <Target topic=cinder-backup, server=maqi-kilo>
    # ipdb> target.
    # target.accepted_namespaces  target.fanout               target.server               target.version
    # target.exchange             target.namespace            target.topic
    # ipdb> target.exchange
    # ipdb> target.fanout
    # ipdb> target.namespace
    # ipdb> target.accepted_namespaces
    # [None]
    # ipdb> self.manager
    # <cinder.backup.manager.BackupManager object at 0x7f2206ace4d0>
    endpoints = [self.manager]
    # ipdb> self.manager.additional_endpoints
    # []
    endpoints.extend(self.manager.additional_endpoints)
    serializer = objects_base.CinderObjectSerializer()
    self.rpcserver = rpc.get_server(target, endpoints, serializer)
    # ipdb> type(self.rpcserver)
    # <class 'oslo_messaging.server.MessageHandlingServer'>
    # ipdb> self.rpcserver.
    # self.rpcserver.conf        self.rpcserver.executor    self.rpcserver.stop        self.rpcserver.wait
    # self.rpcserver.dispatcher  self.rpcserver.start       self.rpcserver.transport
    self.rpcserver.start()
    # Registers with the MQ broker, the equivalent of consume() in Kombu
    # 2015-11-04 03:17:23.548 10319 DEBUG oslo_messaging._drivers.amqp [req-8cdbb598-8b0b-4709-a519-6020df7e6689 - - - - -] Pool creating new connection create /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:92
    # 2015-11-04 03:17:23.556 10319 INFO oslo_messaging._drivers.impl_rabbit [req-8cdbb598-8b0b-4709-a519-6020df7e6689 - - - - -] Connecting to AMQP server on 10.133.16.195:5672
    # 2015-11-04 03:17:23.570 10319 INFO oslo_messaging._drivers.impl_rabbit [req-8cdbb598-8b0b-4709-a519-6020df7e6689 - - - - -] Connected to AMQP server on 10.133.16.195:5672
    # At this point cinder-backup's exchanges and queues exist, but the consumers do not yet (why 3 queues?):
    # admin@maqi-kilo:~|⇒  sudo rabbitmqctl list_exchanges | grep backup
    # cinder-backup_fanout    fanout
    # admin@maqi-kilo:~|⇒  sudo rabbitmqctl list_queues | grep backup
    # cinder-backup   0
    # cinder-backup.maqi-kilo 0
    # cinder-backup_fanout_37a694ff3d4045e087496756f7aa6ad5   0
    # The consumers only appear after done.wait():
    # admin@maqi-kilo:~|⇒  sudo rabbitmqctl list_consumers | grep openstack
    # admin@maqi-kilo:~|⇒  sudo rabbitmqctl list_consumers | grep backup
    # admin@maqi-kilo:~|⇒
    # After done.wait():
    # admin@maqi-kilo:~|⇒  sudo rabbitmqctl list_consumers | grep backup
    # cinder-backup <'rabbit@maqi-kilo'.3.8036.0>   1   true    []
    # cinder-backup.maqi-kilo   <'rabbit@maqi-kilo'.3.8036.0>   2   true    []
    # cinder-backup_fanout_f9fac489bb344a03a5b20f47bdc4dc47 <'rabbit@maqi-kilo'.3.8036.0>   3   true    []
    self.manager.init_host_with_rpc()

    if self.report_interval:
        # what does this loop over?
        pulse = loopingcall.FixedIntervalLoopingCall(
            self.report_state)
        pulse.start(interval=self.report_interval,
                    initial_delay=self.report_interval)
        self.timers.append(pulse)

    if self.periodic_interval:
        if self.periodic_fuzzy_delay:
            initial_delay = random.randint(0, self.periodic_fuzzy_delay)
        else:
            initial_delay = None
        # what does this loop over?
        periodic = loopingcall.FixedIntervalLoopingCall(
            self.periodic_tasks)
        periodic.start(interval=self.periodic_interval,
                       initial_delay=initial_delay)
        self.timers.append(periodic)

The essentials:

  1. Create the target (specifying topic and server)
  2. Create the endpoints, i.e., the manager object whose methods RPC clients can invoke
  3. Create the rpcserver and start it:
    self.rpcserver = rpc.get_server(target, endpoints, serializer)
    self.rpcserver.start()
  4. loopingcall (purpose unclear)

At this point (after rpcserver.start()), the observation is: cinder-backup's exchanges (3) and queues (3) exist, but the consumers do not yet.

Exchange name          Exchange type
openstack              topic
cinder-backup_fanout   fanout
(empty)                direct

The name of the topic exchange is configurable, in cinder.conf:
control_exchange = your_favourite_name

Queue name
cinder-backup
cinder-backup.maqi-kilo
cinder-backup_fanout_xxxx

bindings:

[Figure: cinder-backup exchange-to-queue bindings]

My guesses:

  • The exchange with the empty name is never used; I can't think of an OpenStack scenario where no exchange would be specified. The empty-named exchange is the default exchange, of type direct.
  • The fanout exchange is also rarely used (?)
  • The topic exchange is the main one

target, endpoint, and topic are all important oslo_messaging concepts:


Target:

  • For a client, it says where the msg should be sent
  • For a server, it says which msgs the server should pick up

    # /usr/local/lib/python2.7/dist-packages/oslo_messaging/target.py
    class Target(object):
        """Identifies the destination of messages.

        A Target encapsulates all the information to identify where a message
        should be sent or what messages a server is listening for.

        Different subsets of the information encapsulated in a Target object is
        relevant to various aspects of the API:

          creating a server:         # topic and server are required to create a consumer
            topic and server is required; exchange is optional
          an endpoint's target:
            namespace and version is optional
          client sending a message:  # a client sending a msg must give the topic; no exchange needed either?
            topic is required, all other attributes optional

        Its attributes are:

        :param exchange: A scope for topics. Leave unspecified to default to the
          control_exchange configuration option.
        :type exchange: str
        :param topic: A name which identifies the set of interfaces exposed by a
          server. Multiple servers may listen on a topic and messages will be
          dispatched to one of the servers in a round-robin fashion.
        :type topic: str
        :param namespace: Identifies a particular interface (i.e. set of methods)
          exposed by a server. The default interface has no namespace identifier
          and is referred to as the null namespace.
        :type namespace: str
        :param version: Interfaces have a major.minor version number associated
          with them. A minor number increment indicates a backwards compatible
          change and an incompatible change is indicated by a major number bump.
          Servers may implement multiple major versions and clients may require
          indicate that their message requires a particular minimum minor version.
        :type version: str
        :param server: Clients can request that a message be directed to a specific
          server, rather than just one of a pool of servers listening on the topic.
        :type server: str
        :param fanout: Clients may request that a message be directed to all
          servers listening on a topic by setting fanout to ``True``, rather than
          just one of them.
        :type fanout: bool
        :param legacy_namespaces: A server always accepts messages specified via
          the 'namespace' parameter, and may also accept messages defined via
          this parameter. This option should be used to switch namespaces safely
          during rolling upgrades.
        :type legacy_namespaces: list of strings
        """

        def __init__(self, exchange=None, topic=None, namespace=None,
                     version=None, server=None, fanout=None,
                     legacy_namespaces=None):
            self.exchange = exchange
            self.topic = topic
            self.namespace = namespace
            self.version = version
            self.server = server
            self.fanout = fanout
            self.accepted_namespaces = [namespace] + (legacy_namespaces or [])

Endpoint:

# /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py
"""An RPC server exposes a number of endpoints, each of which contain a set of
methods which may be invoked remotely by clients over a given transport.

To create an RPC server, you supply a transport, target and a list of
endpoints.
"""
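Putting the two concepts together, a minimal server-side sketch (illustrative only, not cinder code; the names BackupEndpoint and create_backup are mine):

import oslo_messaging as messaging
from oslo_config import cfg


class BackupEndpoint(object):
    # Every public method on an endpoint is remotely invokable.
    def create_backup(self, ctxt, backup_id):
        print('backing up %s' % backup_id)


transport = messaging.get_transport(cfg.CONF)
target = messaging.Target(topic='cinder-backup', server='maqi-kilo')
server = messaging.get_rpc_server(transport, target, [BackupEndpoint()],
                                  executor='eventlet')
server.start()    # declares the exchanges/queues and starts consuming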

With the basic concepts in place, on to rpc.get_server:

# cinder/rpc.py
def get_server(target, endpoints, serializer=None):
    # ipdb> a
    # target = <Target topic=cinder-backup, server=maqi-kilo>
    # endpoints = [<cinder.backup.manager.BackupManager object at 0x7ff98bad34d0>]
    # serializer = <cinder.objects.base.CinderObjectSerializer object at 0x7ff97f096510>
    # ipdb> TRANSPORT
    # <oslo_messaging.transport.Transport object at 0x7ff98cb91b50>
    assert TRANSPORT is not None
    serializer = RequestContextSerializer(serializer)

    return messaging.get_rpc_server(TRANSPORT,
                                    target,
                                    endpoints,
                                    executor='eventlet',
                                    serializer=serializer)

messaging.get_rpc_server is where the objects on the RabbitMQ side actually start to be created. It introduces the notions of dispatcher and executor. The dispatcher is the thing that understands the message format; and the executor?

# /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py
from oslo_messaging.rpc import dispatcher as rpc_dispatcher
from oslo_messaging import server as msg_server


def get_rpc_server(transport, target, endpoints,
                   executor='blocking', serializer=None):
    """Construct an RPC server.

    The executor parameter controls how incoming messages will be received and
    dispatched. By default, the most simple executor is used - the blocking
    executor.

    If the eventlet executor is used, the threading and time library need to be
    monkeypatched.

    :param transport: the messaging transport
    :type transport: Transport
    :param target: the exchange, topic and server to listen on
    :type target: Target
    :param endpoints: a list of endpoint objects
    :type endpoints: list
    :param executor: name of a message executor - for example
                     'eventlet', 'blocking'
    :type executor: str
    :param serializer: an optional entity serializer
    :type serializer: Serializer
    """
    dispatcher = rpc_dispatcher.RPCDispatcher(target, endpoints, serializer)
    return msg_server.MessageHandlingServer(transport, dispatcher, executor)

As the names suggest, the dispatcher is created first, then the MessageHandlingServer.
The dispatcher matches an incoming message against the target, endpoints, and serializer.
The MessageHandlingServer, as its name implies, handles messages. Its string parameter executor selects how the server is run; currently this is always "eventlet".

oslo_messaging wiki:

The idea here is that the server is implemented using two internal concepts - dispatchers and executors. The dispatcher looks at the incoming message payload and invokes the appropriate method. The executor represents the strategy for polling the transport for incoming messages and passing them to the dispatcher. These two abstractions allow us to use the same server implementation with multiple dispatchers (e.g. for rpc and notifications) and multiple executors (e.g. blocking and eventlet).

What’s particularly important here is that we’re not encoding a dependency on eventlet in the transport drivers, leaving us room to switch to something else in the future.

Continuing with the pasted docstrings:

# /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py
class RPCDispatcher(object):
    """A message dispatcher which understands RPC messages.

    A MessageHandlingServer is constructed by passing a callable dispatcher
    which is invoked with context and message dictionaries each time a message
    is received.

    RPCDispatcher is one such dispatcher which understands the format of RPC
    messages. The dispatcher looks at the namespace, version and method values
    in the message and matches those against a list of available endpoints.

    Endpoints may have a target attribute describing the namespace and version
    of the methods exposed by that object. All public methods on an endpoint
    object are remotely invokable by clients.
    """

    def __init__(self, target, endpoints, serializer):
        """Construct a rpc server dispatcher.

        :param target: the exchange, topic and server to listen on
        :type target: Target
        """
        self.endpoints = endpoints
        self.serializer = serializer or msg_serializer.NoOpSerializer()
        self._default_target = msg_target.Target()
        self._target = target

    def _listen(self, transport):
        return transport._listen(self._target)
# /usr/local/lib/python2.7/dist-packages/oslo_messaging/server.py
class MessageHandlingServer(object):
    """Server for handling messages.

    Connect a transport to a dispatcher that knows how to process the
    message using an executor that knows how the app wants to create
    new tasks.
    """

    def __init__(self, transport, dispatcher, executor='blocking'):
        """Construct a message handling server.

        The dispatcher parameter is a callable which is invoked with context
        and message dictionaries each time a message is received.

        The executor parameter controls how incoming messages will be received
        and dispatched. By default, the most simple executor is used - the
        blocking executor.

        :param transport: the messaging transport
        :type transport: Transport
        :param dispatcher: a callable which is invoked for each method
        :type dispatcher: callable
        :param executor: name of message executor - for example
                         'eventlet', 'blocking'
        :type executor: str
        """
        self.conf = transport.conf
        self.transport = transport
        self.dispatcher = dispatcher
        self.executor = executor

        try:
            mgr = driver.DriverManager('oslo.messaging.executors',
                                       self.executor)
        except RuntimeError as ex:
            raise ExecutorLoadFailure(self.executor, ex)
        else:
            self._executor_cls = mgr.driver
            self._executor = None

        super(MessageHandlingServer, self).__init__()

    def start(self):
        """Start handling incoming messages.

        This method causes the server to begin polling the transport for
        incoming messages and passing them to the dispatcher. Message
        processing will continue until the stop() method is called.

        The executor controls how the server integrates with the applications
        I/O handling strategy - it may choose to poll for messages in a new
        process, thread or co-operatively scheduled coroutine or simply by
        registering a callback with an event loop. Similarly, the executor may
        choose to dispatch messages in a new thread, coroutine or simply the
        current thread.
        """
        # ipdb> self.
        # self.conf        self.dispatcher  self.executor    self.start       self.stop        self.transport   self.wait
        # ipdb> self.dispatcher
        # <oslo_messaging.rpc.dispatcher.RPCDispatcher object at 0x7fee71cc3f50>
        # ipdb> self.dispatcher.
        # self.dispatcher.endpoints   self.dispatcher.serializer
        # ipdb> self.dispatcher.endpoints
        # [<cinder.backup.manager.BackupManager object at 0x7fee74fe9590>]
        # ipdb> self.executor
        # 'eventlet'
        # ipdb> self.transport
        # <oslo_messaging.transport.Transport object at 0x7fee760a6c10>
        # ipdb> self.transport.
        # self.transport.cleanup  self.transport.conf
        if self._executor is not None:
            return
        try:
            listener = self.dispatcher._listen(self.transport)
            # ipdb> listener
            # <oslo_messaging._drivers.amqpdriver.AMQPListener object at 0x7fee68901090>
        except driver_base.TransportDriverError as ex:
            raise ServerListenError(self.target, ex)
        self._executor = self._executor_cls(self.conf, listener,
                                            self.dispatcher)
        self._executor.start()

The start method here is the self.rpcserver.start() from earlier. Per its docstring, this is what truly starts the server: polling the transport for incoming messages and passing them to the dispatcher.

The listener is the important piece: it is an oslo_messaging._drivers.amqpdriver.AMQPListener object.
Creating it actually goes through transport._listen. Each transport defines its own _listen method, and that is where the exchanges, queues, consumers, etc. get created.

Once the listener is created, eventlet starts it.

Tracing self.dispatcher._listen all the way down, the final call is:

# /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py
def listen(self, target):
    # ipdb> a
    # self = <oslo_messaging._drivers.impl_rabbit.RabbitDriver object at 0x7f00a26bb150>
    # target = <Target topic=cinder-backup, server=maqi-kilo>
    conn = self._get_connection(rpc_amqp.PURPOSE_LISTEN)
    # ipdb> conn
    # <oslo_messaging._drivers.amqp.ConnectionContext object at 0x7f4494712f90>
    listener = AMQPListener(self, conn)
    # ipdb> listener
    # <oslo_messaging._drivers.amqpdriver.AMQPListener object at 0x7f0094c5b090>

    # The exchange may or may not already exist at this point, because the
    # cinder services all share this topic exchange
    # ipdb> self._get_exchange(target)
    # 'openstack'
    conn.declare_topic_consumer(exchange_name=self._get_exchange(target),
                                topic=target.topic,
                                callback=listener)
    conn.declare_topic_consumer(exchange_name=self._get_exchange(target),
                                topic='%s.%s' % (target.topic,
                                                 target.server),
                                callback=listener)
    conn.declare_fanout_consumer(target.topic, listener)

    return listener

As you can see, it calls conn.declare_topic_consumer twice
and conn.declare_fanout_consumer once — presumably creating the 3 exchanges, 3 queues, and 3 consumers in total.

The exchange_name comes from self._get_exchange(target), defined as:

# /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py
class AMQPDriverBase(base.BaseDriver):

    def __init__(self, conf, url, connection_pool,
                 default_exchange=None, allowed_remote_exmods=None):
        super(AMQPDriverBase, self).__init__(conf, url, default_exchange,
                                             allowed_remote_exmods)

        self._default_exchange = default_exchange
        self._connection_pool = connection_pool

        self._reply_q_lock = threading.Lock()
        self._reply_q = None
        self._reply_q_conn = None
        self._waiter = None

    def _get_exchange(self, target):
        return target.exchange or self._default_exchange

In our environment, the target does not specify an exchange, so the _default_exchange is used, named "openstack". This topic-type "openstack" exchange is shared by all the cinder services.

Searching the code, I found where it is defined;
the name can also be set in cinder.conf via control_exchange:

# /usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py
_transport_opts = [
    cfg.StrOpt('transport_url',
               help='A URL representing the messaging driver to use and its '
                    'full configuration. If not set, we fall back to the '
                    'rpc_backend option and driver specific configuration.'),
    cfg.StrOpt('rpc_backend',
               default='rabbit',
               help='The messaging driver to use, defaults to rabbit. Other '
                    'drivers include qpid and zmq.'),
    cfg.StrOpt('control_exchange',
               default='openstack',          # <======= here
               help='The default exchange under which topics are scoped. May '
                    'be overridden by an exchange name specified in the '
                    'transport_url option.'),
]

declare_topic_consumer and declare_fanout_consumer live here:

# /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/impl_rabbit.py
def declare_topic_consumer(self, exchange_name, topic, callback=None,
                           queue_name=None):
    """Create a 'topic' consumer."""
    self.declare_consumer(functools.partial(TopicConsumer,
                                            name=queue_name,
                                            exchange_name=exchange_name,
                                            ),
                          topic, callback)


def declare_fanout_consumer(self, topic, callback):
    """Create a 'fanout' consumer."""
    self.declare_consumer(FanoutConsumer, topic, callback)

impl_rabbit.py in oslo_messaging implements three consumer classes: DirectConsumer, TopicConsumer, and FanoutConsumer. Each __init__ calls kombu.entity.Exchange() to create the exchange, then calls the base class ConsumerBase's reconnect method to create the queue.

The code:

# /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/impl_rabbit.py
class TopicConsumer(ConsumerBase):
    """Consumer class for 'topic'."""

    def __init__(self, conf, channel, topic, callback, tag, exchange_name,
                 name=None, **kwargs):
        """Init a 'topic' queue.

        :param channel: the amqp channel to use
        :param topic: the topic to listen on
        :paramtype topic: str
        :param callback: the callback to call when messages are received
        :param tag: a unique ID for the consumer on the channel
        :param exchange_name: the exchange name to use
        :param name: optional queue name, defaults to topic
        :paramtype name: str

        Other kombu options may be passed as keyword arguments
        """
        # Default options
        options = {'durable': conf.amqp_durable_queues,
                   'queue_arguments': _get_queue_arguments(conf),
                   'auto_delete': conf.amqp_auto_delete,
                   'exclusive': False}
        options.update(kwargs)
        exchange = kombu.entity.Exchange(name=exchange_name,
                                         type='topic',
                                         durable=options['durable'],
                                         auto_delete=options['auto_delete'])
        super(TopicConsumer, self).__init__(channel,
                                            callback,
                                            tag,
                                            name=name or topic,
                                            exchange=exchange,
                                            routing_key=topic,
                                            **options)


class ConsumerBase(object):
    """Consumer base class."""

    def __init__(self, channel, callback, tag, **kwargs):
        """Declare a queue on an amqp channel.

        'channel' is the amqp channel to use
        'callback' is the callback to call when messages are received
        'tag' is a unique ID for the consumer on the channel

        queue name, exchange name, and other kombu options are
        passed in here as a dictionary.
        """
        self.callback = callback
        self.tag = six.text_type(tag)
        self.kwargs = kwargs
        self.queue = None
        self.reconnect(channel)

    def reconnect(self, channel):
        """Re-declare the queue after a rabbit reconnect."""
        self.channel = channel
        self.kwargs['channel'] = channel
        self.queue = kombu.entity.Queue(**self.kwargs)
        try:
            self.queue.declare()
        except Exception as e:
            # NOTE: This exception may be triggered by a race condition.
            # Simply retrying will solve the error most of the time and
            # should work well enough as a workaround until the race condition
            # itself can be fixed.
            # TODO(jrosenboom): In order to be able to match the Exception
            # more specifically, we have to refactor ConsumerBase to use
            # 'channel_errors' of the kombu connection object that
            # has created the channel.
            # See https://bugs.launchpad.net/neutron/+bug/1318721 for details.
            LOG.error(_("Declaring queue failed with (%s), retrying"), e)
            self.queue.declare()

This part is not complicated; here's a small test:

from kombu.entity import Exchange, Queue
from kombu.connection import Connection

conn = Connection(hostname='localhost', userid='guest', password='guest')
ch = conn.channel()

d = Exchange('my_default_exchange', channel=ch)     # with no type given, a direct exchange is created
d.declare()
q_default = Queue('queue_default', exchange=d, channel=ch, routing_key='routing_default')
q_default.declare()

f = Exchange('my_fanout_exchange', type='fanout', channel=ch)
f.declare()
q_fanout = Queue('queue_fanout', exchange=f, channel=ch,
                 routing_key='routing_fanout')      # a routing_key can be given even for fanout
q_fanout.declare()

f2 = Exchange('my_fanout_exchange_2', type='fanout', channel=ch)
f2.declare()
q_fanout = Queue('queue_fanout_2', exchange=f2, channel=ch)
q_fanout.declare()

The result:

felix@ubuntu14-home:~/work/practise/kombu/my_test|⇒  sudo rabbitmqctl list_exchanges
Listing exchanges ...
    direct                 # the default exchange has an empty name and type=direct
amq.direct  direct
amq.fanout  fanout
amq.headers headers
amq.match   headers
amq.rabbitmq.trace  topic
amq.topic   topic
my_default_exchange direct
my_fanout_exchange  fanout
my_fanout_exchange_2    fanout
...done.
felix@ubuntu14-home:~/work/practise/kombu/my_test|⇒  sudo rabbitmqctl list_queues
Listing queues ...
queue_default   0
queue_fanout    0
queue_fanout_2  0
...done.
felix@ubuntu14-home:~/work/practise/kombu/my_test|⇒  sudo rabbitmqctl list_bindings source_name destination_name destination_kind routing_key
Listing bindings ...
    queue_default   queue   queue_default
    queue_fanout    queue   queue_fanout
    queue_fanout_2  queue   queue_fanout_2
my_default_exchange queue_default   queue   routing_default
my_fanout_exchange  queue_fanout    queue   routing_fanout
my_fanout_exchange_2    queue_fanout_2  queue
...done.

The default exchange (type direct) automatically gets a binding to every queue, with the queue's name as the routing key. That is why, in the cinder bindings figure above, the empty-named exchange is bound to every queue.
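The practical consequence: publishing to the default exchange with the queue name as routing key delivers straight to that queue. A small follow-up to the kombu test above (reusing its connection parameters and queue names):

from kombu.connection import Connection

conn = Connection(hostname='localhost', userid='guest', password='guest')
producer = conn.Producer()
producer.publish({'hello': 'world'},
                 exchange='',                  # the default exchange
                 routing_key='queue_default')  # routed by queue name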

One thing that puzzled me earlier: why does a fanout exchange have a routing_key? Just like in my example, the cinder-backup_fanout_xxxx queue started by cinder-backup also carries a routing_key.
RabbitMQ Tutorial 4 explains: ===> a fanout exchange simply ignores the routing_key.

Bindings can take an extra routing_key parameter. To avoid the confusion with a basic_publish parameter we’re going to call it a binding key. This is how we could create a binding with a key:

channel.queue_bind(exchange=exchange_name,
                   queue=queue_name,
                   routing_key='black')

The meaning of a binding key depends on the exchange type. The fanout exchanges, which we used previously, simply ignored its value.

3. service.wait()

It executes:

# cinder/cmd/backup.py
service.wait()