Learn Some Framework-4 Binder And ServiceManager



Native Service

In the previous chapters we walked through Android's boot process up to HOME. Before moving on to other topics, we need to take a look at Android's IPC mechanism.

Ask any developer about the four major components and they will certainly know Service; ask about Android's home-grown IPC mechanism and the answer comes just as quickly: Binder.

So what exactly is Binder? How does it work? How does it differ from traditional Linux IPC mechanisms?

Before answering these questions, let's introduce a concept: the Native Service. Through Native Services we will see how Binder is wrapped and used at the native layer; when we later come back to Java, you will find that the Java-level Binder is merely a conceptual wrapper around the native Binder.

In an earlier chapter we described how the init process starts Zygote. In fact, while bringing up Zygote, init also brings up the services registered in the rc files:

service vold /system/bin/vold \
        --blkid_context=u:r:blkid:s0 --blkid_untrusted_context=u:r:blkid_untrusted:s0 \
        --fsck_context=u:r:fsck:s0 --fsck_untrusted_context=u:r:fsck_untrusted:s0
    class core
    socket vold stream 0660 root mount
    socket cryptd stream 0660 root mount
    ioprio be 2

service netd /system/bin/netd
    class main
    socket netd stream 0660 root system
    socket dnsproxyd stream 0660 root inet
    socket mdns stream 0660 root system
    socket fwmarkd stream 0660 root inet

service debuggerd /system/bin/debuggerd
    class main
    writepid /dev/cpuset/system-background/tasks

service debuggerd64 /system/bin/debuggerd64
    class main
    writepid /dev/cpuset/system-background/tasks

service ril-daemon /system/bin/rild
    class main
    socket rild stream 660 root radio
    socket sap_uim_socket1 stream 660 bluetooth bluetooth
    socket rild-debug stream 660 radio system
    user root
    group radio cache inet misc audio log

service surfaceflinger /system/bin/surfaceflinger
    class core
    user system
    group graphics drmrpc
    onrestart restart zygote
    writepid /dev/cpuset/system-background/tasks

service drm /system/bin/drmserver
    class main
    user drm
    group drm system inet drmrpc

service media /system/bin/mediaserver
    class main
    user media
    group audio camera inet net_bt net_bt_admin net_bw_acct drmrpc mediadrm

The final form of these services is a Linux process. As we go on you will come to see that, unlike a Java Service, such a service is no longer a "container" concept; it is a real, standalone process.

These processes are the Native Services.

Let's take surfaceflinger as an example to see what a native service really is:

class SurfaceFlinger : public BnSurfaceComposer,
                       private IBinder::DeathRecipient,
                       private HWComposer::EventHandler

Note that it inherits from a class called BnSurfaceComposer. Look at a few more services and you will find that every Native Service inherits from a class following the Bn<INTERFACE> pattern. Let's look at BnSurfaceComposer next:

class BnSurfaceComposer: public BnInterface<ISurfaceComposer>

/*
 * This class defines the Binder IPC interface for accessing various
 * SurfaceFlinger features.
 */
class ISurfaceComposer: public IInterface
As the comment says, ISurfaceComposer, derived from IInterface, defines an IPC interface. In other words, if you think of Binder as the connector of the IPC, then the "proxies" at both ends of the Binder must implement the ISurfaceComposer protocol in order to pass messages to each other.
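To make the Bn<INTERFACE> pattern concrete, here is a minimal sketch of what such an interface declaration typically looks like. IHello, BnHello and sayHello are made-up names used purely for illustration; only the base classes and macros are real framework pieces, and details may vary between Android versions.

// Hypothetical IHello interface, for illustration only (not AOSP code).
// DECLARE_META_INTERFACE and BnInterface come from <binder/IInterface.h>.
class IHello : public android::IInterface
{
public:
    DECLARE_META_INTERFACE(Hello);   // declares descriptor / asInterface() / getInterfaceDescriptor()

    // The "protocol" both ends of the Binder agree on:
    virtual void sayHello(const android::String16& who) = 0;

    enum {
        SAY_HELLO = android::IBinder::FIRST_CALL_TRANSACTION,
    };
};

// The service side derives from Bn<INTERFACE> and overrides onTransact():
class BnHello : public android::BnInterface<IHello>
{
public:
    virtual android::status_t onTransact(uint32_t code, const android::Parcel& data,
                                         android::Parcel* reply, uint32_t flags = 0);
};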

Let's set these concepts aside for now. If they are not yet clear, read through the whole article and come back to them when needed.


ServiceManager

Since there are Native Services, there must be clients for those services to serve. How, then, does a client find the service it needs?

We just said that a Native Service is a process. For any other process, figuring out where the process it needs lives would be a huge amount of work: probing candidates one by one is not only inefficient, but with tens of thousands of services the lookup alone would be hopeless.

It is like making a phone call: if A had to try every possible number to reach B, the convenience of the telephone would be lost. So we have a directory service: A calls the directory, asks for B's number, and then dials B. Android borrows the same idea, and the process playing the role of the directory is called ServiceManager. Let's look at the code:

service servicemanager /system/bin/servicemanager
    class core
    user system
    group system
    critical
    onrestart restart healthd
    onrestart restart zygote
    onrestart restart media
    onrestart restart surfaceflinger
    onrestart restart drm

This is the entry that registers servicemanager in the rc file. It is easy to see that ServiceManager is itself a service, only a rather special one. Special in what way?

The surfaceflinger we just looked at is written in C++ and derives from several parent classes, whereas servicemanager is written in C; it is just a plain service-level process with no object-oriented structure at all.

Moreover, it is the core of what we call Binder. Why? Let's look at its main function:

int main(int argc, char **argv)
{
    struct binder_state *bs;

    bs = binder_open(128*1024);
    if (!bs) {
        ALOGE("failed to open binder driver\n");
        return -1;
    }

    if (binder_become_context_manager(bs)) {
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    selinux_enabled = is_selinux_enabled();
    sehandle = selinux_android_service_context_handle();
    selinux_status_open(true);

    if (selinux_enabled > 0) {
        if (sehandle == NULL) {
            ALOGE("SELinux: Failed to acquire sehandle. Aborting.\n");
            abort();
        }

        if (getcon(&service_manager_context) != 0) {
            ALOGE("SELinux: Failed to acquire service_manager context. Aborting.\n");
            abort();
        }
    }

    union selinux_callback cb;
    cb.func_audit = audit_callback;
    selinux_set_callback(SELINUX_CB_AUDIT, cb);
    cb.func_log = selinux_log_callback;
    selinux_set_callback(SELINUX_CB_LOG, cb);

    binder_loop(bs, svcmgr_handler);

    return 0;
}

First, binder_open is called to obtain a binder_state structure. Then binder_become_context_manager tells the binder driver, through an ioctl on the binder device fd, that this process is the context manager. After a series of SELinux setup calls, servicemanager gets down to business: a loop called binder_loop, in which it keeps reading data from the fd and hands each request to the svcmgr_handler callback.
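For reference, registering as the context manager is tiny; roughly (a sketch of the helper in servicemanager's binder.c, details may vary between versions):

/* Sketch: becoming the context manager is a single ioctl on the /dev/binder fd;
 * from now on handle 0 refers to this process. */
int binder_become_context_manager(struct binder_state *bs)
{
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}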


Now let's look at the implementation of binder_open:

struct binder_state *binder_open(size_t mapsize)
{
    struct binder_state *bs;
    struct binder_version vers;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return NULL;
    }

    bs->fd = open("/dev/binder", O_RDWR);
    if (bs->fd < 0) {
        fprintf(stderr,"binder: cannot open device (%s)\n",
                strerror(errno));
        goto fail_open;
    }

    if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||
        (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
        fprintf(stderr,
                "binder: kernel driver version (%d) differs from user space version (%d)\n",
                vers.protocol_version, BINDER_CURRENT_PROTOCOL_VERSION);
        goto fail_open;
    }

    bs->mapsize = mapsize;
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr,"binder: cannot map device (%s)\n",
                strerror(errno));
        goto fail_map;
    }

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return NULL;
}

This code is easy to follow. First the bs variable is allocated; this is the binder_state structure we mentioned earlier and that will eventually be returned.

Next it opens the /dev/binder device and stores the fd as the binder handle; it is through this handle that the process will shortly register itself as the context manager.

Then bs->mapsize is set to the argument, i.e. 128 KB. The most important step is the mmap that follows: via memory mapping, 128 KB of the driver's buffer is mapped into this process, so that the shared memory can later be accessed through the handle. Finally bs is returned, and the Binder we keep talking about is initialized. Seen this way, Binder is really an evolution of shared memory, a traditional IPC mechanism.
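For reference, the handle that binder_open returns is just a small bookkeeping structure; roughly (a sketch of binder_state as defined in servicemanager's binder.c; field names may differ slightly between versions):

struct binder_state
{
    int fd;          /* the open /dev/binder descriptor        */
    void *mapped;    /* start of the mmap()ed read buffer      */
    size_t mapsize;  /* 128*1024 in servicemanager's case      */
};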


Next, let's see what svcmgr_handler can do:

int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;

    //ALOGI("target=%p code=%d pid=%d uid=%d\n",
    //      (void*) txn->target.ptr, txn->code, txn->sender_pid, txn->sender_euid);

    if (txn->target.ptr != BINDER_SERVICE_MANAGER)
        return -1;

    if (txn->code == PING_TRANSACTION)
        return 0;

    // Equivalent to Parcel::enforceInterface(), reading the RPC
    // header with the strict mode policy mask and the interface name.
    // Note that we ignore the strict_policy and don't propagate it
    // further (since we do no outbound RPCs anyway).
    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);
    if (s == NULL) {
        return -1;
    }

    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        fprintf(stderr,"invalid id %s\n", str8(s, len));
        return -1;
    }

    if (sehandle && selinux_status_updated() > 0) {
        struct selabel_handle *tmp_sehandle = selinux_android_service_context_handle();
        if (tmp_sehandle) {
            selabel_close(sehandle);
            sehandle = tmp_sehandle;
        }
    }

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = do_find_service(bs, s, len, txn->sender_euid, txn->sender_pid);
        if (!handle)
            break;
        bio_put_ref(reply, handle);
        return 0;

    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = bio_get_ref(msg);
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        if (do_add_service(bs, s, len, handle, txn->sender_euid,
            allow_isolated, txn->sender_pid))
            return -1;
        break;

    case SVC_MGR_LIST_SERVICES: {
        uint32_t n = bio_get_uint32(msg);

        if (!svc_can_list(txn->sender_pid)) {
            ALOGE("list_service() uid=%d - PERMISSION DENIED\n",
                    txn->sender_euid);
            return -1;
        }
        si = svclist;
        while ((n-- > 0) && si)
            si = si->next;
        if (si) {
            bio_put_string16(reply, si->name);
            return 0;
        }
        return -1;
    }
    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}

It is easy to see that it mainly handles four kinds of requests:

enum {
    /* Must match definitions in IBinder.h and IServiceManager.h */
    PING_TRANSACTION  = B_PACK_CHARS('_','P','N','G'),
    SVC_MGR_GET_SERVICE = 1,
    SVC_MGR_CHECK_SERVICE,
    SVC_MGR_ADD_SERVICE,
    SVC_MGR_LIST_SERVICES,
};

Let's remember the values of these four codes; we will need them shortly.
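They matter because the framework side numbers its ServiceManager transactions in the same order, so the codes agree on both ends of the Binder. A sketch based on the declaration in IServiceManager.h (the exact layout may differ between versions):

enum {
    GET_SERVICE_TRANSACTION = android::IBinder::FIRST_CALL_TRANSACTION, // 1 == SVC_MGR_GET_SERVICE
    CHECK_SERVICE_TRANSACTION,                                          // 2 == SVC_MGR_CHECK_SERVICE
    ADD_SERVICE_TRANSACTION,                                            // 3 == SVC_MGR_ADD_SERVICE
    LIST_SERVICES_TRANSACTION,                                          // 4 == SVC_MGR_LIST_SERVICES
};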


Finally, let's see what binder_loop does:

void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}

Quite simple: a loop that keeps reading from the memory backing the handle we opened earlier; whenever data arrives, binder_parse decodes it and dispatches it to the registered callback, which in this example is svcmgr_handler.

Now the picture is clear: servicemanager is the core process of Binder. Its implementation amounts to setting up a block of shared memory, registering a handle for it, and then continuously reading requests out of that memory and handling them. That is the essence of Binder: very simple, one block of shared memory, one end reads and the other end writes.
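The write direction goes through the same BINDER_WRITE_READ ioctl. For reference, the binder_write call that sends BC_ENTER_LOOPER in binder_loop above looks roughly like this (a sketch of the helper in servicemanager's binder.c; details may vary between versions):

int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;   /* commands handed to the driver  */
    bwr.read_size = 0;                     /* a pure write: nothing to read  */
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;

    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr, "binder_write: ioctl failed (%s)\n", strerror(errno));
    }
    return res;
}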


Add Service

We said earlier that ServiceManager plays the role of a directory service. But even a directory has to collect the services' information in advance before it can know that a given service exists. How is that done?

Every Native Service has to announce its existence to ServiceManager when it is created. Taking surfaceflinger as an example, let's look at its startup function:

int main(int, char**) {
    // When SF is launched in its own process, limit the number of
    // binder threads to 4.
    ProcessState::self()->setThreadPoolMaxThreadCount(4);

    // start the thread pool
    sp<ProcessState> ps(ProcessState::self());
    ps->startThreadPool();

    // instantiate surfaceflinger
    sp<SurfaceFlinger> flinger = new SurfaceFlinger();

    setpriority(PRIO_PROCESS, 0, PRIORITY_URGENT_DISPLAY);
    set_sched_policy(0, SP_FOREGROUND);

    // initialize before clients can connect
    flinger->init();

    // publish surface flinger
    sp<IServiceManager> sm(defaultServiceManager());
    sm->addService(String16(SurfaceFlinger::getServiceName()), flinger, false);

    // run in this thread
    flinger->run();

    return 0;
}

First a ProcessState instance is obtained and the binder thread pool is limited to 4 threads. Then a SurfaceFlinger object is created; this object is the surfaceflinger service we introduced earlier. Next the code obtains IServiceManager, the proxy for ServiceManager, and starts a Binder transaction: addService.

If you view Binder communication as a client-server style model, then even though surfaceflinger is a Service, it plays the Client role when registering with ServiceManager.

So SurfaceFlinger obtains, via defaultServiceManager(), the proxy for ServiceManager in this process, i.e. the Client end of the Binder, and calls addService on it directly. As a result:

virtual status_t addService(const String16& name, const sp<IBinder>& service,
        bool allowIsolated)
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);
    data.writeInt32(allowIsolated ? 1 : 0);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}

The proxy packs the interface token returned by getInterfaceDescriptor(), the service name, the binder object and the allowIsolated flag into a Parcel, and sends it through remote()->transact() with the ADD_SERVICE_TRANSACTION code.

Meanwhile, ServiceManager at the other end reads this incoming transaction, and so:

case SVC_MGR_ADD_SERVICE:
    s = bio_get_string16(msg, &len);
    if (s == NULL) {
        return -1;
    }
    handle = bio_get_ref(msg);
    allow_isolated = bio_get_uint32(msg) ? 1 : 0;
    if (do_add_service(bs, s, len, handle, txn->sender_euid,
        allow_isolated, txn->sender_pid))
        return -1;
    break;

Inside the svcmgr_handler callback we saw earlier, the code path above is taken, and then:

int do_add_service(struct binder_state *bs,
                   const uint16_t *s, size_t len,
                   uint32_t handle, uid_t uid, int allow_isolated,
                   pid_t spid)
{
    struct svcinfo *si;

    //ALOGI("add_service('%s',%x,%s) uid=%d\n", str8(s, len), handle,
    //        allow_isolated ? "allow_isolated" : "!allow_isolated", uid);

    if (!handle || (len == 0) || (len > 127))
        return -1;

    if (!svc_can_register(s, len, spid)) {
        ALOGE("add_service('%s',%x) uid=%d - PERMISSION DENIED\n",
             str8(s, len), handle, uid);
        return -1;
    }

    si = find_svc(s, len);
    if (si) {
        if (si->handle) {
            ALOGE("add_service('%s',%x) uid=%d - ALREADY REGISTERED, OVERRIDE\n",
                 str8(s, len), handle, uid);
            svcinfo_death(bs, si);
        }
        si->handle = handle;
    } else {
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) {
            ALOGE("add_service('%s',%x) uid=%d - OUT OF MEMORY\n",
                 str8(s, len), handle, uid);
            return -1;
        }
        si->handle = handle;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
        si->name[len] = '\0';
        si->death.func = (void*) svcinfo_death;
        si->death.ptr = si;
        si->allow_isolated = allow_isolated;
        si->next = svclist;
        svclist = si;
    }

    binder_acquire(bs, handle);
    binder_link_to_death(bs, handle, &si->death);
    return 0;
}

If the service already exists, its handle is simply reset; if not, ServiceManager allocates a record for this Service and appends it to svclist. It is easy to see, then, that svclist keeps track of all currently registered Native Services.
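For reference, each node of svclist is roughly the following structure (a sketch of struct svcinfo from service_manager.c; fields may differ slightly between versions):

struct svcinfo
{
    struct svcinfo *next;
    uint32_t handle;            /* the binder handle handed back to clients   */
    struct binder_death death;  /* so the entry can be cleaned up on death    */
    int allow_isolated;
    size_t len;
    uint16_t name[0];           /* UTF-16 service name, e.g. "SurfaceFlinger" */
};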

With that, the service has been added; the directory now holds the Service's handle.


BpRefBase vs BBinder, BnINTERFACE vs BpINTERFACE

In the discussion so far we have met four classes: BpRefBase, BBinder, BnInterface and BpInterface. Reading the code, their inheritance relationships are easy to find:

BnInterface derives from BBinder, while BpInterface derives from BpRefBase. Put simply, BBinder is the core of the Service side: the real Native Service is a BBinder. BpRefBase is the other end of the Binder, the one the Client holds. Open the code of any Native Service and you will see that the Bn class and the Bp class both implement the same interface, but in different ways. So what is the difference?

The Bn side's onTransact handles requests coming in from the Client, while the Bp side is what the Client uses to send requests to the Service. Let's take SurfaceComposer as an example:

Here IInterface is the part that both Bn and Bp must implement, i.e. the "protocol" we talked about earlier. Declared alongside ISurfaceComposer are its two companions, BnSurfaceComposer and BpSurfaceComposer: Bn handles the events coming from the Client, while Bp is the Binder end, reachable through the SurfaceFlinger handle, that is used to send messages to SurfaceComposer. A minimal sketch of such a pair follows.
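Continuing the hypothetical IHello interface sketched earlier (illustration only, not AOSP code; error handling omitted), the two ends typically look roughly like this:

// Client side: each interface method packs a Parcel and hands it to remote()->transact().
class BpHello : public android::BpInterface<IHello>
{
public:
    BpHello(const android::sp<android::IBinder>& impl)
        : android::BpInterface<IHello>(impl) {}

    virtual void sayHello(const android::String16& who)
    {
        android::Parcel data, reply;
        data.writeInterfaceToken(IHello::getInterfaceDescriptor());
        data.writeString16(who);
        remote()->transact(SAY_HELLO, data, &reply);
    }
};

// Service side: onTransact() unpacks the Parcel and calls the real implementation.
android::status_t BnHello::onTransact(uint32_t code, const android::Parcel& data,
                                      android::Parcel* reply, uint32_t flags)
{
    switch (code) {
    case SAY_HELLO: {
        CHECK_INTERFACE(IHello, data, reply);
        sayHello(data.readString16());
        return android::NO_ERROR;
    }
    default:
        return android::BBinder::onTransact(code, data, reply, flags);
    }
}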


How It Works

Now that we know this architecture, let's keep using addService as the example. Earlier we deliberately skipped a step: how does the addService call actually reach ServiceManager?

Back to the code:

    sp<IServiceManager> sm(defaultServiceManager());
    sm->addService(String16(SurfaceFlinger::getServiceName()), flinger, false);
The first line obtains the Bp of IServiceManager. Let's look at the implementation of defaultServiceManager:

sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }

    return gDefaultServiceManager;
}

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);
}

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one.  See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                // Special case for context manager...
                // The context manager is the only object for which we create
                // a BpBinder proxy without already holding a reference.
                // Perform a dummy transaction to ensure the context manager
                // is registered before we create the first local reference
                // to it (which will occur when creating the BpBinder).
                // If a local reference is created for the BpBinder when the
                // context manager is not present, the driver will fail to
                // provide a reference to the context manager, but the
                // driver API does not return status.
                //
                // Note that this is not race-free if the context manager
                // dies while this code runs.
                //
                // TODO: add a driver API to wait for context manager, or
                // stop special casing handle 0 for context manager and add
                // a driver API to get a handle to the context manager with
                // proper reference counting.

                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                   return NULL;
            }

            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}

We can see that getContextObject(NULL) in fact returns a BpBinder(0). Now look at the definition of interface_cast:

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}
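interface_cast works because the IMPLEMENT_META_INTERFACE macro generates an asInterface() that wraps the raw IBinder in a Bp object when the binder is not a local one. Roughly what the expansion looks like for IServiceManager (a sketch; the real macro expansion may differ between versions):

android::sp<IServiceManager> IServiceManager::asInterface(
        const android::sp<android::IBinder>& obj)
{
    android::sp<IServiceManager> intr;
    if (obj != NULL) {
        // If the binder lives in this process, hand back the local object directly...
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(IServiceManager::descriptor).get());
        if (intr == NULL) {
            // ...otherwise wrap the handle in a proxy: here, BpServiceManager(BpBinder(0)).
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}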

So the call becomes BpServiceManager(BpBinder(0)), and that is our sm. Next:

    sm->addService(String16(SurfaceFlinger::getServiceName()), flinger, false);
goes straight into BpServiceManager's addService method:

virtual status_t addService(const String16& name, const sp<IBinder>& service,
        bool allowIsolated)
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);
    data.writeInt32(allowIsolated ? 1 : 0);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}

As we can see, once all the data has been packed, it is handed to remote()->transact() for delivery. So what is remote()?

We said earlier that BpInterface derives from BpRefBase, so the answer is easy to find in BpRefBase's constructor:

BpRefBase::BpRefBase(const sp<IBinder>& o)
    : mRemote(o.get()), mRefs(NULL), mState(0)
{
    extendObjectLifetime(OBJECT_LIFETIME_WEAK);

    if (mRemote) {
        mRemote->incStrong(this);           // Removed on first IncStrong().
        mRefs = mRemote->createWeak(this);  // Held for our entire lifetime.
    }
}

inline  IBinder*        remote()                { return mRemote; }


Compare this with BpServiceManager's constructor:

class BpServiceManager : public BpInterface<IServiceManager>
{
public:
    BpServiceManager(const sp<IBinder>& impl)
        : BpInterface<IServiceManager>(impl)
    {
    }

It is easy to see that remote() simply returns mRemote, and mRemote is the sp<IBinder> passed in at construction time. In our example that is BpBinder(0), so let's look at BpBinder's transact:

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

At this point a new class, IPCThreadState, clocks in.


IPCThreadState

IPCThreadState is the real workhorse of the whole Binder communication. It has two Parcel members, mIn and mOut: mOut holds the commands waiting to be sent, and mIn holds the data received. By continuously sending and receiving through them, it carries the core read/write part of Binder communication:

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

    IF_LOG_TRANSACTIONS() {
        TextOutput::Bundle _b(alog);
        alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
            << handle << " / code " << TypeCode(code) << ": "
            << indent << data << dedent << endl;
    }

    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        #if 0
        if (code == 4) { // relayout
            ALOGI(">>>>>> CALLING transaction 4");
        } else {
            ALOGI(">>>>>> CALLING transaction %d", code);
        }
        #endif
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        #if 0
        if (code == 4) { // relayout
            ALOGI("<<<<<< RETURNING transaction 4");
        } else {
            ALOGI("<<<<<< RETURNING transaction %d", code);
        }
        #endif

        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }

    return err;
}

Quite simple: it calls writeTransactionData to queue the transaction and then enters waitForResponse to wait for the reply. With this, the whole Binder communication path becomes clear.
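For reference, writeTransactionData does little more than fill a binder_transaction_data with the handle, the code and the Parcel buffers and append it to mOut; roughly (a simplified sketch of IPCThreadState::writeTransactionData with error paths trimmed; details are version dependent):

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0;
    tr.target.handle = handle;          // which BpBinder this is aimed at (0 == ServiceManager)
    tr.code = code;                     // e.g. ADD_SERVICE_TRANSACTION
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    // (statusBuffer is only used on the error path, trimmed here.)
    tr.data_size = data.ipcDataSize();  // the flattened Parcel payload
    tr.data.ptr.buffer = data.ipcData();
    tr.offsets_size = data.ipcObjectsCount() * sizeof(binder_size_t);
    tr.data.ptr.offsets = data.ipcObjects();

    mOut.writeInt32(cmd);               // BC_TRANSACTION
    mOut.write(&tr, sizeof(tr));        // picked up by the next BINDER_WRITE_READ ioctl

    return NO_ERROR;
}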


More

Everything above was told with surfaceflinger and servicemanager as the protagonists. As mentioned before, surfaceflinger plays the client role while registering with servicemanager; when some other process needs surfaceflinger's support, it in turn plays the server role. So the interaction between an ordinary service and its clients works just like the interaction between surfaceflinger and servicemanager, and you can verify this by walking through the code yourself, starting from a sketch like the one below.
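Here is a minimal sketch of what a client of surfaceflinger does, mirroring the addService flow in reverse (illustration only; the exact service name string and the include paths are assumptions that may vary between versions):

#include <binder/IServiceManager.h>
#include <gui/ISurfaceComposer.h>
#include <utils/String16.h>

using namespace android;

int main()
{
    // Ask the "directory service": this goes out as a GET/CHECK_SERVICE transaction
    // and arrives as SVC_MGR_CHECK_SERVICE in svcmgr_handler, which returns the handle.
    sp<IServiceManager> sm = defaultServiceManager();
    sp<IBinder> binder = sm->getService(String16("SurfaceFlinger"));
    if (binder == NULL) return -1;

    // interface_cast wraps the handle in BpSurfaceComposer, the client end of the protocol.
    sp<ISurfaceComposer> composer = interface_cast<ISurfaceComposer>(binder);

    // From here on, every call on composer is a transact() against surfaceflinger,
    // which now plays the server role and answers in its onTransact().
    return composer != NULL ? 0 : -1;
}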

