Android Binder Mechanism Study Notes (Part 3): ServiceManager

        Continuing from the previous article, let's analyze the implementation of ServiceManager.

        The implementation of ServiceManager lives at:

        Android 4.2: /frameworks/base/cmds/servicemanager/

        Android 4.3: frameworks/native/cmds/servicemanager/

ServiceManager Startup

        ServiceManager is started by the init process according to the configuration in init.rc; chronologically, it starts before the Zygote process:
service servicemanager /system/bin/servicemanager
    class core            # core-class service
    user system           # run as user "system"
    group system          # group "system"
    critical              # critical service: if it crashes more than 4 times within 4 minutes, reboot the device into recovery
    onrestart restart zygote          # when servicemanager restarts, restart zygote as well
    onrestart restart media           # likewise
    onrestart restart surfaceflinger  # likewise
    onrestart restart drm             # likewise
        ServiceManager is an executable, so we start from its main function (frameworks/base/cmds/servicemanager/servicemanager.c):
int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER;

    bs = binder_open(128*1024);

    if (binder_become_context_manager(bs)) {
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = svcmgr;
    binder_loop(bs, svcmgr_handler); // svcmgr_handler contains the actual request-handling logic
    return 0;
}
        In short, ServiceManager starts up in three steps:
  1. Open /dev/binder and create the binder buffer
  2. Register the current process as the context manager (the ServiceManager)
  3. Enter the processing loop and wait for requests from Services/Clients

Step 1

        Step 1 is implemented by binder_open (frameworks/base/cmds/servicemanager/binder.c):
struct binder_state *binder_open(unsigned mapsize)
{
    struct binder_state *bs;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return 0;
    }

    bs->fd = open("/dev/binder", O_RDWR); // as covered last time, this traps into the kernel's binder_open, which creates the binder_proc
    if (bs->fd < 0) {
        fprintf(stderr,"binder: cannot open device (%s)\n",
                strerror(errno));
        goto fail_open;
    }

    bs->mapsize = mapsize; // mapsize = 128KB
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
        // as covered last time, this traps into the kernel's binder_mmap, which creates a kernel
        // buffer of the same size, allocates the first physical page, and records the offset
        // between the kernel buffer address and the user buffer address
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr,"binder: cannot map device (%s)\n",
                strerror(errno));
        goto fail_map;
    }

        /* TODO: check version */

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return 0;
}
        If you have a good grasp of the binder driver material from the previous article, this code should be easy to follow. While we're here, let's also look at binder_state:
struct binder_state
{
    int fd;
    void *mapped;
    unsigned mapsize;
};

Step 2

        Step 2 is implemented by binder_become_context_manager:
int binder_become_context_manager(struct binder_state *bs)
{
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
        A remarkably simple implementation, isn't it? Recall from the previous article that the ioctl call ends up in the binder driver's binder_ioctl function, which handles BINDER_SET_CONTEXT_MGR:
case BINDER_SET_CONTEXT_MGR:
    if (binder_context_mgr_node != NULL) {
        printk(KERN_ERR "binder: BINDER_SET_CONTEXT_MGR already set\n");
        ret = -EBUSY;
        goto err;
    }
    ret = security_binder_set_context_mgr(proc->tsk);
    if (ret < 0)
        goto err;
    if (binder_context_mgr_uid != -1) {
        if (binder_context_mgr_uid != current->cred->euid) {
            printk(KERN_ERR "binder: BINDER_SET_"
                   "CONTEXT_MGR bad uid %d != %d\n",
                   current->cred->euid,
                   binder_context_mgr_uid);
            ret = -EPERM;
            goto err;
        }
    } else
        binder_context_mgr_uid = current->cred->euid;
    binder_context_mgr_node = binder_new_node(proc, NULL, NULL); // binder_context_mgr_node->proc = servicemanager's binder_proc
    if (binder_context_mgr_node == NULL) {
        ret = -ENOMEM;
        goto err;
    }
    binder_context_mgr_node->local_weak_refs++;
    binder_context_mgr_node->local_strong_refs++;
    binder_context_mgr_node->has_strong_ref = 1;
    binder_context_mgr_node->has_weak_ref = 1;
    break;
        Ignoring the security checks, the code above simply sets the global variable binder_context_mgr_node and bumps its reference counts.

Step 3

        The processing loop is implemented in binder_loop:
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    unsigned readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(unsigned)); // the binder driver handles the BC_ENTER_LOOPER command in binder_thread_write

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); // read requests from clients/services

        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func); // handle the request
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}

ServiceManager's Client-Side Proxy

        ServiceManager runs in its own process. To serve Client/Service processes, it provides a client-side proxy that Clients and Services can call conveniently.

IServiceManager and BpServiceManager

        IServiceManager is ServiceManager's native-layer interface (frameworks/native/include/binder/IServiceManager.h):
class IServiceManager : public IInterface
{
public:
    DECLARE_META_INTERFACE(ServiceManager);

    /**
     * Retrieve an existing service, blocking for a few seconds
     * if it doesn't yet exist.
     */
    virtual sp<IBinder>         getService( const String16& name) const = 0;

    /**
     * Retrieve an existing service, non-blocking.
     */
    virtual sp<IBinder>         checkService( const String16& name) const = 0;

    /**
     * Register a service.
     */
    virtual status_t            addService( const String16& name,
                                            const sp<IBinder>& service,
                                            bool allowIsolated = false) = 0;

    /**
     * Return list of all existing services.
     */
    virtual Vector<String16>    listServices() = 0;

    enum {
        GET_SERVICE_TRANSACTION = IBinder::FIRST_CALL_TRANSACTION,
        CHECK_SERVICE_TRANSACTION,
        ADD_SERVICE_TRANSACTION,
        LIST_SERVICES_TRANSACTION,
    };
};
        From the interface we can see that ServiceManager offers four functions (a usage sketch follows the list):
  • getService: like checkService, but blocking for a few seconds if the service does not yet exist
  • checkService: lets a Client fetch a Service's binder
  • addService: lets a Service register its binder
  • listServices: enumerates all registered binders
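        Before looking at the implementation, here is a quick sketch of how a native client typically drives this interface (assumes an AOSP build environment linking against libbinder; "media.player" is just an example name):

#include <binder/IServiceManager.h>
#include <utils/String16.h>

using namespace android;

void demo()
{
    sp<IServiceManager> sm = defaultServiceManager();

    // Blocking lookup: retries internally for a few seconds.
    sp<IBinder> player = sm->getService(String16("media.player"));

    // Non-blocking lookup: NULL immediately if not registered.
    sp<IBinder> maybe = sm->checkService(String16("media.player"));

    // Enumerate everything currently registered.
    Vector<String16> names = sm->listServices();
}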
        BpServiceManager is a subclass of IServiceManager and provides its implementation (frameworks/native/libs/binder/IServiceManager.cpp):
class BpServiceManager : public BpInterface<IServiceManager>
{
public:
    BpServiceManager(const sp<IBinder>& impl)
        : BpInterface<IServiceManager>(impl)
    {
    }

    virtual sp<IBinder> getService(const String16& name) const
    {
        ...... // we will look at the implementations later
    }

    virtual sp<IBinder> checkService( const String16& name) const
    {
        ......
    }

    virtual status_t addService(const String16& name, const sp<IBinder>& service,
            bool allowIsolated)
    {
        ......
    }

    virtual Vector<String16> listServices()
    {
        ......
    }
};
        The prefix Bp can be read as Binder Proxy: BpServiceManager is ServiceManager's proxy in the client process. It does not implement the real functionality itself; instead it sends requests over Binder to the ServiceManager process we started earlier. As discussed in the previous article, Binder communication requires the client process to hold a BpBinder. So where does that BpBinder come from?

defaultServiceManager

        As a special "Service", ServiceManager gets a "shortcut" from the Android framework: defaultServiceManager (frameworks/native/libs/binder/IServiceManager.cpp):
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager; // singleton pattern

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        }
    }

    return gDefaultServiceManager;
}
        defaultServiceManager can be broken down into three steps:
  1. ProcessState::self()
  2. ProcessState->getContextObject(NULL)
  3. interface_cast<IServiceManager>()
         1.1 Start with ProcessState::self():
sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex); // a singleton again: this is per-process state, so there is exactly one instance per process
    if (gProcess != NULL) {
        return gProcess;
    }
    gProcess = new ProcessState;
    return gProcess;
}
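        As an aside, the same lazy, mutex-guarded singleton idiom in standalone form (a sketch only: Registry is a made-up name, and std::mutex stands in for Android's Mutex):

#include <mutex>

class Registry {
public:
    static Registry* self() {
        std::lock_guard<std::mutex> lock(gMutex); // serialize the first use
        if (gInstance == nullptr)
            gInstance = new Registry();           // created once, on demand
        return gInstance;                         // every later call reuses it
    }
private:
    Registry() = default;
    static std::mutex gMutex;
    static Registry*  gInstance;
};

std::mutex Registry::gMutex;
Registry*  Registry::gInstance = nullptr;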
        1.2 The ProcessState constructor:
ProcessState::ProcessState()
    : mDriverFD(open_driver())
    , mVMStart(MAP_FAILED)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // XXX Ideally, there should be a specific define for whether we
        // have mmap (or whether we could possibly have the kernel module
        // availabla).
#if !defined(HAVE_WIN32_IPC)
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            ALOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
            close(mDriverFD);
            mDriverFD = -1;
        }
#else
        mDriverFD = -1;
#endif
    }

    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened.  Terminating.");
}
        Looks familiar, doesn't it? There's mmap again. But where does mDriverFD come from? It's easy to miss: open_driver() in the initializer list. Let's take a look.
        1.3 open_driver()
static int open_driver()
{
    int fd = open("/dev/binder", O_RDWR);
    if (fd >= 0) {
        fcntl(fd, F_SETFD, FD_CLOEXEC);
        int vers;
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
        if (result == -1) {
            ALOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
            close(fd);
            fd = -1;
        }
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
            ALOGE("Binder driver protocol does not match user space protocol!");
            close(fd);
            fd = -1;
        }
        size_t maxThreads = 15;
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
        if (result == -1) {
            ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
    } else {
        ALOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
    }
    return fd;
}
        Now it's even more familiar:
    int fd = open("/dev/binder", O_RDWR);
        So ProcessState::self() does two things:
  • mDriverFD = open("/dev/binder")
  • mmap(mDriverFD)
        With that, everything is ready for binder communication. Next, let's look at ProcessState->getContextObject(NULL):
        2.1 getContextObject
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& caller)
{
    return getStrongProxyForHandle(0);
}
        2.2 getStrongProxyForHandle
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle) // handle = 0
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one.  See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}
        2.3 lookupHandleLocked
ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle) // handle = 0
{
    const size_t N = mHandleToObject.size();
    if (N <= (size_t)handle) {
        handle_entry e;
        e.binder = NULL;
        e.refs = NULL;
        status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
        if (err < NO_ERROR) return NULL;
    }
    return &mHandleToObject.editItemAt(handle);
}
        So what is mHandleToObject (frameworks/native/include/binder/ProcessState.h)?
Vector<handle_entry> mHandleToObject;
        Which means we also need to see what handle_entry is:
struct handle_entry {
    IBinder* binder;
    RefBase::weakref_type* refs;
};
        So mHandleToObject is a cache: if a handle_entry already exists for the handle argument, it is returned; otherwise an empty handle_entry is first inserted into mHandleToObject and then returned, as the standalone sketch below illustrates.
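        A self-contained sketch of that grow-on-demand cache (hypothetical names; std::vector stands in for android::Vector):

#include <cstdio>
#include <vector>

struct Entry {
    void* binder = nullptr; // would be IBinder* in ProcessState
    void* refs   = nullptr;
};

static std::vector<Entry> gHandleToObject;

Entry* lookupHandle(size_t handle)
{
    // Grow the table with empty entries until `handle` is a valid index.
    if (gHandleToObject.size() <= handle)
        gHandleToObject.resize(handle + 1);
    return &gHandleToObject[handle]; // may still be an empty entry
}

int main()
{
    Entry* e = lookupHandle(0);
    std::printf("cached binder? %s\n", e->binder ? "yes" : "no"); // prints "no"
    return 0;
}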
        Back to 2.2:
        IBinder* b = e->binder; // so b may be NULL here
        if (b == NULL || !e->refs->attemptIncWeak(this)) { // b may also have been released already
            // construct a new BpBinder
            b = new BpBinder(handle); // note: handle = 0
            e->binder = b;            // fill in e
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
        2.4 The BpBinder constructor
BpBinder::BpBinder(int32_t handle)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(NULL)
{
    ALOGV("Creating BpBinder %p handle %d\n", this, mHandle);

    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    IPCThreadState::self()->incWeakHandle(handle);
}
        We have mentioned BpBinder many times, but this is the first time we see what it really is. BpBinder is actually quite simple; the member to remember is mHandle, which corresponds to binder_ref.desc on the kernel side. mHandle = 0, however, is a special case: such a BpBinder can communicate over binder even without a corresponding binder_ref in the kernel, because the binder driver maps every mHandle = 0 transaction directly to the global binder_context_mgr_node, skipping the binder_ref "relay".
        The net result: getContextObject returns a BpBinder instance with mHandle = 0.
        Now let's look at the interface_cast<IServiceManager> call. From the analysis above, it can be read as:
interface_cast<IServiceManager>(new BpBinder(0))
        3.1 The interface_cast function template (frameworks/native/include/binder/IInterface.h):
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}
        Here INTERFACE is IServiceManager.
        3.2 The implementation of IServiceManager::asInterface() comes from the IMPLEMENT_META_INTERFACE macro:
IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager");
        IMPLEMENT_META_INTERFACE is declared as follows (frameworks/native/include/binder/IInterface.h):
#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME)                       \
    const android::String16 I##INTERFACE::descriptor(NAME);             \
    const android::String16&                                            \
            I##INTERFACE::getInterfaceDescriptor() const {              \
        return I##INTERFACE::descriptor;                                \
    }                                                                   \
    android::sp<I##INTERFACE> I##INTERFACE::asInterface(                \
            const android::sp<android::IBinder>& obj)                   \
    {                                                                   \
        android::sp<I##INTERFACE> intr;                                 \
        if (obj != NULL) {                                              \
            intr = static_cast<I##INTERFACE*>(                          \
                obj->queryLocalInterface(                               \
                        I##INTERFACE::descriptor).get());               \
            if (intr == NULL) {                                         \
                intr = new Bp##INTERFACE(obj);                          \
            }                                                           \
        }                                                               \
        return intr;                                                    \
    }                                                                   \
    I##INTERFACE::I##INTERFACE() { }                                    \
    I##INTERFACE::~I##INTERFACE() { }                                   \
        Substituting ServiceManager and "android.os.IServiceManager", the expansion looks like this:
const android::String16 IServiceManager::descriptor("android.os.IServiceManager");
const android::String16&
        IServiceManager::getInterfaceDescriptor() const {
    return IServiceManager::descriptor;
}
android::sp<IServiceManager> IServiceManager::asInterface(
        const android::sp<android::IBinder>& obj) // obj = new BpBinder(0)
{
    android::sp<IServiceManager> intr;
    if (obj != NULL) {
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(
                    IServiceManager::descriptor).get());
        if (intr == NULL) {
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}
IServiceManager::IServiceManager() { }
IServiceManager::~IServiceManager() { }
        3.3 BpBinder.queryLocalInterface()
        BpBinder does not override queryLocalInterface, so the implementation that actually runs is the one provided by its base class IBinder:
sp<IInterface> IBinder::queryLocalInterface(const String16& descriptor)
{
    return NULL;
}
        Since BpBinder is a binder proxy, it naturally has no local object.
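        For contrast, the local (Bn) side does override queryLocalInterface to return itself, which is exactly what lets asInterface short-circuit for in-process binders. A much-simplified, self-contained sketch of the two behaviors (types collapsed, raw pointers instead of sp<>):

#include <cstdio>

struct IInterface { virtual ~IInterface() {} };

struct IBinder {
    // Default: a binder has no local interface; the proxy inherits this.
    virtual IInterface* queryLocalInterface() { return nullptr; }
    virtual ~IBinder() {}
};

struct IFoo : IInterface {};

// Local (Bn) side: the binder object IS the interface, so it returns itself.
struct BnFoo : IBinder, IFoo {
    IInterface* queryLocalInterface() override { return this; }
};

struct FakeBpBinder : IBinder {}; // proxy side keeps the NULL default

int main()
{
    FakeBpBinder proxy;
    BnFoo local;
    std::printf("proxy: %p\n", (void*)proxy.queryLocalInterface()); // (nil)
    std::printf("local: %p\n", (void*)local.queryLocalInterface()); // non-null
    return 0;
}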
        So, returning to 3.2, we know that
obj->queryLocalInterface(IServiceManager::descriptor).get()
        returns NULL, which means intr == NULL, and finally:
                intr = new BpServiceManager(obj);         
        and the BpServiceManager instance is returned.
        To sum up how a Client or Service obtains the ServiceManager proxy:
  1. open "/dev/binder" and mmap it
  2. new BpBinder(0)
  3. new BpServiceManager(bpBinder)

ServiceManager's Workflow

        ServiceManager is actually a rather "simple-minded" fellow: as the IServiceManager interface shows, it provides only four functions: getService, checkService, addService, and listServices.

getService

        Start with BpServiceManager:
virtual sp<IBinder> getService(const String16& name) const
{
    unsigned n;
    for (n = 0; n < 5; n++) { // getService succeeds if any one of up to 5 checkService attempts succeeds
        sp<IBinder> svc = checkService(name);
        if (svc != NULL) return svc;
        ALOGI("Waiting for service %s...\n", String8(name).string());
        sleep(1);
    }
    return NULL;
}
        So getService is just a wrapper around checkService, giving it up to 5 attempts to improve the success rate. In my understanding, Binder communication itself is reliable enough; the retries mainly reduce the chance that checkService fails because the Service has not yet started and registered itself with ServiceManager. Given when Android's core services are started, the first checkService attempt should normally succeed. The retry pattern itself is generic, as the sketch below shows.
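        A standalone sketch of the same shape, polling a non-blocking lookup a bounded number of times before giving up (retryFor is a hypothetical helper; unistd.h's sleep stands in for any backoff):

#include <unistd.h>

template <typename T, typename TryOnce>
T retryFor(int attempts, TryOnce tryOnce)
{
    for (int n = 0; n < attempts; n++) {
        T v = tryOnce();
        if (v) return v; // success on any attempt wins
        sleep(1);        // give the service time to register itself
    }
    return T{};          // all attempts failed
}

// e.g.: sp<IBinder> b = retryFor<sp<IBinder>>(5, [&] { return sm->checkService(name); });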

checkService

        checkService returns the binder (a BpBinder) for the specified service name, or NULL if the service is not yet registered.
        1. The implementation of BpServiceManager.checkService:
virtual sp<IBinder> checkService( const String16& name) const
{
    Parcel data, reply; // for space reasons we won't analyze Parcel; in short, it is just a data container
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name); // name is the requested service name, e.g. "media.player" for MediaPlayer
    remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
    return reply.readStrongBinder();
}
        The remote() function here is inherited from BpRefBase:
inline  IBinder*        remote()                { return mRemote; }
        And BpServiceManager's mRemote is the BpBinder.
       2. BpBinder->transact
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags); // mHandle = 0; code = CHECK_SERVICE_TRANSACTION
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}
        As you can see, BpBinder->transact calls the IPCThreadState function of the same name. Let's look at IPCThreadState::self() first.
        3. IPCThreadState::self()
IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) {
restart:
        const pthread_key_t k = gTLS;
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        return new IPCThreadState;
    }

    if (gShutdown) return NULL;

    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS) {
        if (pthread_key_create(&gTLS, threadDestructor) != 0) {
            pthread_mutex_unlock(&gTLSMutex);
            return NULL;
        }
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}
        This function relies on TLS (Thread Local Storage). Simply put, with TLS the same variable holds independent values in different threads. In the code above, gTLS is a single key, yet the function

pthread_getspecific(k); // k = gTLS

        returns a different value depending on the calling thread; pthread_setspecific works the same way. Since IPCThreadState must be unique within each thread, TLS fits the requirement perfectly. Similarly, Java provides TLS through the ThreadLocal class; the difference is that C/C++ TLS is built on the TCB (Thread Control Block), whereas Java's is built on the Thread class's localValues and inheritableValues members. A standalone sketch of the idiom follows.
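        A self-contained sketch of the pthread TLS idiom (simplified: pthread_once replaces the gHaveTLS/gTLSMutex dance, and ThreadState is a stand-in for IPCThreadState; build with -lpthread):

#include <cstdio>
#include <pthread.h>

struct ThreadState { int dummy; };

static pthread_key_t  gTLS;
static pthread_once_t gOnce = PTHREAD_ONCE_INIT;

static void destroyState(void* p) { delete static_cast<ThreadState*>(p); }
static void makeKey() { pthread_key_create(&gTLS, destroyState); }

ThreadState* self()
{
    pthread_once(&gOnce, makeKey);        // create the key exactly once
    void* st = pthread_getspecific(gTLS); // per-thread lookup
    if (st) return static_cast<ThreadState*>(st);
    ThreadState* fresh = new ThreadState{0};
    pthread_setspecific(gTLS, fresh);     // visible to this thread only
    return fresh;
}

static void* worker(void*)
{
    std::printf("worker state: %p\n", (void*)self());
    return nullptr;
}

int main()
{
    pthread_t t1, t2;
    pthread_create(&t1, nullptr, worker, nullptr);
    pthread_create(&t2, nullptr, worker, nullptr);
    pthread_join(t1, nullptr);
    pthread_join(t2, nullptr);
    std::printf("main state:   %p\n", (void*)self()); // differs from both workers
    return 0;
}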
       4. The IPCThreadState constructor
IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()), // note how mProcess is initialized
      mMyThreadId(androidGetTid()),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0)
{
    pthread_setspecific(gTLS, this); // store this instance in TLS
    clearCaller();
    mIn.setDataCapacity(256);
    mOut.setDataCapacity(256);
}
     Back in step 2, the call can now be read as:
 status_t status = IPCThreadState->transact(mHandle, code, data, reply, flags);//mHandle = 0; code = CHECK_SERVICE_TRANSACTION
     5. IPCThreadState->transact
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS; // allow the reply to carry file-descriptor binders
    ......

    if (err == NO_ERROR) {
        ......
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }

    ......

    if ((flags & TF_ONE_WAY) == 0) {
        ......
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        ......
    } else {
        err = waitForResponse(NULL, NULL); // the second NULL means there is nothing to read:
                                           // as noted last time, the binder driver does not allow
                                           // replies to asynchronous (one-way) binder transactions
    }

    return err;
}
        As the names suggest, writeTransactionData writes data toward the binder driver, while waitForResponse waits for and reads the driver's reply.
       6. IPCThreadState->writeTransactionData
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr; // build the binder_transaction_data struct: we are preparing to talk to the binder driver

    tr.target.handle = handle; // handle = 0
    tr.code = code;            // code = CHECK_SERVICE_TRANSACTION
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;  // don't worry about sender_pid and sender_euid;
    tr.sender_euid = 0; // the binder driver will fill them in correctly later

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) { // err = NO_ERROR
        tr.data_size = data.ipcDataSize();   // length of the data
        tr.data.ptr.buffer = data.ipcData(); // start address of the data
        tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t); // length of the offsets array, in bytes
        tr.data.ptr.offsets = data.ipcObjects(); // start address of the offsets array
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = statusBuffer;
        tr.offsets_size = 0;
        tr.data.ptr.offsets = NULL;
    } else {
        return (mLastError = err);
    }

    mOut.writeInt32(cmd);        // cmd = BC_TRANSACTION
    mOut.write(&tr, sizeof(tr)); // mOut is an IPCThreadState member of type Parcel

    return NO_ERROR;
}
        Here a binder_transaction_data struct is built and filled in, which is the necessary preparation for binder communication. But wait: this binder_transaction_data is never sent to the binder driver; it is merely stored in mOut. What is going on?
        Either way, back to step 5: after writeTransactionData returns, waitForResponse is called.
        7. IPCThreadState->waitForResponse
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break; // this is where the real work happens
        ......
        if (mIn.dataAvail() == 0) continue;

        cmd = mIn.readInt32();

        ......
        switch (cmd) {
        ......
        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t),
                            freeBuffer, this);
                    } else {
                        err = *static_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(size_t), this);
                    continue;
                }
            }
            goto finish;
        ......
        }
    }

finish:
    ......
    return err;
}
        waitForResponse immediately calls talkWithDriver, so let's look at that first.
       8. IPCThreadState->talkWithDriver
status_t IPCThreadState::talkWithDriver(bool doReceive) // by default, doReceive = true
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize(); // no data left in the input buffer, so we need to read from the binder driver

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (long unsigned int)mOut.data(); // remember? writeTransactionData stored a binder_transaction_data in mOut

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (long unsigned int)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }
    ......
    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        ......
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0) // finally, the ioctl
            err = NO_ERROR;
        else
            err = -errno;
        ......
    } while (err == -EINTR); // the binder driver waits via wait_event_interruptible, so a signal can make
                             // binder_thread_read return before any data has been read;
                             // when err == -EINTR, we simply call ioctl again
    ......
    if (err >= NO_ERROR) { // adjust mIn and mOut according to the result of the ioctl
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < (ssize_t)mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        ......
        return NO_ERROR;
    }

    return err;
}
        talkWithDriver, true to its name, is where the actual interaction with the binder driver happens: it sends the data and then waits. As for
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
       recall the earlier implementations of IPCThreadState::self() and ProcessState::self(): mProcess holds the current process's state, and mDriverFD is the file descriptor of "/dev/binder". Let's pin down the state of bwr at this point:
bwr.write_size = mOut.dataSize();   // the BC_TRANSACTION cmd plus sizeof(binder_transaction_data)
bwr.write_buffer = mOut.data();     // address of the binder_transaction_data
bwr.write_consumed = 0;
bwr.read_size = mIn.dataCapacity(); // 256
bwr.read_buffer = mIn.data();       // empty buffer
bwr.read_consumed = 0;
       From the previous article's analysis: once we enter the kernel, the binder driver runs binder_ioctl, which first calls binder_thread_write to deliver the data to servicemanager and then calls binder_thread_read to wait for servicemanager's reply. So let's temporarily leave BpServiceManager and switch over to servicemanager to see how it receives and handles the request.
       But before leaving BpServiceManager, let's speculate about why IPCThreadState defers the ioctl to waitForResponse rather than doing it in writeTransactionData.
       Consider: calling ioctl inside writeTransactionData would block the Client immediately. With the current design, the Client can keep doing its own work and only blocks when it actually needs the Service's result. This is similar to the lazy loading done by Java class loaders, and it helps efficiency.
       Some readers may think of an even "better" scheme: split the ioctl in two. Call ioctl once in writeTransactionData with bwr.read_size = 0 to send the request, return and do other work, and finally, when the result is needed, call ioctl again in waitForResponse with bwr.write_size = 0. That way the Service receives and starts processing the request as early as possible, while the Client still avoids waiting prematurely. (A sketch of this hypothetical variant follows.)
       Ignoring the cost of the ioctl call, this design would indeed be "perfect". But unlike an ordinary function call, each ioctl costs two CPU mode switches (user to kernel and back), and each switch costs hundreds of CPU cycles, so the "perfect" design carries a real performance cost and cannot be assumed to suit every scenario. In the specific case where the Client is insensitive to the Service's processing latency and has plenty of its own work to do, the scheme can pay off; but in that situation, sending the request as an asynchronous (one-way) binder call and later querying the result with a synchronous call achieves much the same effect.
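       To make the trade-off concrete, the hypothetical "split" variant would look roughly like this (a sketch only: assumes fd is an open /dev/binder descriptor, out/in are prepared buffers, and the binder_write_read layout from the kernel UAPI header, whose path varies by kernel version):

#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h> // struct binder_write_read, BINDER_WRITE_READ

// Phase 1: push the BC_TRANSACTION out immediately, don't wait.
static void sendOnly(int fd, void* out, size_t outLen)
{
    struct binder_write_read bwr = {};
    bwr.write_size   = outLen;
    bwr.write_buffer = (uintptr_t)out;
    bwr.read_size    = 0;               // no read: ioctl returns once the data is queued
    ioctl(fd, BINDER_WRITE_READ, &bwr);
}

// Phase 2: later, when the result is actually needed, block for the reply.
static void readOnly(int fd, void* in, size_t inCap)
{
    struct binder_write_read bwr = {};
    bwr.write_size  = 0;                // nothing left to send
    bwr.read_size   = inCap;
    bwr.read_buffer = (uintptr_t)in;
    ioctl(fd, BINDER_WRITE_READ, &bwr); // blocks in binder_thread_read
}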
       Now, the ServiceManager side:
       9. binder_loop
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    unsigned readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(unsigned));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); // ioctl returns once a request has been read
        ......
        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func); // recall ServiceManager's startup: func = svcmgr_handler
        ......
    }
}
        The servicemanager process waits for client requests by calling ioctl directly. Since we just sent a request from the Client through BpServiceManager, we know what bwr looks like when the ioctl returns:
binder_transaction_data tr;
tr.code = CHECK_SERVICE_TRANSACTION;
bwr.read_consumed = sizeof(binder_transaction_data);
bwr.read_buffer = readbuf; // i.e. tr
       Recall once more step 1, BpServiceManager.checkService:
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name); // name is the requested service name, e.g. "media.player"
      The buffer should therefore hold "android.os.IServiceManager" followed by the requested service name (e.g. "media.player").
      10. binder_parse
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uint32_t *ptr, uint32_t size, binder_handler func)
{
    int r = 1;
    uint32_t *end = ptr + (size / 4);

    while (ptr < end) {
        uint32_t cmd = *ptr++;
        ......
        switch(cmd) { // cmd = BR_TRANSACTION
        ......
        case BR_TRANSACTION: {
            struct binder_txn *txn = (void *) ptr;
            ......
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;

                bio_init(&reply, rdata, sizeof(rdata), 4); // rdata is an empty buffer, so reply starts out empty;
                                                           // it will later hold the result of handling the request
                bio_init_from_txn(&msg, txn); // txn = bwr.read_buffer, i.e. the request data; initialize msg from it
                res = func(bs, txn, &msg, &reply); // func = svcmgr_handler
                binder_send_reply(bs, &reply, txn->data, res); // send back the result
            }
            ptr += sizeof(*txn) / sizeof(uint32_t);
            break;
        }
        ......
        }
    }

    return r;
}
    11. svcmgr_handler
int svcmgr_handler(struct binder_state *bs,
                   struct binder_txn *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    unsigned len;
    void *ptr;
    uint32_t strict_policy;
    int allow_isolated;

    if (txn->target != svcmgr_handle)
        return -1;

    // Equivalent to Parcel::enforceInterface(), reading the RPC
    // header with the strict mode policy mask and the interface name.
    // Note that we ignore the strict_policy and don't propagate it
    // further (since we do no outbound RPCs anyway).
    strict_policy = bio_get_uint32(msg); // normally strict_policy = 0
    s = bio_get_string16(msg, &len);     // s = "android.os.IServiceManager"
    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        fprintf(stderr,"invalid id %s\n", str8(s));
        return -1;
    }

    switch(txn->code) { // code = SVC_MGR_CHECK_SERVICE
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE: // SVC_MGR_GET_SERVICE and SVC_MGR_CHECK_SERVICE are equivalent
        s = bio_get_string16(msg, &len); // read the service name
        ptr = do_find_service(bs, s, len, txn->sender_euid); // look up the service by name
        if (!ptr)
            break;
        bio_put_ref(reply, ptr); // ptr is the handle; store it in reply
        return 0;

    ......

    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}
        As the code shows, whatever the client requests, the interface token is verified first:
s = bio_get_string16(msg, &len); // s = "android.os.IServiceManager"
if ((len != (sizeof(svcmgr_id) / 2)) ||
    memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
    fprintf(stderr,"invalid id %s\n", str8(s));
    return -1;
}
      We know the Client writes the interface token "android.os.IServiceManager":
const android::String16 IServiceManager::descriptor("android.os.IServiceManager");
const android::String16&
        IServiceManager::getInterfaceDescriptor() const {
    return IServiceManager::descriptor;
}
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
      This presumably guards against a mismatched client proxy accidentally invoking ServiceManager's binder (the BpBinder with mHandle = 0).
      The handler then dispatches on code; here it runs do_find_service.
      12. do_find_service
void *do_find_service(struct binder_state *bs, uint16_t *s, unsigned len, unsigned uid)
{
    struct svcinfo *si;
    si = find_svc(s, len);

    if (si && si->ptr) {
        if (!si->allow_isolated) {
            // If this service doesn't allow access from isolated processes,
            // then check the uid to see if it is isolated.
            unsigned appid = uid % AID_USER; // AID_USER = 100000
            if (appid >= AID_ISOLATED_START && appid <= AID_ISOLATED_END) { // AID_ISOLATED_START = 99000, AID_ISOLATED_END = 99999
                return 0;
            }
        }
        return si->ptr;
    } else {
        return 0;
    }
}
        do_find_service calls find_svc to look up the service handle for the given service name. It also takes the requesting process's uid into account: if the caller runs in an isolated process, i.e. a Service declared with android:isolatedProcess="true", access can be denied. (A small sketch of the uid check follows.)
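        The uid arithmetic is simple enough to check in isolation (constants as given in the comments above):

#include <cstdio>

static const unsigned AID_USER           = 100000; // uid offset per Android user
static const unsigned AID_ISOLATED_START = 99000;  // first isolated-process appid
static const unsigned AID_ISOLATED_END   = 99999;  // last isolated-process appid

static bool isIsolated(unsigned uid)
{
    unsigned appid = uid % AID_USER; // strip the Android user id
    return appid >= AID_ISOLATED_START && appid <= AID_ISOLATED_END;
}

int main()
{
    std::printf("%d\n", isIsolated(1000));           // 0: the system uid is not isolated
    std::printf("%d\n", isIsolated(99001));          // 1: isolated appid, user 0
    std::printf("%d\n", isIsolated(100000 + 99001)); // 1: same appid, user 1
    return 0;
}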
       13. find_svc
struct svcinfo *find_svc(uint16_t *s16, unsigned len)
{
    struct svcinfo *si;

    for (si = svclist; si; si = si->next) { // walk svclist looking for the service
        if ((len == si->len) &&
            !memcmp(s16, si->name, len * sizeof(uint16_t))) {
            return si;
        }
    }
    return 0;
}
        This function needs no further explanation.
        Back in step 12, do_find_service returns si->ptr, i.e. the handle.
        Back in step 11, if ptr is valid, svcmgr_handler calls bio_put_ref to store ptr in reply, then returns to step 10, binder_parse.
        binder_parse then calls binder_send_reply to send back the result.
        14. binder_send_reply
void binder_send_reply(struct binder_state *bs,
                       struct binder_io *reply,
                       void *buffer_to_free,
                       int status)
{
    struct {
        uint32_t cmd_free;
        void *buffer;
        uint32_t cmd_reply;
        struct binder_txn txn;
    } __attribute__((packed)) data;

    data.cmd_free = BC_FREE_BUFFER; // first send BC_FREE_BUFFER to release the kernel-side buffer
    data.buffer = buffer_to_free;
    data.cmd_reply = BC_REPLY;      // then send the reply
    data.txn.target = 0; // the BC_REPLY command ignores the target, cookie, and code fields
    data.txn.cookie = 0;
    data.txn.code = 0;
    if (status) { // status = 0 here, so the else branch runs
        data.txn.flags = TF_STATUS_CODE;
        data.txn.data_size = sizeof(int);
        data.txn.offs_size = 0;
        data.txn.data = &status;
        data.txn.offs = 0;
    } else {
        data.txn.flags = 0;
        data.txn.data_size = reply->data - reply->data0;
        data.txn.offs_size = ((char*) reply->offs) - ((char*) reply->offs0);
        data.txn.data = reply->data0;
        data.txn.offs = reply->offs0;
    }
    binder_write(bs, &data, sizeof(data));
}
        The reply structure here carries a binder_object struct:
struct binder_object
{
    uint32_t type;
    uint32_t flags;
    void *pointer;
    void *cookie;
};
       Compare this with the flat_binder_object from the previous article:
struct flat_binder_object {
    /* 8 bytes for large_flat_header. */
    unsigned long    type;
    unsigned long    flags;

    /* 8 bytes of data. */
    union {
        void        *binder;  /* local object */
        signed long  handle;  /* remote object */
    };

    /* extra data associated with local object */
    void            *cookie;
};
       binder_object is essentially a copy of flat_binder_object.
       So servicemanager returns a flat_binder_object via the BC_REPLY command.
       15. binder_write
int binder_write(struct binder_state *bs, void *data, unsigned len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (unsigned) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;

    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); // write the data via the BINDER_WRITE_READ ioctl
    if (res < 0) {
        fprintf(stderr,"binder_write: ioctl failed (%s)\n",
                strerror(errno));
    }
    return res;
}
       After the ioctl, binder_write returns to binder_send_reply (step 14). From the previous article's analysis of the binder driver, we know what happens in the kernel: binder_ioctl first calls binder_thread_write, which in turn calls binder_transaction. binder_transaction processes the flat_binder_object: it creates a new binder_ref in the client process pointing at the target binder_node, rewrites flat_binder_object.handle to the new binder_ref.desc, and then wakes the Client with wake_up_interruptible. Control then returns through binder_transaction -> binder_thread_write -> binder_ioctl to the user-space binder_write, which returns to binder_send_reply.
       binder_send_reply then returns to step 10, binder_parse.
       Finally, binder_parse returns to step 9, binder_loop, which calls ioctl again to wait for the next request.

       Now, back to the client process.
       Servicemanager has just woken the client's wait queue in the kernel, so wait_event_interruptible returns inside binder_thread_read, which returns to binder_ioctl and finally back to user space in step 8, IPCThreadState->talkWithDriver():
do {
    ......
    if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0) // the ioctl we have been waiting on
        err = NO_ERROR;
    else
        err = -errno;
    ......
} while (err == -EINTR); // the driver waits via wait_event_interruptible, so a signal can make
                         // binder_thread_read return early; on -EINTR we call ioctl again
......
if (err >= NO_ERROR) { // adjust mIn and mOut according to the result of the ioctl
    if (bwr.write_consumed > 0) {
        if (bwr.write_consumed < (ssize_t)mOut.dataSize())
            mOut.remove(0, bwr.write_consumed);
        else
            mOut.setDataSize(0);
    }
    if (bwr.read_consumed > 0) {
        mIn.setDataSize(bwr.read_consumed);
        mIn.setDataPosition(0);
    }
    ......
    return NO_ERROR;
}
       mIn now holds the bwr.read_consumed bytes that were read. We then return to step 7, IPCThreadState->waitForResponse:
case BR_REPLY:
    {
        binder_transaction_data tr;
        err = mIn.read(&tr, sizeof(tr));
        ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
        if (err != NO_ERROR) goto finish;

        if (reply) {
            if ((tr.flags & TF_STATUS_CODE) == 0) {
                reply->ipcSetDataReference(
                    reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                    tr.data_size,
                    reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                    tr.offsets_size/sizeof(size_t),
                    freeBuffer, this);
            } else {
                err = *static_cast<const status_t*>(tr.data.ptr.buffer);
                freeBuffer(NULL,
                    reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                    tr.data_size,
                    reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                    tr.offsets_size/sizeof(size_t), this);
            }
        } else {
            freeBuffer(NULL,
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(size_t), this);
            continue;
        }
    }
    goto finish;
          Here the data is read from mIn into the tr struct, and from tr into the reply Parcel.
          We then return to step 5, IPCThreadState->transact,
          then to step 2, BpBinder->transact,
          and finally to step 1, BpServiceManager.checkService, which calls:
reply.readStrongBinder();
         This reads the handle value out of the flat_binder_object, constructs a BpBinder instance, and returns it.

addService

          addService lets a service register itself with servicemanager. (A sketch of the registering side follows.)
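          Before diving into the implementation, a sketch of how a native service would typically register itself ("my.service" and MyService are made-up names; assumes an AOSP build linking against libbinder):

#include <binder/Binder.h>
#include <binder/IPCThreadState.h>
#include <binder/IServiceManager.h>
#include <binder/ProcessState.h>

using namespace android;

class MyService : public BBinder {}; // placeholder local binder object

int main()
{
    defaultServiceManager()->addService(String16("my.service"), new MyService());
    ProcessState::self()->startThreadPool();  // spawn binder worker threads
    IPCThreadState::self()->joinThreadPool(); // serve incoming transactions
    return 0;
}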
          1. BpServiceManager.addService:
virtual status_t addService(const String16& name, const sp<IBinder>& service,
        bool allowIsolated)
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor()); // whatever the request, the interface token is always sent
    data.writeString16(name);        // service name
    data.writeStrongBinder(service); // the BBinder (local object) address
    data.writeInt32(allowIsolated ? 1 : 0);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}
        Since checkService already covered how the data travels, we skip straight to servicemanager's handler:
        2. svcmgr_handler
int svcmgr_handler(struct binder_state *bs,
                   struct binder_txn *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    unsigned len;
    void *ptr;
    uint32_t strict_policy;
    int allow_isolated;

    ......

    switch(txn->code) {
    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len); // service name
        ptr = bio_get_ref(msg);          // handle
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        if (do_add_service(bs, s, len, ptr, txn->sender_euid, allow_isolated))
            return -1;
        break;
    ......
    }

    bio_put_uint32(reply, 0); // reply with 0: request handled successfully
    return 0;
}
        3. do_add_service
int do_add_service(struct binder_state *bs,
                   uint16_t *s, unsigned len,
                   void *ptr, unsigned uid, int allow_isolated)
{
    struct svcinfo *si;
    ......
    if (!svc_can_register(uid, s)) { // check whether this service may be registered
        ALOGE("add_service('%s',%p) uid=%d - PERMISSION DENIED\n",
             str8(s), ptr, uid);
        return -1;
    }

    si = find_svc(s, len);
    if (si) { // the service is already registered
        if (si->ptr) {
            ALOGE("add_service('%s',%p) uid=%d - ALREADY REGISTERED, OVERRIDE\n",
                 str8(s), ptr, uid);
            svcinfo_death(bs, si);
        }
        si->ptr = ptr;
    } else { // not yet registered
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) {
            ALOGE("add_service('%s',%p) uid=%d - OUT OF MEMORY\n",
                 str8(s), ptr, uid);
            return -1;
        }
        si->ptr = ptr;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
        si->name[len] = '\0';
        si->death.func = svcinfo_death;
        si->death.ptr = si;
        si->allow_isolated = allow_isolated;
        si->next = svclist;
        svclist = si; // store it in svclist
    }

    binder_acquire(bs, ptr); // bump the reference counts of the binder_ref and binder_node
    binder_link_to_death(bs, ptr, &si->death); // register a death notification for the binder_ref
    return 0;
}
        Because servicemanager restricts which services may be registered, do_add_service must first call svc_can_register.
        4. svc_can_register
int svc_can_register(unsigned uid, uint16_t *name)
{
    unsigned n;

    if ((uid == 0) || (uid == AID_SYSTEM)) // root and system may register anything
        return 1;

    for (n = 0; n < sizeof(allowed) / sizeof(allowed[0]); n++) // everyone else must match a specific uid/name pair
        if ((uid == allowed[n].uid) && str16eq(name, allowed[n].name))
            return 1;

    return 0;
}
        This is where the allowed table comes in:
static struct {
    unsigned uid;
    const char *name;
} allowed[] = {
    { AID_MEDIA, "media.audio_flinger" },
    { AID_MEDIA, "media.player" },
    { AID_MEDIA, "media.camera" },
    { AID_MEDIA, "media.audio_policy" },
    { AID_DRM,   "drm.drmManager" },
    { AID_NFC,   "nfc" },
    { AID_BLUETOOTH, "bluetooth" },
    { AID_RADIO, "radio.phone" },
    { AID_RADIO, "radio.sms" },
    { AID_RADIO, "radio.phonesubinfo" },
    { AID_RADIO, "radio.simphonebook" },
/* TODO: remove after phone services are updated: */
    { AID_RADIO, "phone" },
    { AID_RADIO, "sip" },
    { AID_RADIO, "isms" },
    { AID_RADIO, "iphonesubinfo" },
    { AID_RADIO, "simphonebook" },
    { AID_MEDIA, "common_time.clock" },
    { AID_MEDIA, "common_time.config" },
};
        So if a device developer needs to add a service from a non-system app, the allowed table must be extended.
        Now assume svc_can_register returns 1. Next, do_add_service (step 3) calls find_svc. We covered that function already, so we know that if the service has been registered before, the previously registered svcinfo is returned:
struct svcinfo {
    struct svcinfo *next;       // next svcinfo in the list
    void *ptr;                  // the handle
    struct binder_death death;
    int allow_isolated;
    unsigned len;               // length of the service name
    uint16_t name[0];           // the service name
};
struct binder_death {
    void (*func)(struct binder_state *bs, void *ptr);
    void *ptr;
};
      If si is non-NULL, the handle must be replaced:
if (si->ptr) {
    ALOGE("add_service('%s',%p) uid=%d - ALREADY REGISTERED, OVERRIDE\n",
         str8(s), ptr, uid);
    svcinfo_death(bs, si); // release the old binder_ref
}
si->ptr = ptr;
       This calls for a word about svcinfo_death.
       5. svcinfo_death
void svcinfo_death(struct binder_state *bs, void *ptr)
{
    struct svcinfo *si = ptr;
    ALOGI("service '%s' died\n", str8(si->name));
    if (si->ptr) {
        binder_release(bs, si->ptr);
        si->ptr = 0;
    }
}
        svcinfo_death is the binder death-notification handler mentioned in the previous article; its job is to release the proxy. But ServiceManager does not use BpBinder, so it simply calls binder_release, a more direct way of doing the same thing.
        6. binder_release
void binder_release(struct binder_state *bs, void *ptr)
{
    uint32_t cmd[2];
    cmd[0] = BC_RELEASE;
    cmd[1] = (uint32_t) ptr;
    binder_write(bs, cmd, sizeof(cmd));
}
     binder_release sends the BC_RELEASE command to decrement the kernel-side binder_ref's reference count. When the count reaches 0, the binder_ref is freed, which in turn decrements the corresponding binder_node's reference count by one; likewise, when that count reaches 0, the binder_node is freed.
     Back in step 3, do_add_service: if the service has not been registered before, a new svcinfo is created:
si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
if (!si) {
    ALOGE("add_service('%s',%p) uid=%d - OUT OF MEMORY\n",
         str8(s), ptr, uid);
    return -1;
}
si->ptr = ptr;
si->len = len;
memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
si->name[len] = '\0';
si->death.func = svcinfo_death;
si->death.ptr = si;
si->allow_isolated = allow_isolated;
si->next = svclist;
svclist = si; // store it in svclist
          Then, whether svcinfo is pre-existing or newly created, the following always runs:
    binder_acquire(bs, ptr); // bump the reference counts of the binder_ref and binder_node
    binder_link_to_death(bs, ptr, &si->death); // register a death notification for the binder_ref
    return 0;
     7. binder_link_to_death
void binder_link_to_death(struct binder_state *bs, void *ptr, struct binder_death *death)
{
    uint32_t cmd[3];
    cmd[0] = BC_REQUEST_DEATH_NOTIFICATION;
    cmd[1] = (uint32_t) ptr;
    cmd[2] = (uint32_t) death;
    binder_write(bs, cmd, sizeof(cmd));
}
       This sends the BC_REQUEST_DEATH_NOTIFICATION command to register the death notification. Later, when the registered binder dies, servicemanager handles it in binder_parse:
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uint32_t *ptr, uint32_t size, binder_handler func)
{
    int r = 1;
    uint32_t *end = ptr + (size / 4);

    while (ptr < end) {
        uint32_t cmd = *ptr++;
        ......
        switch(cmd) {
        case BR_DEAD_BINDER: {
            struct binder_death *death = (void*) *ptr++; // death is the struct registered above
            death->func(bs, death->ptr); // death->func is svcinfo_death, which in turn calls binder_release
            break;
        }
        ......
        }
    }

    return r;
}
        Now back to step 3: do_add_service returns 0 to step 2, svcmgr_handler, which calls
    bio_put_uint32(reply, 0); // reply with 0: request handled successfully
         and then returns to binder_parse, which calls binder_send_reply to send the 0 back to BpServiceManager.
         Finally, BpServiceManager.addService can read the return value:
  status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply); // err is the 0 sent by servicemanager
        return err == NO_ERROR ? reply.readExceptionCode() : err;

listServices

        listServices returns the names of all registered services.
         1. BpServiceManager.listServices()
virtual Vector<String16> listServices()
{
    Vector<String16> res;

    int n = 0;
    for (;;) {
        Parcel data, reply;
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeInt32(n++);
        status_t err = remote()->transact(LIST_SERVICES_TRANSACTION, data, &reply);
        if (err != NO_ERROR)
            break;
        res.add(reply.readString16());
    }
    return res;
}

         As we can see, listServices is implemented as a loop, fetching one service name per round trip. As before, we go straight to servicemanager's handling logic.

         2. svcmgr_handler

int svcmgr_handler(struct binder_state *bs,
                   struct binder_txn *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    unsigned len;
    void *ptr;
    uint32_t strict_policy;
    int allow_isolated;

    ......

    switch(txn->code) {
    case SVC_MGR_LIST_SERVICES: {
        unsigned n = bio_get_uint32(msg);

        si = svclist;
        while ((n-- > 0) && si)
            si = si->next; // walk to the n-th registered service
        if (si) {
            bio_put_string16(reply, si->name); // return that service's name
            return 0;
        }
        return -1; // n >= the number of registered services
    }
    ......
    }
    ......
}
       Back in the client process's BpServiceManager: until the loop exits, each iteration obtains one service name via

reply.readString16()

       and once the loop exits, all collected names are returned. (A usage sketch follows.)
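       For completeness, a sketch of using it from a client, much like what the `service list` shell command does (assumes an AOSP build linking against libbinder):

#include <cstdio>
#include <binder/IServiceManager.h>
#include <utils/String8.h>

using namespace android;

int main()
{
    sp<IServiceManager> sm = defaultServiceManager();
    Vector<String16> names = sm->listServices();
    for (size_t i = 0; i < names.size(); i++)
        std::printf("%zu\t%s\n", i, String8(names[i]).string()); // one registered name per line
    return 0;
}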

Summary

  1. As one of the foundations of the Android system, ServiceManager starts after the Linux kernel boots and before the Zygote process.
  2. ServiceManager provides clients with the BpServiceManager class as a proxy, and BpServiceManager serves Clients through the IServiceManager interface.
  3. A client obtains the BpServiceManager via the defaultServiceManager function.
