Binder Summary


The Binder mechanism is everywhere in Android: apps talking to system services, system services talking to each other, and apps talking to each other all go through Binder for cross-process communication. So how exactly does Binder implement cross-process communication? Why is it said that Binder copies the data only once? And how can a client, by creating a BpBinder from nothing but a handle, locate the server and deliver data to it? These were my questions before I studied Binder.
This article skips the Binder basics and focuses on answering those questions.
Let's start from the point where a service is added to servicemanager:
1. First, the client calls the addService method of BpServiceManager (the proxy for servicemanager):

virtual status_t addService(const String16& name, const sp<IBinder>& service,
        bool allowIsolated)
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);
    data.writeInt32(allowIsolated ? 1 : 0);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}

This writes the service name and the service object into the Parcel. Let's look at the implementation of writeStrongBinder:

status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}

status_t flatten_binder(const sp<ProcessState>& /*proc*/,
    const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;

    obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    if (binder != NULL) {
        IBinder *local = binder->localBinder();
        if (!local) {
            ......
        } else {
            obj.type = BINDER_TYPE_BINDER;
            obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
            obj.cookie = reinterpret_cast<uintptr_t>(local);
        }
    }
    ......
    return finish_flatten_binder(binder, obj, out);
}

inline static status_t finish_flatten_binder(
    const sp<IBinder>& /*binder*/, const flat_binder_object& flat, Parcel* out)
{
    return out->writeObject(flat, false);
}

status_t Parcel::writeObject(const flat_binder_object& val, bool nullMetaData)
{
    const bool enoughData = (mDataPos+sizeof(val)) <= mDataCapacity;
    const bool enoughObjects = mObjectsSize < mObjectsCapacity;
    if (enoughData && enoughObjects) {
restart_write:
        *reinterpret_cast<flat_binder_object*>(mData+mDataPos) = val;

        // remember if it's a file descriptor
        if (val.type == BINDER_TYPE_FD) {
            if (!mAllowFds) {
                // fail before modifying our object index
                return FDS_NOT_ALLOWED;
            }
            mHasFds = mFdsKnown = true;
        }

        // Need to write meta-data?
        if (nullMetaData || val.binder != 0) {
            mObjects[mObjectsSize] = mDataPos;
            acquire_object(ProcessState::self(), val, this, &mOpenAshmemSize);
            mObjectsSize++;
        }

        return finishWrite(sizeof(flat_binder_object));
    }
    ......
}

Here a flat_binder_object struct is created and its type, binder, and cookie members filled in; note that this same struct is also used in the kernel. The struct is then written into the Parcel. Pay particular attention to the offsets array mObjects: mObjects[mObjectsSize] records where the flat_binder_object sits inside the mData buffer, and the kernel later uses this array to locate each flat_binder_object. With the data ready, the call goes through BpBinder into IPCThreadState's transact, which wraps the data in another layer:

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }

    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    IF_LOG_COMMANDS() {
        TextOutput::Bundle _b(alog);
        if (outAvail != 0) {
            alog << "Sending commands to driver: " << indent;
            const void* cmds = (const void*)bwr.write_buffer;
            const void* end = ((const uint8_t*)cmds)+bwr.write_size;
            alog << HexDump(cmds, bwr.write_size) << endl;
            while (cmds < end) cmds = printCommand(alog, cmds);
            alog << dedent;
        }
        alog << "Size of receive buffer: " << bwr.read_size
            << ", needRead: " << needRead << ", doReceive: " << doReceive << endl;
    }

    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(__ANDROID__)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);

    IF_LOG_COMMANDS() {
        alog << "Our err: " << (void*)(intptr_t)err << ", write consumed: "
            << bwr.write_consumed << " (of " << mOut.dataSize()
            << "), read consumed: " << bwr.read_consumed << endl;
    }

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        IF_LOG_COMMANDS() {
            TextOutput::Bundle _b(alog);
            alog << "Remaining data size: " << mOut.dataSize() << endl;
            alog << "Received commands from driver: " << indent;
            const void* cmds = mIn.data();
            const void* end = mIn.data() + mIn.dataSize();
            alog << HexDump(cmds, mIn.dataSize()) << endl;
            while (cmds < end) cmds = printReturnCommand(alog, cmds);
            alog << dedent;
        }
        return NO_ERROR;
    }

    return err;
}

writeTransactionData creates a binder_transaction_data struct tr and assigns the contents stored in the Parcel to tr's members — by address; the payload itself is not copied. Finally, talkWithDriver calls ioctl to hand the data to the binder driver (the kernel). Note that this function wraps the data once more: it builds a binder_write_read struct bwr whose write_buffer points at the command stream holding the binder_transaction_data tr above.
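Before following the data into the kernel, it may help to see all three layers at once. Below is a minimal sketch using simplified stand-ins for the real structs (field subsets only; hypothetical sizes, not Android code) to make the wrapping explicit:

#include <cstdint>
#include <cstddef>
#include <cstring>

// Simplified stand-ins for the structs quoted above (field subsets only).
struct flat_binder_object { uint32_t type; uintptr_t binder; uintptr_t cookie; };

struct binder_transaction_data {
    uint32_t code;
    size_t   data_size;
    size_t   offsets_size;
    struct { uintptr_t buffer; uintptr_t offsets; } data_ptr;  // -> Parcel mData / mObjects
};

struct binder_write_read {
    size_t    write_size;
    uintptr_t write_buffer;  // -> [BC_TRANSACTION][binder_transaction_data]
};

int main() {
    // Layer 1: the Parcel payload (mData) holding one flat_binder_object,
    // located by the offsets array (mObjects).
    uint8_t mData[sizeof(flat_binder_object)] = {};
    size_t  mObjects[1] = {0};                 // the object sits at offset 0
    flat_binder_object obj = {0 /* stands for BINDER_TYPE_BINDER */, 0, 0};
    std::memcpy(mData, &obj, sizeof(obj));

    // Layer 2: binder_transaction_data records only the addresses and sizes
    // of the Parcel's two buffers; the payload is not copied.
    binder_transaction_data tr = {};
    tr.data_size        = sizeof(mData);
    tr.offsets_size     = sizeof(mObjects);
    tr.data_ptr.buffer  = reinterpret_cast<uintptr_t>(mData);
    tr.data_ptr.offsets = reinterpret_cast<uintptr_t>(mObjects);

    // Layer 3: binder_write_read points at the command stream holding tr
    // (the BC_TRANSACTION command word is omitted here for brevity).
    binder_write_read bwr = {};
    bwr.write_size   = sizeof(tr);
    bwr.write_buffer = reinterpret_cast<uintptr_t>(&tr);

    // ioctl(mDriverFD, BINDER_WRITE_READ, &bwr) hands only these small
    // headers to the kernel; the payload still lives in the Parcel.
    (void)bwr;
    return 0;
}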

2. Next, look at the driver's binder_ioctl function. With cmd set to BINDER_WRITE_READ, it calls straight into binder_ioctl_write_read:

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data;
    struct binder_thread *thread;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;

    /*pr_info("binder_ioctl: %d:%d %x %lx\n",
            proc->pid, current->pid, cmd, arg);*/

    trace_binder_ioctl(cmd, arg);

    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret)
        goto err_unlocked;

    binder_lock(__func__);
    thread = binder_get_thread(proc);
    if (thread == NULL) {
        ret = -ENOMEM;
        goto err;
    }

    switch (cmd) {
    case BINDER_WRITE_READ:
        ret = binder_ioctl_write_read(filp, cmd, arg, thread);
        if (ret)
            goto err;
        break;
......
}

static int binder_ioctl_write_read(struct file *filp,
                unsigned int cmd, unsigned long arg,
                struct binder_thread *thread)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;

    if (size != sizeof(struct binder_write_read)) {
        ret = -EINVAL;
        goto out;
    }
    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
    binder_debug(BINDER_DEBUG_READ_WRITE,
             "%d:%d write %lld at %016llx, read %lld at %016llx\n",
             proc->pid, thread->pid,
             (u64)bwr.write_size, (u64)bwr.write_buffer,
             (u64)bwr.read_size, (u64)bwr.read_buffer);

    if (bwr.write_size > 0) {
        ret = binder_thread_write(proc, thread,
                      bwr.write_buffer,
                      bwr.write_size,
                      &bwr.write_consumed);
        trace_binder_write_done(ret);
        if (ret < 0) {
            bwr.read_consumed = 0;
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                ret = -EFAULT;
            goto out;
        }
    }
......
}

There is one copy in the code above: copy_from_user copies the binder_write_read struct passed in by the user process into the kernel variable bwr. No payload is copied, though — only the struct itself, whose members still hold user-space addresses.
Memory mapping is not used for this direction, because the mmap that set up the binder buffer used PROT_READ: the user-space process has read-only access to the physical memory the kernel allocates.
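For reference, a minimal sketch of how that mapping is created when ProcessState opens /dev/binder (the flags match the AOSP source; BINDER_VM_SIZE mirrors its roughly 1 MB minus two pages):

#include <fcntl.h>
#include <sys/mman.h>

#define BINDER_VM_SIZE ((1 * 1024 * 1024) - (4096 * 2))

int main() {
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
    // PROT_READ only: this window is where the process *receives*
    // transactions in place; it can never write through it, so the sending
    // direction must go through copy_from_user.
    void *vmStart = mmap(nullptr, BINDER_VM_SIZE, PROT_READ,
                         MAP_PRIVATE | MAP_NORESERVE, fd, 0);
    (void)vmStart;  // ProcessState keeps this as mVMStart
    return 0;
}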
Next comes binder_thread_write, with cmd BC_TRANSACTION; ptr is write_buffer, i.e. the binder_transaction_data above, still a user-space address. The BC_TRANSACTION case calls copy_from_user again to copy the binder_transaction_data header into the kernel variable tr:

......
        case BC_TRANSACTION:
        case BC_REPLY: {
            struct binder_transaction_data tr;

            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
            break;
        }
......

At this point the command headers — binder_write_read and binder_transaction_data — have been copied from user space into kernel space; the payload they point at still sits in the sender's user space.

3. We then enter the binder_transaction function:

    if (reply) {
        ......
    } else {
        if (tr->target.handle) {
            struct binder_ref *ref;

            ref = binder_get_ref(proc, tr->target.handle);
            if (ref == NULL) {
                binder_user_error("%d:%d got transaction to invalid handle\n",
                    proc->pid, thread->pid);
                return_error = BR_FAILED_REPLY;
                goto err_invalid_target_handle;
            }
            target_node = ref->node;
        } else {
            target_node = binder_context_mgr_node;
            if (target_node == NULL) {
                return_error = BR_DEAD_REPLY;
                goto err_no_context_mgr_node;
            }
        }
        e->to_node = target_node->debug_id;
        target_proc = target_node->proc;
        if (target_proc == NULL) {
            return_error = BR_DEAD_REPLY;
            goto err_dead_binder;
        }
        if (security_binder_transaction(proc->tsk, target_proc->tsk) < 0) {
            return_error = BR_FAILED_REPLY;
            goto err_invalid_target_handle;
        }
        if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {
            struct binder_transaction *tmp;

            tmp = thread->transaction_stack;
            if (tmp->to_thread != thread) {
                binder_user_error("%d:%d got new transaction with bad transaction stack, transaction %d has target %d:%d\n",
                    proc->pid, thread->pid, tmp->debug_id,
                    tmp->to_proc ? tmp->to_proc->pid : 0,
                    tmp->to_thread ?
                    tmp->to_thread->pid : 0);
                return_error = BR_FAILED_REPLY;
                goto err_bad_call_stack;
            }
            while (tmp) {
                if (tmp->from && tmp->from->proc == target_proc)
                    target_thread = tmp->from;
                tmp = tmp->from_parent;
            }
        }
    }
    if (target_thread) {
        e->to_thread = target_thread->pid;
        target_list = &target_thread->todo;
        target_wait = &target_thread->wait;
    } else {
        target_list = &target_proc->todo;
        target_wait = &target_proc->wait;
    }
    e->to_proc = target_proc->pid;

    /* TODO: reuse incoming transaction for reply */
    t = kzalloc(sizeof(*t), GFP_KERNEL);
    if (t == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_alloc_t_failed;
    }
    binder_stats_created(BINDER_STAT_TRANSACTION);

    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
    if (tcomplete == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_alloc_tcomplete_failed;
    }
    binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);

    t->debug_id = ++binder_last_id;
    e->debug_id = t->debug_id;

    if (reply)
        binder_debug(BINDER_DEBUG_TRANSACTION,
                 "%d:%d BC_REPLY %d -> %d:%d, data %016llx-%016llx size %lld-%lld\n",
                 proc->pid, thread->pid, t->debug_id,
                 target_proc->pid, target_thread->pid,
                 (u64)tr->data.ptr.buffer,
                 (u64)tr->data.ptr.offsets,
                 (u64)tr->data_size, (u64)tr->offsets_size);
    else
        binder_debug(BINDER_DEBUG_TRANSACTION,
                 "%d:%d BC_TRANSACTION %d -> %d - node %d, data %016llx-%016llx size %lld-%lld\n",
                 proc->pid, thread->pid, t->debug_id,
                 target_proc->pid, target_node->debug_id,
                 (u64)tr->data.ptr.buffer,
                 (u64)tr->data.ptr.offsets,
                 (u64)tr->data_size, (u64)tr->offsets_size);

    if (!reply && !(tr->flags & TF_ONE_WAY))
        t->from = thread;
    else
        t->from = NULL;
    t->sender_euid = task_euid(proc->tsk);
    t->to_proc = target_proc;
    t->to_thread = target_thread;
    t->code = tr->code;
    t->flags = tr->flags;
    t->priority = task_nice(current);

    trace_binder_transaction(reply, t, target_node);

    t->buffer = binder_alloc_buf(target_proc, tr->data_size,
        tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
    if (t->buffer == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_binder_alloc_buf_failed;
    }
    t->buffer->allow_user_free = 0;
    t->buffer->debug_id = t->debug_id;
    t->buffer->transaction = t;
    t->buffer->target_node = target_node;
    trace_binder_transaction_alloc_buf(t->buffer);
    if (target_node)
        binder_inc_node(target_node, 1, 0, NULL);

    offp = (binder_size_t *)(t->buffer->data +
                 ALIGN(tr->data_size, sizeof(void *)));

    if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
               tr->data.ptr.buffer, tr->data_size)) {
        binder_user_error("%d:%d got transaction with invalid data ptr\n",
                proc->pid, thread->pid);
        return_error = BR_FAILED_REPLY;
        goto err_copy_data_failed;
    }
    if (copy_from_user(offp, (const void __user *)(uintptr_t)
               tr->data.ptr.offsets, tr->offsets_size)) {
        binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
                proc->pid, thread->pid);
        return_error = BR_FAILED_REPLY;
        goto err_copy_data_failed;
    }

reply is false, so the first branch is skipped. When tr->target.handle is non-zero, the handle stored earlier in the BpBinder is used to find the corresponding binder reference (binder_ref), and through that ref the binder node target_node and its owning process target_proc; for addService the handle is 0, so target_node is binder_context_mgr_node, servicemanager's node. The driver then tries to pick a target thread — an optimization that reuses a thread in the target process already waiting for a reply on this transaction stack. Next, a binder_transaction t is allocated; a transaction is generally created by the sender and processed by the receiver. Now the key point: t->buffer is allocated by binder_alloc_buf from target_proc, i.e. from kernel buffer space that the target process provides for this transaction. Then the start address of the offsets array (the one locating each flat_binder_object) is computed, right after the payload. The two copy_from_user calls copy the payload and the offsets array from the source process's user space straight into that target-process kernel buffer — this is the single data copy.
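A small sketch of the resulting buffer layout and the offp arithmetic above:

#include <cstdint>
#include <cstddef>

// t->buffer->data: [ payload (data_size) | pad | offsets[] (offsets_size) ]
// The offsets array starts right after the payload, rounded up to pointer
// alignment -- the same computation as ALIGN(tr->data_size, sizeof(void *)).
static size_t align_to_ptr(size_t n) {
    return (n + sizeof(void *) - 1) & ~(sizeof(void *) - 1);
}

uint8_t *offsets_start(uint8_t *buffer_data, size_t data_size) {
    return buffer_data + align_to_ptr(data_size);  // == offp
}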
Next comes the handling of binder references, binder nodes, and local binder objects:

......
    off_end = (void *)offp + tr->offsets_size;
    off_min = 0;
    for (; offp < off_end; offp++) {
        struct flat_binder_object *fp;

        if (*offp > t->buffer->data_size - sizeof(*fp) ||
            *offp < off_min ||
            t->buffer->data_size < sizeof(*fp) ||
            !IS_ALIGNED(*offp, sizeof(u32))) {
            binder_user_error("%d:%d got transaction with invalid offset, %lld (min %lld, max %lld)\n",
                      proc->pid, thread->pid, (u64)*offp,
                      (u64)off_min,
                      (u64)(t->buffer->data_size -
                      sizeof(*fp)));
            return_error = BR_FAILED_REPLY;
            goto err_bad_offset;
        }
        fp = (struct flat_binder_object *)(t->buffer->data + *offp);
        off_min = *offp + sizeof(struct flat_binder_object);
        switch (fp->type) {
        case BINDER_TYPE_BINDER:
        case BINDER_TYPE_WEAK_BINDER: {
            struct binder_ref *ref;
            struct binder_node *node = binder_get_node(proc, fp->binder);

            if (node == NULL) {
                node = binder_new_node(proc, fp->binder, fp->cookie);
                if (node == NULL) {
                    return_error = BR_FAILED_REPLY;
                    goto err_binder_new_node_failed;
                }
                node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
                node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
            }
            if (fp->cookie != node->cookie) {
                binder_user_error("%d:%d sending u%016llx node %d, cookie mismatch %016llx != %016llx\n",
                    proc->pid, thread->pid,
                    (u64)fp->binder, node->debug_id,
                    (u64)fp->cookie, (u64)node->cookie);
                return_error = BR_FAILED_REPLY;
                goto err_binder_get_ref_for_node_failed;
            }
            if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {
                return_error = BR_FAILED_REPLY;
                goto err_binder_get_ref_for_node_failed;
            }
            ref = binder_get_ref_for_node(target_proc, node);
            if (ref == NULL) {
                return_error = BR_FAILED_REPLY;
                goto err_binder_get_ref_for_node_failed;
            }
            if (fp->type == BINDER_TYPE_BINDER)
                fp->type = BINDER_TYPE_HANDLE;
            else
                fp->type = BINDER_TYPE_WEAK_HANDLE;
            fp->handle = ref->desc;
            binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE,
                       &thread->todo);

            trace_binder_transaction_node_to_ref(t, node, ref);
            binder_debug(BINDER_DEBUG_TRANSACTION,
                     "        node %d u%016llx -> ref %d desc %d\n",
                     node->debug_id, (u64)node->ptr,
                     ref->debug_id, ref->desc);
        } break;
......

Let's go straight to the BINDER_TYPE_BINDER case. Through the offsets array the driver retrieves the flat_binder_object carrying the local binder object — the very service passed to addService earlier.
First, binder_get_node uses the local object fp->binder and the source process proc to look up the binder node; if none exists, binder_new_node creates one. A binder node is thus tied to its local binder object and to the source process proc.
Then binder_get_ref_for_node uses the target process target_proc and the node to obtain a binder reference: an existing one is returned directly, otherwise a new one is created. Note the for loop inside it: it picks the smallest unused descriptor value for the newly created reference, new_ref->desc, and inserts the new reference into target_proc's red-black tree of references; that desc is then assigned to fp->handle. The relationships between these objects:
(Figure: the local binder object, the binder_node in the source process's red-black tree, and the binder_ref in the target process's tree — original image not preserved.)
At this point it should be clear how a handle finds its way back to the corresponding local object.
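A simplified, hypothetical model of that descriptor assignment (not driver code; a std::map stands in for the red-black tree):

#include <cstdint>
#include <cstdio>
#include <map>

struct binder_node { int debug_id; };

struct binder_proc {
    std::map<uint32_t, binder_node *> refs_by_desc;  // per-process ref table

    // Modeled on binder_get_ref_for_node: reuse an existing reference, or
    // create one with the smallest descriptor unused in *this* process.
    uint32_t get_ref_for_node(binder_node *node) {
        uint32_t desc = 1;                     // 0 is reserved for servicemanager
        for (auto &entry : refs_by_desc) {
            if (entry.second == node)
                return entry.first;            // existing reference: reuse it
            if (entry.first == desc)
                ++desc;                        // walk past descriptors in use
        }
        refs_by_desc[desc] = node;             // store in this proc's tree
        return desc;
    }
};

int main() {
    binder_node service{42}, other{7};
    binder_proc clientA, clientB;
    clientB.refs_by_desc[1] = &other;          // B already holds another ref
    printf("A sees handle %u, B sees handle %u\n",
           clientA.get_ref_for_node(&service),   // -> 1
           clientB.get_ref_for_node(&service));  // -> 2
    return 0;
}

The same node thus appears under a different handle in every process that references it.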

Back in binder_transaction: earlier we created two work items, t and tcomplete, and queued them on the corresponding todo lists — t is processed by the target process (or thread), tcomplete by the source. Let's look at how the target process handles t in binder_thread_read:

    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;
......
        switch (w->type) {
        case BINDER_WORK_TRANSACTION: {
            t = container_of(w, struct binder_transaction, work);
        } break;
......
        if (t->buffer->target_node) {
            struct binder_node *target_node = t->buffer->target_node;

            tr.target.ptr = target_node->ptr;
            tr.cookie =  target_node->cookie;
            t->saved_priority = task_nice(current);
            if (t->priority < target_node->min_priority &&
                !(t->flags & TF_ONE_WAY))
                binder_set_nice(t->priority);
            else if (!(t->flags & TF_ONE_WAY) ||
                 t->saved_priority > target_node->min_priority)
                binder_set_nice(target_node->min_priority);
            cmd = BR_TRANSACTION;
        } else {
            tr.target.ptr = 0;
            tr.cookie = 0;
            cmd = BR_REPLY;
        }
        tr.code = t->code;
        tr.flags = t->flags;
        tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);

        if (t->from) {
            struct task_struct *sender = t->from->proc->tsk;

            tr.sender_pid = task_tgid_nr_ns(sender,
                            task_active_pid_ns(current));
        } else {
            tr.sender_pid = 0;
        }

        tr.data_size = t->buffer->data_size;
        tr.offsets_size = t->buffer->offsets_size;
        tr.data.ptr.buffer = (binder_uintptr_t)(
                    (uintptr_t)t->buffer->data +
                    proc->user_buffer_offset);
        tr.data.ptr.offsets = tr.data.ptr.buffer +
                    ALIGN(t->buffer->data_size,
                        sizeof(void *));

        if (put_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        if (copy_to_user(ptr, &tr, sizeof(tr)))
            return -EFAULT;
        ptr += sizeof(tr);

        trace_binder_transaction_received(t);
        binder_stat_br(proc, thread, cmd);
        binder_debug(BINDER_DEBUG_TRANSACTION,
                 "%d:%d %s %d %d:%d, cmd %d size %zd-%zd ptr %016llx-%016llx\n",
                 proc->pid, thread->pid,
                 (cmd == BR_TRANSACTION) ? "BR_TRANSACTION" :
                 "BR_REPLY",
                 t->debug_id, t->from ? t->from->proc->pid : 0,
                 t->from ? t->from->pid : 0, cmd,
                 t->buffer->data_size, t->buffer->offsets_size,
                 (u64)tr.data.ptr.buffer, (u64)tr.data.ptr.offsets);

        list_del(&t->work.entry);
        t->buffer->allow_user_free = 1;
        if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
            t->to_parent = thread->transaction_stack;
            t->to_thread = thread;
            thread->transaction_stack = t;
        } else {
            t->buffer->transaction = NULL;
            kfree(t);
            binder_stats_deleted(BINDER_STAT_TRANSACTION);
        }
        break;
    }

done:
    *consumed = ptr - buffer;
    if (proc->requested_threads + proc->ready_threads == 0 &&
        proc->requested_threads_started < proc->max_threads &&
        (thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
         BINDER_LOOPER_STATE_ENTERED)) /* the user-space code fails to */
         /*spawn a new thread if we leave this out */) {
        proc->requested_threads++;
        binder_debug(BINDER_DEBUG_THREADS,
                 "%d:%d BR_SPAWN_LOOPER\n",
                 proc->pid, thread->pid);
        if (put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer))
            return -EFAULT;
        binder_stat_br(proc, thread, BR_SPAWN_LOOPER);
    }
    return 0;

In the BINDER_WORK_TRANSACTION case, the transaction t is taken off the todo list and its fields are copied into a newly created binder_transaction_data tr. This struct should look familiar — it is exactly what the sender packed the Parcel into. Note that tr.data.ptr.buffer is assigned the target process's kernel buffer address plus user_buffer_offset, which is precisely the corresponding address in the target process's user space; tr.data.ptr.offsets likewise points at the offsets array. The cmd BR_TRANSACTION and the tr header are then written into the receiver's read buffer with put_user and copy_to_user; the payload itself reaches user space through the memory mapping, with no further copy.
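The address translation deserves spelling out. A minimal sketch, using the fact (recorded by the driver at mmap time) that the kernel and user mappings of a process's binder buffer lie a constant distance apart:

#include <cstdint>
#include <cstddef>

// binder_mmap records, per process:
//     proc->user_buffer_offset = vma->vm_start - proc->buffer;
// Both mappings cover the same physical pages, so adding that offset to a
// kernel address yields the receiver's user-space view of the same bytes.
uintptr_t to_user_address(uintptr_t kernel_buffer_addr, ptrdiff_t user_buffer_offset) {
    return kernel_buffer_addr + user_buffer_offset;  // what tr.data.ptr.buffer gets
}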

4. servicemanager's binder_thread_read is woken up and its user-space read buffer now contains the data, so its binder_loop continues. The main work happens in svcmgr_handler:

......
    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = do_find_service(s, len, txn->sender_euid, txn->sender_pid);
        if (!handle)
            break;
        bio_put_ref(reply, handle);
        return 0;

    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = bio_get_ref(msg);
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        if (do_add_service(bs, s, len, handle, txn->sender_euid,
            allow_isolated, txn->sender_pid))
            return -1;
        break;
......

SVC_MGR_CHECK_SERVICE returns the handle for a given name; SVC_MGR_ADD_SERVICE saves the handle, the service name, and related information into the svclist.
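As a rough model of what that registry holds, here is a simplified stand-in (the real svclist is a C linked list of svcinfo nodes and also tracks death notifications); in essence it maps a service name to the handle the driver assigned to servicemanager's reference:

#include <cstdint>
#include <map>
#include <string>

static std::map<std::u16string, uint32_t> svclist;

void add_service(const std::u16string &name, uint32_t handle) {
    svclist[name] = handle;                       // SVC_MGR_ADD_SERVICE path
}

uint32_t find_service(const std::u16string &name) {
    auto it = svclist.find(name);
    return it == svclist.end() ? 0 : it->second;  // SVC_MGR_CHECK_SERVICE path
}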
With this we have a complete model of Binder communication (the colors in the figure matched the corresponding pieces across layers). getService follows essentially the same path, so it is not analyzed separately here.
(Figure: end-to-end Binder communication model spanning client, driver, and servicemanager — original image not preserved.)

Summary:
1. The handle in the client's BpBinder is assigned by the binder driver, and for the same service it takes a different value in each process.
For a given service, the handle held by servicemanager is not the same one a client obtains; when the client looks the service up, the driver translates the handle.
This is because addService creates a binder node for the service: the node records the weak-reference address of the local Binder and is stored in the red-black tree of the source process proc (the process hosting the service).
The driver also creates a reference object that points at the node, generates a handle value for it, and stores the reference in the red-black tree of the target process proc (the servicemanager process).
That handle value is what gets handed to servicemanager for safekeeping.
On getService, servicemanager returns the stored handle to the driver; after receiving BC_REPLY, binder_transaction looks up the reference object from servicemanager's proc and that handle,
then uses the client's proc and the reference's node to create a new reference object with a new handle, stored in the client process's red-black tree.
So the handle the client finally receives is private to that process, while every process's handle resolves to the same node.

2. A binder call copies the data only once on its way from the client to the server.
Binder relies on memory mapping: the handoff between a process and the kernel works because two blocks of virtual memory are mapped onto the same physical memory, so the only real copy of the payload (the buffer) is from the source process's user-space buffer into the kernel buffer.
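The driver sets this up in binder_mmap, but the underlying idea can be demonstrated in plain user space. The following standalone Linux demo (not Binder code) maps one physical buffer at two virtual addresses; a single write through one mapping is immediately readable through the other:

#include <cstdio>
#include <cstring>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main() {
    int fd = (int)syscall(SYS_memfd_create, "demo", 0);  // anonymous memory file
    ftruncate(fd, 4096);

    // Two independent virtual mappings of the same physical page.
    char *sender   = (char *)mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    char *receiver = (char *)mmap(nullptr, 4096, PROT_READ, MAP_SHARED, fd, 0);

    strcpy(sender, "one copy, two views");           // the single copy ...
    printf("%p: %s\n", (void *)receiver, receiver);  // ... visible at the other address
    return 0;
}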

3. servicemanager manages all registered services. Once a client has obtained the proxy for the service it needs, it talks to the service through that proxy directly and servicemanager is no longer involved.

4. Binder calls can be synchronous or asynchronous. In the asynchronous (oneway) case, the client returns as soon as the data has reached kernel space; it does not wait for a reply.
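A sketch of what an asynchronous call looks like on the proxy side, in the style of the addService code above — IFooService and the transaction code are made-up names; IBinder::FLAG_ONEWAY (TF_ONE_WAY on the kernel side) is the real flag:

// Passing FLAG_ONEWAY means binder_transaction sets t->from = NULL (see the
// kernel excerpt above) and transact returns once the command is queued;
// no reply Parcel is awaited.
void fireAndForget(const sp<IBinder>& remote)
{
    Parcel data;
    data.writeInterfaceToken(String16("IFooService"));  // hypothetical interface
    remote->transact(IBinder::FIRST_CALL_TRANSACTION,   // stand-in transaction code
                     data, NULL, IBinder::FLAG_ONEWAY);
}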
