Binder Communication Notes (native)
- Overview
- Service registration process
- Opening the binder driver
- Obtaining the service_manager proxy
- addService
- Binder serialization
- Reading and writing the binder driver
- Communication between the Service and the binder driver
- Starting threads
- The listening loop
- Reading the binder driver
- Parsing binder commands
- Parsing service request commands
- Client calling a Service through binder
- References
Overview
In Android, the main IPC mechanism is Binder. Binder communication follows a standard client/server (C/S) architecture. To make it easier to understand, we can compare Binder communication with HTTP communication over a network.
- The Binder driver is analogous to the network driver.
- BpBinder is like the client-side networking library. It holds an mHandle, an int; this handle plays the role of an IP address: the driver uses it to know which server the client wants to reach.
- BBinder is like the server-side networking framework. When the server starts its service, it is registered with the Binder driver.
- IMediaPlayerService is the interface protocol agreed on by both ends, defined as a set of pure virtual methods (a hypothetical sketch of such an interface follows this list).
- BpMediaPlayerService is the client-side business proxy. It converts each method call into a corresponding command code (cmd) and writes it into the binder driver.
- BnMediaPlayerService is the server-side business implementation. After a message is read from the driver, the cmd field decides which concrete implementation gets invoked.
- service_manager acts like a DNS server. Every named Binder service must register itself with service_manager via addService(). Other processes that need a service can then obtain a reference to it, and its handle value, by name. When service_manager starts, it registers itself with the Binder driver as the special service with handle 0, so other processes can call into service_manager directly through a reference whose handle is 0.
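To make the analogy concrete, here is a hypothetical interface in the spirit of IMediaPlayerService. This is an illustration only, not AOSP code: the name IHelloService, its method, and its transaction code are made up. Its Bp/Bn pair is sketched in later sections.

```cpp
// Hypothetical interface "protocol" (illustration only, not AOSP code).
#include <binder/IInterface.h>
#include <utils/String8.h>

class IHelloService : public android::IInterface {
public:
    // asInterface()/descriptor boilerplate; this macro is discussed in the
    // asInterface section below.
    DECLARE_META_INTERFACE(HelloService);

    // The "cmd" code the client proxy will write into the driver.
    enum { HELLO = android::IBinder::FIRST_CALL_TRANSACTION };

    // The business method both sides agree on, a pure virtual function.
    virtual android::String8 hello(const android::String8& who) = 0;
};
```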
Next, using the media service as an example, let's look at how a Service is registered with the binder driver.
Service registration process
Main_mediaserver.cpp
```cpp
int main(int argc __unused, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());        // 1
    sp<IServiceManager> sm = defaultServiceManager();   // 2
    ALOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();
    MediaPlayerService::instantiate();                  // 3
    CameraService::instantiate();
#ifdef AUDIO_LISTEN_ENABLED
    ALOGI("ListenService instantiated");
    ListenService::instantiate();
#endif
    AudioPolicyService::instantiate();
    SoundTriggerHwService::instantiate();
    registerExtensions();
    ProcessState::self()->startThreadPool();            // 4
    IPCThreadState::self()->joinThreadPool();           // 5
}
```
At //1 we call ProcessState::self() to obtain a pointer to a ProcessState object and store it in proc.
Opening the binder driver
```cpp
sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != NULL) {
        return gProcess;
    }
    gProcess = new ProcessState;
    return gProcess;
}
```
self() creates a ProcessState object and stores it in gProcess, a global variable, so all code in a process shares a single ProcessState object. Let's look at the ProcessState constructor to understand what this class does.
```cpp
ProcessState::ProcessState()
    : mDriverFD(open_driver())
    , mVMStart(MAP_FAILED)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // mmap the binder, providing a chunk of virtual address space
        // to receive transactions.
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            ALOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
            close(mDriverFD);
            mDriverFD = -1;
        }
    }

    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened.  Terminating.");
}
```
In the constructor, mDriverFD is initialized via open_driver(), and the opened fd is then mmap'd to provide the buffer that receives transactions.
```cpp
static int open_driver()
{
    int fd = open("/dev/binder", O_RDWR);   // open the binder driver
    if (fd >= 0) {
        fcntl(fd, F_SETFD, FD_CLOEXEC);
        int vers = 0;
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
        if (result == -1) {
            ALOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
            close(fd);
            fd = -1;
        }
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
            ALOGE("Binder driver protocol does not match user space protocol!");
            close(fd);
            fd = -1;
        }
        size_t maxThreads = 15;
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);   // set the maximum number of binder threads
        if (result == -1) {
            ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
    } else {
        ALOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
    }
    return fd;
}
```
open_driver() opens /dev/binder, so from now on we can read from and write to the binder driver through mDriverFD.
Obtaining the service_manager proxy
Back in main(), at //2 we obtain a reference to the service_manager proxy via defaultServiceManager().
```cpp
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));   // obtain the service_manager proxy
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }

    return gDefaultServiceManager;
}
```
We call ProcessState::self()->getContextObject(NULL) to obtain service_manager. We already opened the binder driver inside ProcessState; now it is time to actually talk to it. Let's look at getContextObject():
```cpp
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);
}
```
Going one level deeper:
```cpp
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one.  See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                // Special case for context manager...
                // The context manager is the only object for which we create
                // a BpBinder proxy without already holding a reference.
                // Perform a dummy transaction to ensure the context manager
                // is registered before we create the first local reference
                // to it (which will occur when creating the BpBinder).
                // If a local reference is created for the BpBinder when the
                // context manager is not present, the driver will fail to
                // provide a reference to the context manager, but the
                // driver API does not return status.
                //
                // Note that this is not race-free if the context manager
                // dies while this code runs.
                //
                // TODO: add a driver API to wait for context manager, or
                // stop special casing handle 0 for context manager and add
                // a driver API to get a handle to the context manager with
                // proper reference counting.
                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);   // ping service_manager to make sure it is still alive
                if (status == DEAD_OBJECT)
                    return NULL;
            }

            b = new BpBinder(handle);   // create the BpBinder
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}
```
The handle we passed in is 0, so a BpBinder(0) is created here and returned.
After returning, we are back in defaultServiceManager(), which is now effectively:
```cpp
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(BpBinder(0));   // obtain the service_manager proxy
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }

    return gDefaultServiceManager;
}
```
interface_cast<IServiceManager> is a template function:
```cpp
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}
```
So interface_cast<IServiceManager>() is simply IServiceManager::asInterface(). IServiceManager::asInterface() is generated by macros; expanded, the code looks like this:
```cpp
android::sp<IServiceManager> IServiceManager::asInterface(
        const android::sp<android::IBinder>& obj)
{
    android::sp<IServiceManager> intr;
    if (obj != NULL) {
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(IServiceManager::descriptor).get());
        if (intr == NULL) {
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}
```
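The macro pair that generates this code is DECLARE_META_INTERFACE / IMPLEMENT_META_INTERFACE from IInterface.h. As an assumed usage for the hypothetical IHelloService from the overview (a sketch, not AOSP code; the descriptor string is made up):

```cpp
// IHelloService.h -- inside the interface class (already shown in the overview):
//     DECLARE_META_INTERFACE(HelloService);

// IHelloService.cpp -- placed after the BpHelloService definition (sketched in a
// later section), because the generated asInterface() instantiates BpHelloService:
IMPLEMENT_META_INTERFACE(HelloService, "com.example.IHelloService");
```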
So in the end our defaultServiceManager() boils down to:
```cpp
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = new BpServiceManager(BpBinder(0));   // the service_manager proxy
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }

    return gDefaultServiceManager;
}
```
This gives us BpServiceManager, the proxy class for service_manager. Its class relationships are as follows.
BpServiceManager inherits from BpRefBase; at construction we hand it BpBinder(0), so BpServiceManager holds a BpBinder whose handle is 0.
BpServiceManager also inherits the IServiceManager interface and implements the proxy side of service_manager. Let's look at the concrete implementation through addService().
addService()
Back in the media main() function, step //3 is MediaPlayerService::instantiate(); this step registers the MediaPlayerService service with service_manager.
```cpp
void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}
```
Here MediaPlayerService inherits from BnMediaPlayerService, the local (server-side) Binder implementation. We have already analyzed defaultServiceManager() and obtained a BpServiceManager object, so let's continue with the implementation of addService().
```cpp
// in BpServiceManager
virtual status_t addService(const String16& name, const sp<IBinder>& service,
                            bool allowIsolated)
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);            // write MediaPlayerService into the serialized Parcel
    data.writeInt32(allowIsolated ? 1 : 0);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);   // call BpBinder::transact()
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}
```
Binder serialization
The service is written directly into a Parcel for serialization. Let's see how a Binder Service is represented in the serialized data.
```cpp
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}
```
Continuing:
```cpp
status_t flatten_binder(const sp<ProcessState>& /*proc*/,
    const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;

    obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    if (binder != NULL) {
        IBinder *local = binder->localBinder();
        if (!local) {
            BpBinder *proxy = binder->remoteBinder();
            if (proxy == NULL) {
                ALOGE("null proxy");
            }
            const int32_t handle = proxy ? proxy->handle() : 0;
            obj.type = BINDER_TYPE_HANDLE;
            obj.binder = 0; /* Don't pass uninitialized stack data to a remote process */
            obj.handle = handle;
            obj.cookie = 0;
        } else {
            obj.type = BINDER_TYPE_BINDER;
            obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
            obj.cookie = reinterpret_cast<uintptr_t>(local);
        }
    } else {
        obj.type = BINDER_TYPE_BINDER;
        obj.binder = 0;
        obj.cookie = 0;
    }

    return finish_flatten_binder(binder, obj, out);
}
```
Here the binder being passed in is our MediaPlayerService, which (through BnMediaPlayerService) ultimately inherits from BBinder. The class relationships matter here.
BBinder overrides the localBinder() function declared in IBinder:
```cpp
BBinder* BBinder::localBinder()
{
    return this;
}
```
So the address of the BBinder is stored in obj, which is a flat_binder_object struct.
```cpp
struct flat_binder_object {
    __u32               type;
    __u32               flags;
    union {
        binder_uintptr_t binder;
        __u32            handle;
    };
    binder_uintptr_t    cookie;
};
```
This struct is the serialized form of a binder. When the binder is a local BBinder, its pointers are stored in binder and cookie; when it is a proxy BpBinder, the handle it holds is stored in handle.
Reading and writing the binder driver
At this point the BnMediaPlayerService has been written into the Parcel serialization class. The next step is remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
remote() is a method BpServiceManager inherits from BpRefBase; it simply returns mRemote, which was initialized with the BpBinder(0) we created earlier when obtaining the service_manager proxy.
```cpp
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}
```
IPCThreadState::self() returns a thread-local instance: each thread owns exactly one IPCThreadState. A simplified sketch of the idea is shown below, after which we look at its transact() implementation.
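This sketch assumes the usual pthread thread-local-storage pattern; the real code in IPCThreadState.cpp additionally handles lazy key creation and process shutdown, and the function name here is illustrative only.

```cpp
#include <pthread.h>

static pthread_key_t  gTLSKey;
static pthread_once_t gKeyOnce = PTHREAD_ONCE_INIT;

static void makeKey() { pthread_key_create(&gTLSKey, NULL); }

// Illustrative stand-in for IPCThreadState::self().
IPCThreadState* selfSketch()
{
    pthread_once(&gKeyOnce, makeKey);
    IPCThreadState* st = static_cast<IPCThreadState*>(pthread_getspecific(gTLSKey));
    if (st == NULL) {
        st = new IPCThreadState();          // first call on this thread
        pthread_setspecific(gTLSKey, st);   // remembered for later calls on the same thread
    }
    return st;
}
```

With that in mind, here is its transact() implementation: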
```cpp
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    //...
    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);   // write data into mOut
    }
    //...
    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            err = waitForResponse(reply);          // write to / read from the binder driver
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);     // write to / read from the binder driver
        }
    } else {
        err = waitForResponse(NULL, NULL);         // write to / read from the binder driver
    }

    return err;
}
```
First, writeTransactionData() is executed:
```cpp
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }

    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}
```
The handle, the serialized binder data, and the other transaction fields are written into mOut. Then waitForResponse() is called:
```cpp
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = mIn.readInt32();

        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;

        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;

        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;

        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}
```
Inside the while loop it goes straight to talkWithDriver():
```cpp
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    IF_LOG_COMMANDS() {
        TextOutput::Bundle _b(alog);
        if (outAvail != 0) {
            alog << "Sending commands to driver: " << indent;
            const void* cmds = (const void*)bwr.write_buffer;
            const void* end = ((const uint8_t*)cmds)+bwr.write_size;
            alog << HexDump(cmds, bwr.write_size) << endl;
            while (cmds < end) cmds = printCommand(alog, cmds);
            alog << dedent;
        }
        alog << "Size of receive buffer: " << bwr.read_size
            << ", needRead: " << needRead << ", doReceive: " << doReceive << endl;
    }

    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(HAVE_ANDROID_OS)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)   // the actual system call
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);

    IF_LOG_COMMANDS() {
        alog << "Our err: " << (void*)(intptr_t)err << ", write consumed: "
            << bwr.write_consumed << " (of " << mOut.dataSize()
            << "), read consumed: " << bwr.read_consumed << endl;
    }

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        IF_LOG_COMMANDS() {
            TextOutput::Bundle _b(alog);
            alog << "Remaining data size: " << mOut.dataSize() << endl;
            alog << "Received commands from driver: " << indent;
            const void* cmds = mIn.data();
            const void* end = mIn.data() + mIn.dataSize();
            alog << HexDump(cmds, mIn.dataSize()) << endl;
            while (cmds < end) cmds = printReturnCommand(alog, cmds);
            alog << dedent;
        }
        return NO_ERROR;
    }

    return err;
}
```
talkWithDriver() performs the system call ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr). This call first writes to and then reads from /dev/binder, and it blocks. The read/write buffers are described by bwr, a binder_write_read struct.
```cpp
struct binder_write_read {
    binder_size_t    write_size;
    binder_size_t    write_consumed;
    binder_uintptr_t write_buffer;
    binder_size_t    read_size;
    binder_size_t    read_consumed;
    binder_uintptr_t read_buffer;
};
```
write_size is the size of the write buffer; the buffer pointer is stored in write_buffer and here points at mOut's data.
read_size is the size of the read buffer; the buffer pointer is stored in read_buffer and here points at mIn's data.
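Distilled from talkWithDriver() above, the essential sequence looks like this (a simplified sketch; the doReceive/needRead logic, logging, and error handling are omitted):

```cpp
binder_write_read bwr;
bwr.write_size     = mOut.dataSize();          // commands we want the driver to process (may be 0)
bwr.write_buffer   = (uintptr_t)mOut.data();
bwr.write_consumed = 0;
bwr.read_size      = mIn.dataCapacity();       // room for whatever the driver hands back (may be 0)
bwr.read_buffer    = (uintptr_t)mIn.data();
bwr.read_consumed  = 0;

if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0) {
    // bwr.write_consumed: how much of mOut the driver consumed
    // bwr.read_consumed:  how much reply/command data it placed into mIn
}
```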
With this, MediaPlayerService has been registered with service_manager through the binder driver. When a client wants to use the service, it fetches the server's proxy from service_manager by the name "media.player" and can then use the service's functionality.
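On the client side, the lookup follows the standard pattern (error handling omitted):

```cpp
sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder = sm->getService(String16("media.player"));                  // look the service up by name
sp<IMediaPlayerService> player = interface_cast<IMediaPlayerService>(binder);   // wrap the returned handle in a proxy
```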
Communication between the Service and the binder driver
In a C/S architecture, the server side must keep listening so that client requests can be handled promptly. Binder is no exception: the binder service side loops listening on the binder driver, and when data arrives it reads it out, parses it, and runs the requested service code. Let's look at the media main() function again to see how this is implemented.
Starting threads
```cpp
int main(int argc __unused, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());        // 1
    sp<IServiceManager> sm = defaultServiceManager();   // 2
    ALOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();
    MediaPlayerService::instantiate();                  // 3
    CameraService::instantiate();
#ifdef AUDIO_LISTEN_ENABLED
    ALOGI("ListenService instantiated");
    ListenService::instantiate();
#endif
    AudioPolicyService::instantiate();
    SoundTriggerHwService::instantiate();
    registerExtensions();
    ProcessState::self()->startThreadPool();            // 4
    IPCThreadState::self()->joinThreadPool();           // 5
}
```
In the previous section we walked through steps 1, 2, and 3; after step 3, MediaPlayerService has registered itself with service_manager. Now let's look at what 4 and 5 do.
```cpp
void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}
```
It sets a flag and then calls spawnPooledThread():
```cpp
void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        String8 name = makeBinderThreadName();
        ALOGV("Spawning new pooled thread, name=%s\n", name.string());
        sp<Thread> t = new PoolThread(isMain);
        t->run(name.string());
    }
}
```
This creates a PoolThread, which derives from Thread, Android's C++ thread class. Here is its implementation:
```cpp
class PoolThread : public Thread
{
public:
    PoolThread(bool isMain)
        : mIsMain(isMain)
    {
    }

protected:
    virtual bool threadLoop()
    {
        IPCThreadState::self()->joinThreadPool(mIsMain);
        return false;
    }

    const bool mIsMain;
};
```
The listening loop
Once the thread starts, threadLoop() runs IPCThreadState::self()->joinThreadPool(mIsMain). So at the end of main() there are two threads (the pooled thread and the main thread), and both end up in joinThreadPool(). Here is that function:
```cpp
void IPCThreadState::joinThreadPool(bool isMain)
{
    LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n",
        (void*)pthread_self(), getpid());

    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);

    // This thread may have been spawned by a thread that was in the background
    // scheduling group, so first we will make sure it is in the foreground
    // one to avoid performing an initial transaction in the background.
    set_sched_policy(mMyThreadId, SP_FOREGROUND);

    status_t result;
    do {
        processPendingDerefs();
        // now get the next command to be processed, waiting if necessary
        result = getAndExecuteCommand();

        if (result < NO_ERROR && result != TIMED_OUT
                && result != -ECONNREFUSED && result != -EBADF) {
            ALOGE("getAndExecuteCommand(fd=%d) returned unexpected error %d, aborting",
                  mProcess->mDriverFD, result);
            abort();
        }

        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if (result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);

    LOG_THREADPOOL("**** THREAD %p (PID %d) IS LEAVING THE THREAD POOL err=%p\n",
        (void*)pthread_self(), getpid(), (void*)result);

    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}
```
joinThreadPool() enters a do/while loop that repeatedly calls getAndExecuteCommand().
Reading the binder driver
```cpp
status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver();
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();
        IF_LOG_COMMANDS() {
            alog << "Processing top-level Command: "
                 << getReturnString(cmd) << endl;
        }

        result = executeCommand(cmd);

        // After executing the command, ensure that the thread is returned to the
        // foreground cgroup before rejoining the pool.  The driver takes care of
        // restoring the priority, but doesn't do anything with cgroups so we
        // need to take care of that here in userspace.  Note that we do make
        // sure to go in the foreground after executing a transaction, but
        // there are other callbacks into user code that could have changed
        // our group so we want to make absolutely sure it is put back.
        set_sched_policy(mMyThreadId, SP_FOREGROUND);
    }

    return result;
}
```
Parsing binder commands
getAndExecuteCommand() first calls talkWithDriver(), which we already analyzed: it reads from and writes to the binder driver. When a client request arrives, the request data is read out and executeCommand() is run:
```cpp
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch (cmd) {
    case BR_ERROR:
        result = mIn.readInt32();
        break;

    case BR_OK:
        break;

    case BR_ACQUIRE:
        // ...
        break;

    case BR_RELEASE:
        // ...
        break;

    case BR_INCREFS:
        // ...
        break;

    case BR_DECREFS:
        // ...
        break;

    case BR_ATTEMPT_ACQUIRE:
        // ...
        break;

    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));
            ALOG_ASSERT(result == NO_ERROR,
                "Not enough command data for brTRANSACTION");
            if (result != NO_ERROR) break;

            Parcel buffer;
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);

            const pid_t origPid = mCallingPid;
            const uid_t origUid = mCallingUid;
            const int32_t origStrictModePolicy = mStrictModePolicy;
            const int32_t origTransactionBinderFlags = mLastTransactionBinderFlags;

            mCallingPid = tr.sender_pid;
            mCallingUid = tr.sender_euid;
            mLastTransactionBinderFlags = tr.flags;

            int curPrio = getpriority(PRIO_PROCESS, mMyThreadId);
            if (gDisableBackgroundScheduling) {
                if (curPrio > ANDROID_PRIORITY_NORMAL) {
                    // We have inherited a reduced priority from the caller, but do not
                    // want to run in that state in this process.  The driver set our
                    // priority already (though not our scheduling class), so bounce
                    // it back to the default before invoking the transaction.
                    setpriority(PRIO_PROCESS, mMyThreadId, ANDROID_PRIORITY_NORMAL);
                }
            } else {
                if (curPrio >= ANDROID_PRIORITY_BACKGROUND) {
                    // We want to use the inherited priority from the caller.
                    // Ensure this thread is in the background scheduling class,
                    // since the driver won't modify scheduling classes for us.
                    // The scheduling group is reset to default by the caller
                    // once this method returns after the transaction is complete.
                    set_sched_policy(mMyThreadId, SP_BACKGROUND);
                }
            }

            //ALOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);

            Parcel reply;
            status_t error;
            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
                alog << "BR_TRANSACTION thr " << (void*)pthread_self()
                    << " / obj " << tr.target.ptr << " / code "
                    << TypeCode(tr.code) << ": " << indent << buffer
                    << dedent << endl
                    << "Data addr = "
                    << reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer)
                    << ", offsets addr="
                    << reinterpret_cast<const size_t*>(tr.data.ptr.offsets) << endl;
            }
            if (tr.target.ptr) {   // this is where the business request is dispatched
                sp<BBinder> b((BBinder*)tr.cookie);
                error = b->transact(tr.code, buffer, &reply, tr.flags);
            } else {
                error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
            }

            //ALOGI("<<<< TRANSACT from pid %d restore pid %d uid %d\n",
            //     mCallingPid, origPid, origUid);

            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                if (error < NO_ERROR) reply.setError(error);
                sendReply(reply, 0);
            } else {
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }

            mCallingPid = origPid;
            mCallingUid = origUid;
            mStrictModePolicy = origStrictModePolicy;
            mLastTransactionBinderFlags = origTransactionBinderFlags;

            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
                alog << "BC_REPLY thr " << (void*)pthread_self() << " / obj "
                    << tr.target.ptr << ": " << indent << reply << dedent << endl;
            }
        }
        break;

    case BR_DEAD_BINDER:
        {
            BpBinder *proxy = (BpBinder*)mIn.readPointer();
            proxy->sendObituary();
            mOut.writeInt32(BC_DEAD_BINDER_DONE);
            mOut.writePointer((uintptr_t)proxy);
        }
        break;

    case BR_CLEAR_DEATH_NOTIFICATION_DONE:
        {
            BpBinder *proxy = (BpBinder*)mIn.readPointer();
            proxy->getWeakRefs()->decWeak(proxy);
        }
        break;

    case BR_FINISHED:
        result = TIMED_OUT;
        break;

    case BR_NOOP:
        break;

    case BR_SPAWN_LOOPER:
        mProcess->spawnPooledThread(false);
        break;

    default:
        printf("*** BAD COMMAND %d received from Binder driver\n", cmd);
        result = UNKNOWN_ERROR;
        break;
    }

    if (result != NO_ERROR) {
        mLastError = result;
    }

    return result;
}
```
This is where the different requests are handled. When the incoming request is a client transaction, it reaches the BR_TRANSACTION case. Note the line sp<BBinder> b((BBinder*)tr.cookie): this cookie is a value read back from the binder driver. In the previous section, registering the service wrote the BnMediaPlayerService pointer into the driver, so if the current request targets the MediaPlayerService service, cookie is the pointer to that service object. b->transact(tr.code, buffer, &reply, tr.flags) is then executed. Here is BBinder::transact():
```cpp
status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            err = onTransact(code, data, reply, flags);
            break;
    }

    if (reply != NULL) {
        reply->setDataPosition(0);
    }

    return err;
}
```
Parsing service request commands
This then calls onTransact(), a virtual function that each Service overrides. For MediaPlayerService the implementation lives in IMediaPlayerService.cpp.
```cpp
// ----------------------------------------------------------------------

status_t BnMediaPlayerService::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch (code) {
        case CREATE: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<IMediaPlayerClient> client =
                interface_cast<IMediaPlayerClient>(data.readStrongBinder());
            int audioSessionId = data.readInt32();
            sp<IMediaPlayer> player = create(client, audioSessionId);
            reply->writeStrongBinder(player->asBinder());
            return NO_ERROR;
        } break;
        case DECODE_URL: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<IMediaHTTPService> httpService;
            if (data.readInt32()) {
                httpService =
                    interface_cast<IMediaHTTPService>(data.readStrongBinder());
            }
            const char* url = data.readCString();
            sp<IMemoryHeap> heap = interface_cast<IMemoryHeap>(data.readStrongBinder());
            uint32_t sampleRate;
            int numChannels;
            audio_format_t format;
            size_t size;
            status_t status =
                decode(httpService, url, &sampleRate, &numChannels, &format,
                       heap, &size);
            reply->writeInt32(status);
            if (status == NO_ERROR) {
                reply->writeInt32(sampleRate);
                reply->writeInt32(numChannels);
                reply->writeInt32((int32_t)format);
                reply->writeInt32((int32_t)size);
            }
            return NO_ERROR;
        } break;
        case DECODE_FD: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            int fd = dup(data.readFileDescriptor());
            int64_t offset = data.readInt64();
            int64_t length = data.readInt64();
            sp<IMemoryHeap> heap = interface_cast<IMemoryHeap>(data.readStrongBinder());
            uint32_t sampleRate;
            int numChannels;
            audio_format_t format;
            size_t size;
            status_t status = decode(fd, offset, length, &sampleRate, &numChannels,
                                     &format, heap, &size);
            reply->writeInt32(status);
            if (status == NO_ERROR) {
                reply->writeInt32(sampleRate);
                reply->writeInt32(numChannels);
                reply->writeInt32((int32_t)format);
                reply->writeInt32((int32_t)size);
            }
            return NO_ERROR;
        } break;
        case CREATE_MEDIA_RECORDER: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<IMediaRecorder> recorder = createMediaRecorder();
            reply->writeStrongBinder(recorder->asBinder());
            return NO_ERROR;
        } break;
        case CREATE_METADATA_RETRIEVER: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<IMediaMetadataRetriever> retriever = createMetadataRetriever();
            reply->writeStrongBinder(retriever->asBinder());
            return NO_ERROR;
        } break;
        case GET_OMX: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<IOMX> omx = getOMX();
            reply->writeStrongBinder(omx->asBinder());
            return NO_ERROR;
        } break;
        case MAKE_CRYPTO: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<ICrypto> crypto = makeCrypto();
            reply->writeStrongBinder(crypto->asBinder());
            return NO_ERROR;
        } break;
        case MAKE_DRM: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<IDrm> drm = makeDrm();
            reply->writeStrongBinder(drm->asBinder());
            return NO_ERROR;
        } break;
        case MAKE_HDCP: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            bool createEncryptionModule = data.readInt32();
            sp<IHDCP> hdcp = makeHDCP(createEncryptionModule);
            reply->writeStrongBinder(hdcp->asBinder());
            return NO_ERROR;
        } break;
        case ADD_BATTERY_DATA: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            uint32_t params = data.readInt32();
            addBatteryData(params);
            return NO_ERROR;
        } break;
        case PULL_BATTERY_DATA: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            pullBatteryData(reply);
            return NO_ERROR;
        } break;
        case LISTEN_FOR_REMOTE_DISPLAY: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<IRemoteDisplayClient> client(
                interface_cast<IRemoteDisplayClient>(data.readStrongBinder()));
            String8 iface(data.readString8());
            sp<IRemoteDisplay> display(listenForRemoteDisplay(client, iface));
            reply->writeStrongBinder(display->asBinder());
            return NO_ERROR;
        } break;
        case GET_CODEC_LIST: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<IMediaCodecList> mcl = getCodecList();
            reply->writeStrongBinder(mcl->asBinder());
            return NO_ERROR;
        } break;
        default:
            return BBinder::onTransact(code, data, reply, flags);
    }
}
```
When the client calls different interface methods, the corresponding cmd codes are sent, and the server dispatches on that cmd to the matching concrete implementation.
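Putting the two halves of this dispatch side by side, here is a hypothetical Bp/Bn pair for the IHelloService sketched in the overview. This is an assumption-laden illustration, not AOSP code: the class names, method, and transaction code are made up, while BpInterface, BnInterface, Parcel, and CHECK_INTERFACE are the real AOSP building blocks used by BnMediaPlayerService above.

```cpp
#include <binder/IInterface.h>
#include <binder/Parcel.h>

using namespace android;

// Client-side proxy: turns a method call into a cmd code plus a Parcel and
// hands it to remote(), i.e. the BpBinder holding the handle.
class BpHelloService : public BpInterface<IHelloService> {
public:
    BpHelloService(const sp<IBinder>& impl) : BpInterface<IHelloService>(impl) {}

    virtual String8 hello(const String8& who) {
        Parcel data, reply;
        data.writeInterfaceToken(IHelloService::getInterfaceDescriptor());
        data.writeString8(who);
        remote()->transact(HELLO, data, &reply);   // ends up in BpBinder::transact(); error handling omitted
        return reply.readString8();
    }
};

// Server-side stub: decodes the cmd code and the Parcel, then calls the real
// implementation (a concrete class derived from BnHelloService implements hello()).
class BnHelloService : public BnInterface<IHelloService> {
protected:
    virtual status_t onTransact(uint32_t code, const Parcel& data,
                                Parcel* reply, uint32_t flags) {
        switch (code) {
            case HELLO: {
                CHECK_INTERFACE(IHelloService, data, reply);
                reply->writeString8(hello(data.readString8()));
                return NO_ERROR;
            }
            default:
                return BBinder::onTransact(code, data, reply, flags);
        }
    }
};
```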
That completes our analysis of a full binder implementation for a Service. To summarize (a minimal skeleton tying the steps together follows this list):
- First obtain the service_manager proxy. service_manager is a special binder service that registers itself with the binder driver as the context manager; other clients simply send requests to the driver with handle = 0 and the driver forwards them to service_manager.
- With the service_manager proxy in hand, we register ourselves with the binder driver and with service_manager. The driver generates an integer handle and passes a reference to our Service into service_manager, so other processes can find our service by name.
- After registration, the Service starts threads that listen on the binder driver. When data is read out, the target object carried in the data (the service's address) and the transaction code (cmd) determine which service function gets executed.
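As a minimal skeleton, here is how the hypothetical HelloService from the sketches above could be brought up as a server. It mirrors the mediaserver main() walked through earlier; the HelloService class and the service name "hello.service" are assumptions, not AOSP code.

```cpp
#include <binder/ProcessState.h>
#include <binder/IPCThreadState.h>
#include <binder/IServiceManager.h>
#include <utils/String16.h>

using namespace android;

// Concrete service: derives from the BnHelloService stub sketched above and
// implements the business method.
class HelloService : public BnHelloService {
public:
    virtual String8 hello(const String8& who) {
        return String8("hello, ") + who;
    }
};

int main()
{
    sp<ProcessState> proc(ProcessState::self());       // open /dev/binder for this process
    defaultServiceManager()->addService(                // register with service_manager by name
            String16("hello.service"), new HelloService());
    ProcessState::self()->startThreadPool();           // spawn a binder looper thread
    IPCThreadState::self()->joinThreadPool();          // main thread also loops on the driver
    return 0;
}
```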
Client calling a Service through binder
In the analysis so far, MediaPlayerService actually played both roles. During addService() it acted as a client of service_manager and used service_manager's service. For any other binder client, communication works essentially the same way as with service_manager; the only difference is in getService(): the service_manager proxy is built directly from a BpBinder with handle = 0, whereas for an ordinary service the handle value is read back from the binder driver. In this section we analyze ServiceManager's getService() path.
```cpp
virtual sp<IBinder> getService(const String16& name) const
{
    unsigned n;
    for (n = 0; n < 5; n++){
        sp<IBinder> svc = checkService(name);
        if (svc != NULL) return svc;
        ALOGI("Waiting for service %s...\n", String8(name).string());
        sleep(1);
    }
    return NULL;
}
```
It simply calls checkService():
```cpp
virtual sp<IBinder> checkService(const String16& name) const
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
    return reply.readStrongBinder();
}
```
remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply) was analyzed in the previous section: it reads data from the binder driver into reply. Next, let's look at readStrongBinder():
```cpp
sp<IBinder> Parcel::readStrongBinder() const
{
    sp<IBinder> val;
    unflatten_binder(ProcessState::self(), *this, &val);
    return val;
}
```
unflatten_binder() turns the serialized binder back into an object:
```cpp
status_t unflatten_binder(const sp<ProcessState>& proc,
    const Parcel& in, sp<IBinder>* out)
{
    const flat_binder_object* flat = in.readObject(false);

    if (flat) {
        switch (flat->type) {
            case BINDER_TYPE_BINDER:
                *out = reinterpret_cast<IBinder*>(flat->cookie);
                return finish_unflatten_binder(NULL, *flat, in);
            case BINDER_TYPE_HANDLE:
                *out = proc->getStrongProxyForHandle(flat->handle);
                return finish_unflatten_binder(
                    static_cast<BpBinder*>(out->get()), *flat, in);
        }
    }
    return BAD_TYPE;
}
```
This calls getStrongProxyForHandle(), which we already met in the previous section when obtaining the service_manager proxy; there we passed 0 as the handle, whereas here the handle is the value read back from the binder driver.
```cpp
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                    return NULL;
            }

            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}
```
It simply returns a BpBinder(handle) to getService(). The client then wraps this BpBinder in the corresponding BpXXXService and can talk to the Service side.
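For the hypothetical HelloService sketched earlier, the client side would look like this (an assumption-based sketch, not AOSP code); interface_cast wraps the BpBinder returned by getService() into a BpHelloService:

```cpp
#include <binder/IServiceManager.h>
#include <utils/String16.h>
#include <stdio.h>

using namespace android;

int main()
{
    sp<IServiceManager> sm = defaultServiceManager();
    sp<IBinder> binder = sm->getService(String16("hello.service"));   // BpBinder(handle) read back from the driver
    if (binder == NULL) return -1;

    sp<IHelloService> svc = interface_cast<IHelloService>(binder);    // wraps it in a BpHelloService
    String8 greeting = svc->hello(String8("binder"));                 // a BC_TRANSACTION under the hood
    printf("%s\n", greeting.string());
    return 0;
}
```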
And with that, we have walked through the native Binder implementation.
References
http://blog.csdn.net/universus/article/details/6211589
http://blog.csdn.net/luoshengyang/article/details/6618363
http://blog.csdn.net/innost/article/details/47208049