A brief look at how an Android system service registers a Service and how an app obtains and calls it, with PMS (PowerManagerService) as the example

Source: Internet · Editor: 程序博客网 · Date: 2024/05/19 13:45
I. Startup of the Binder driver and the ServiceManager (SM) daemon

1. Binder filesystem nodes
  1) /sys/kernel/debug/binder/proc
     Every process that uses Binder IPC has a file here named after its PID. From these files you can read each process's Binder thread pool, Binder entity objects, reference objects, kernel buffer, and so on.
  2) /sys/kernel/debug/binder/{state, stats, transactions, transaction_log, failed_transaction_log}
     These expose the driver's runtime state: call counts for each command/return protocol, log records, and the processes currently performing IPC.
  3) /dev/binder
     The device file. A process that wants to use Binder IPC must open() this file first.
  4) debugfs
     Mounted at /sys/kernel/debug during kernel boot.

2. Key kernel data structures
  1) binder_proc
     Describes a process that is using Binder IPC. One is created whenever a process calls open("/dev/binder"), and it is added to the global hash list binder_procs. It stores the task_struct pointer, PID, priority, and:
       .threads — red-black tree of the binder_thread structures for all Binder threads of this process, keyed by thread PID
       .nodes — red-black tree of all binder_node objects (Binder entities) hosted by this process
       .default_priority — the process's default priority
       .ready_threads — count of this process's idle Binder threads
     binder_thread:
       .transaction_stack — the thread's transaction stack; non-empty means the thread is waiting for another thread to complete a transaction
       .todo — pending work list; non-empty means the thread has unprocessed work items
     Only when transaction_stack and todo are both empty may a thread pick up pending work items from its owning process's todo queue; otherwise it must first handle its own transactions and work items.
     binder_node is the Binder entity object.
  2) vm_area_struct — a contiguous range of the process's user address space; describes the user-space region to be mapped (read-only here).
  3) vm_struct — a contiguous range of the process's kernel address space (the kernel-space counterpart of vm_area_struct). The first 896 MB of physical memory are linearly mapped to 3G ~ 3G+896M; 3G+896M ~ 3G+896M+8M is a safety guard region (any pointer into it is invalid); vm_struct ranges are allocated above it.
  4) Management of the kernel buffer Binder allocates for a process
     The buffer the driver allocates for a user process is a set of physical pages mapped into both the process's kernel space and its user space, used by the kernel and by user space respectively. The binder_proc->free_buffers red-black tree stores pointers to the binder_buffer management structures; those structures, and the payloads they manage, live inside the kernel buffer itself.
     Usage: when Binder needs to pass a block of data to a user process, it first writes the data into the buffer through the kernel-space virtual address, then simply tells the process the buffer's user-space address.
     Benefit: the data never has to be copied from kernel space to user space, which raises transfer efficiency.
  5) When the buffer's physical pages are allocated and freed
     binder_mmap allocates the first 4 KB. When a process uses the command protocols BC_TRANSACTION / BC_REPLY to pass data to another process, the driver first copies the data into kernel space, then carves a small buffer out of the target process's pool to hold it; call path: binder_transaction -> binder_alloc_buf, which allocates further pages on demand. When a process finishes handling the return protocols BR_TRANSACTION / BR_REPLY that the driver delivered to it, it uses the command protocol BC_FREE_BUFFER to tell the driver to release the corresponding buffer; the driver calls binder_free_buf.
     Binder thread pool: every process that uses Binder IPC has one, used to service IPC requests; each pool thread holds an IPCThreadState object.
     handle_entry — corresponds to one Binder proxy object: .binder points to the proxy object, .refs to a weak-reference table.

3. Kernel code organization
  // drivers/staging/android/binder.h
  // drivers/staging/android/binder.c
  static const struct file_operations binder_fops = {
      .owner =
 THIS_MODULE,
      .poll = binder_poll,
      .unlocked_ioctl = binder_ioctl,
      .mmap = binder_mmap,
      .open = binder_open,
      .flush = binder_flush,
      .release = binder_release,
  };
  static struct miscdevice binder_miscdev = {
      .minor = MISC_DYNAMIC_MINOR,
      .name = "binder",
      .fops = &binder_fops
  };

4. Binder device driver initialization: binder_init
  // 1. Create the binder workqueue.
  // 2. Create the binder debugfs nodes:
  //    /sys/kernel/debug/binder/{proc, state, stats, transactions, transaction_log, failed_transaction_log}
  // 3. Register the /dev/binder device node; the operations on this device file are in binder_fops.
  // drivers/staging/android/binder.c
  static int __init binder_init(void)
  {
      int ret;
      // 1
      binder_deferred_workqueue = create_singlethread_workqueue("binder");
      if (!binder_deferred_workqueue)
          return -ENOMEM;
      // 2
      binder_debugfs_dir_entry_root = debugfs_create_dir("binder", NULL);
      if (binder_debugfs_dir_entry_root)
          binder_debugfs_dir_entry_proc = debugfs_create_dir("proc", binder_debugfs_dir_entry_root);
      // 3. Register /dev/binder.
      ret = misc_register(&binder_miscdev);
      if (binder_debugfs_dir_entry_root) {
          debugfs_create_file("state", S_IRUGO, binder_debugfs_dir_entry_root, NULL, &binder_state_fops);
          debugfs_create_file("stats", S_IRUGO, binder_debugfs_dir_entry_root, NULL, &binder_stats_fops);
          debugfs_create_file("transactions", S_IRUGO, binder_debugfs_dir_entry_root, NULL, &binder_transactions_fops);
          debugfs_create_file("transaction_log", S_IRUGO, binder_debugfs_dir_entry_root, &binder_transaction_log, &binder_transaction_log_fops);
          debugfs_create_file("failed_transaction_log", S_IRUGO, binder_debugfs_dir_entry_root, &binder_transaction_log_failed, &binder_transaction_log_fops);
      }
      return ret;
  }

5. Opening the /dev/binder device node: binder_open
  // 1. Create a binder_proc and initialize it:
  //      proc->tsk = current;
  //      proc->default_priority = task_nice(current);
  //      proc->pid = current->group_leader->pid;
  // 2. Add the binder_proc to the global hash list binder_procs:
  //      hlist_add_head(&proc->proc_node, &binder_procs);
  // 3. Store the binder_proc* pointer in the open file object:
  //      filp->private_data = proc;
  //    From now on, the process that opened /dev/binder can reach this binder_proc through
  //    the file descriptor that open() returned to it.
  // 4. Create the node file /sys/kernel/debug/binder/proc/<pid>.
  static int binder_open(struct inode *nodp, struct file *filp)
  {
      // 1. Create a binder_proc and initialize it.
      struct
 binder_proc *proc;
      binder_debug(BINDER_DEBUG_OPEN_CLOSE, "binder_open: %d:%d\n", current->group_leader->pid, current->pid);
      proc = kzalloc(sizeof(*proc), GFP_KERNEL);
      if (proc == NULL)
          return -ENOMEM;
      get_task_struct(current);
      proc->tsk = current;
      INIT_LIST_HEAD(&proc->todo);
      init_waitqueue_head(&proc->wait);
      proc->default_priority = task_nice(current);
      binder_lock(__func__);
      binder_stats_created(BINDER_STAT_PROC);
      // 2. Add the binder_proc to the global hash list binder_procs.
      hlist_add_head(&proc->proc_node, &binder_procs);
      proc->pid = current->group_leader->pid;
      INIT_LIST_HEAD(&proc->delivered_death);
      // 3. Store the binder_proc* pointer in the open file object.
      filp->private_data = proc;
      binder_unlock(__func__);
      // 4. Create the node file /sys/kernel/debug/binder/proc/<pid>.
      if (binder_debugfs_dir_entry_proc) {
          char strbuf[11];
          snprintf(strbuf, sizeof(strbuf), "%u", proc->pid);
          proc->debugfs_entry = debugfs_create_file(strbuf, S_IRUGO, binder_debugfs_dir_entry_proc, proc, &binder_proc_fops);
      }
      return 0;
  }

6. Mapping /dev/binder into the process address space: binder_mmap
  // The main purpose is to allocate the kernel buffer the process will use to carry IPC data.
  // 1) Memory used by a binder client process, in summary:
  //    User virtual address range: vma [vma->vm_start, vma->vm_end], chosen by the user process.
  //    Kernel virtual address range: [proc->buffer, proc->buffer + proc->buffer_size],
  //    allocated from the process's kernel space by get_vm_area and used during mapping.
  //    Physical page array: proc->pages[], with count = (vma->vm_end - vma->vm_start) / PAGE_SIZE
  //    elements, allocated by kzalloc(sizeof(proc->pages[0]) * count, GFP_KERNEL).
  //    Each array element stands for one physical page. Initially binder_mmap calls
  //    binder_update_page_range to allocate only the first page, and maps the virtual pages
  //    vma[index=0] and proc->buffer[index=0] onto the physical page proc->pages[index=0].
  // 2) The mapping relationships follow from the above.
  //
  // Overall execution:
  // 1. Fetch binder_proc *proc = filp->private_data.
  // 2. If [vma->vm_start, vma->vm_end] exceeds 4 MB, truncate it to 4 MB.
  // 3. Check that the user range to be mapped is not writable; it must stay read-only.
  // 4. Ensure the mapped range is not copied on fork and can never become writable.
  // 5. Ensure the process has not called mmap before (proc->buffer must still be NULL).
  // 6. Allocate an equally sized range in the process's kernel address space;
  //    proc->buffer and proc->buffer_size record the area's start address and size;
  //    proc->user_buffer_offset records the difference between the user-space start address
  //    and the kernel buffer address allocated for this process.
  // 7. Create the physical-page pointer array with (vma->vm_end - vma->vm_start) / PAGE_SIZE
  //    entries; each page of user virtual address space in vma gets one slot here.
  //    proc->pages is the array's first address; proc->buffer_size stores the managed size.
  // 8. Attach binder_vm_ops and proc to the vma.
  // 9. Allocate one physical page for the kernel virtual range [proc->buffer, proc->buffer + 4K],
  //    and map page index 0 of the user range and page index 0 of the kernel range onto it.
  // 10. On success, use a binder_buffer *buffer to manage the buffer. A binder_buffer-managed
  //     kernel buffer has two parts: a metadata block storing the binder_buffer itself, and
  //     the payload area binder_buffer.data[] (the effective size is computed from the
  //     neighboring buffers). All binder_buffer addresses are kept on the proc->free_buffers
  //     red-black tree.
  static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
  {
      int ret;
      struct vm_struct *area;
      // 1. Fetch the binder_proc.
      struct binder_proc *proc = filp->private_data;
      const char *failure_string;
      struct binder_buffer *buffer;
      // 2. If [vma->vm_start, vma->vm_end] exceeds 4 MB, truncate it to 4 MB.
      if ((vma->vm_end - vma->vm_start) > SZ_4M)
          vma->vm_end = vma->vm_start + SZ_4M;
      binder_debug(BINDER_DEBUG_OPEN_CLOSE, "binder_mmap: %d %lx-%lx (%ld K) vma %lx pagep %lx\n", proc->pid, vma->vm_start, vma->vm_end, (vma->vm_end - vma->vm_start) / SZ_1K, vma->vm_flags, (unsigned long)pgprot_val(vma->vm_page_prot));
      // 3. Reject a writable mapping; the user range must be read-only.
      if (vma->vm_flags & FORBIDDEN_MMAP_FLAGS) {
          ret = -EPERM;
          failure_string = "bad vm_flags";
          goto err_bad_arg;
      }
      // 4. The mapped range must not be copied on fork and may never become writable.
      vma->vm_flags = (vma->vm_flags | VM_DONTCOPY) & ~VM_MAYWRITE;
      // 5. Make sure mmap has not been called before.
      mutex_lock(&binder_mmap_lock);
      if (proc->buffer) {
          ret = -EBUSY;
          failure_string = "already mapped";
          goto err_already_mapped;
      }
      // 6. Allocate an equally sized range in the process's kernel address space.
      area = get_vm_area(vma->vm_end - vma->vm_start, VM_IOREMAP);
      if (area == NULL) {
          ret = -ENOMEM;
          failure_string = "get_vm_area";
          goto err_get_vm_area_failed;
      }
      proc->buffer = area->addr;
      proc->user_buffer_offset = vma->vm_start - (uintptr_t)proc->buffer;
      mutex_unlock(&binder_mmap_lock);
  #ifdef CONFIG_CPU_CACHE_VIPT
      if (cache_is_vipt_aliasing()) {
          while (CACHE_COLOUR((vma->vm_start ^ (uint32_t)proc->buffer))) {
              printk(KERN_INFO "binder_mmap: %d %lx-%lx maps %p bad alignment\n", proc->pid, vma->vm_start, vma->vm_end, proc->buffer);
              vma->vm_start += PAGE_SIZE;
          }
      }
  #endif
      // 7. Create the physical-page pointer array; one slot per user page in vma.
      proc->pages = kzalloc(sizeof(proc->pages[0]) * ((vma->vm_end - vma->vm_start) / PAGE_SIZE), GFP_KERNEL);
      if (proc->pages == NULL) {
          ret = -ENOMEM;
          failure_string = "alloc page array";
          goto err_alloc_pages_failed;
      }
      proc->buffer_size = vma->vm_end - vma->vm_start;
      // 8. Attach binder_vm_ops and proc to the vma.
      vma->vm_ops = &binder_vm_ops;
      vma->vm_private_data = proc;
      // 9. Allocate one physical page for the kernel virtual range [proc->buffer, proc->buffer + 4K].
      if (binder_update_page_range(proc, 1, proc->buffer, proc->buffer + PAGE_SIZE, vma)) {
          // binder_update_page_range(struct binder_proc *proc, int allocate,
          //     void *start, void *end,        // the kernel address range to operate on
          //     struct vm_area_struct *vma)    // the user address range to map
          void *page_addr;
          unsigned long user_page_addr;
          struct vm_struct tmp_area;
          struct page **page;
          struct mm_struct *mm;
          binder_debug(BINDER_DEBUG_BUFFER_ALLOC, "binder: %d: %s pages %p-%p\n", proc->pid, allocate ? "allocate" : "free", start, end);
          if (end <= start)
              return 0;
          trace_binder_update_page_range(proc, allocate, start, end);
          if (vma)
              mm = NULL;
          else
              mm = get_task_mm(proc->tsk);
          if (mm) {
              down_write(&mm->mmap_sem);
              vma = proc->vma;
              if (vma && mm != proc->vma_vm_mm) {
                  pr_err("binder: %d: vma mm and task mm mismatch\n", proc->pid);
                  vma = NULL;
              }
          }
          // Is this a request to free physical memory?
          if (allocate == 0)
              goto free_range;
          if (vma == NULL) {
              printk(KERN_ERR "binder: %d: binder_alloc_buf failed to map pages in userspace, no vma\n", proc->pid);
              goto err_no_vma;
          }
          // Allocate one physical page for each page in the kernel range [start, end).
          for (page_addr = start; page_addr < end; page_addr += PAGE_SIZE) {
              int ret;
              struct page **page_array_ptr;
              // Locate the matching slot (index 0 here, since only one page is allocated at
              // first) in the page array created in step 7.
              page = &proc->pages[(page_addr - proc->buffer) / PAGE_SIZE];
              BUG_ON(*page);
              // Allocate a physical page and store its address in proc->pages[index].
              *page = alloc_page(GFP_KERNEL | __GFP_ZERO);
              if (*page == NULL) {
                  printk(KERN_ERR "binder: %d: binder_alloc_buf failed for page at %p\n", proc->pid, page_addr);
                  goto
 err_alloc_page_failed;
              }
              // Map this physical page (index 0) into the kernel virtual range
              // [proc->buffer, proc->buffer + 4K].
              tmp_area.addr = page_addr;
              tmp_area.size = PAGE_SIZE + PAGE_SIZE /* guard page? */;
              page_array_ptr = page;
              ret = map_vm_area(&tmp_area, PAGE_KERNEL, &page_array_ptr); // mm/vmalloc.c
              if (ret) {
                  printk(KERN_ERR "binder: %d: binder_alloc_buf failed to map page at %p in kernel\n", proc->pid, page_addr);
                  goto err_map_kernel_failed;
              }
              // Map the same physical page into user space at the matching index.
              user_page_addr = (uintptr_t)page_addr + proc->user_buffer_offset;
              ret = vm_insert_page(vma, user_page_addr, page[0]);
              if (ret) {
                  printk(KERN_ERR "binder: %d: binder_alloc_buf failed to map page at %lx in userspace\n", proc->pid, user_page_addr);
                  goto err_vm_insert_page_failed;
              }
              /* vm_insert_page does not seem to increment the refcount */
          }
          return 0;
      free_range:
          for (page_addr = end - PAGE_SIZE; page_addr >= start; page_addr -= PAGE_SIZE) {
              page = &proc->pages[(page_addr - proc->buffer) / PAGE_SIZE];
              if (vma)
                  zap_page_range(vma, (uintptr_t)page_addr + proc->user_buffer_offset, PAGE_SIZE, NULL);
      err_vm_insert_page_failed:
              unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
      err_map_kernel_failed:
              __free_page(*page);
              *page = NULL;
      err_alloc_page_failed:
              ;
          }
      err_no_vma:
          if (mm) {
              up_write(&mm->mmap_sem);
              mmput(mm);
          }
          return -ENOMEM;
      }
      // back in binder_mmap: executed if binder_update_page_range above failed
      {
          ret = -ENOMEM;
          failure_string = "alloc small buf";
          goto err_alloc_small_buf_failed;
      }
      // 10. On success, describe the pages above with a binder_buffer *buffer.
      buffer = proc->buffer;
      INIT_LIST_HEAD(&proc->buffers);
      list_add(&buffer->entry, &proc->buffers);
      buffer->free = 1;
      binder_insert_free_buffer(proc, buffer);
      // binder_insert_free_buffer(struct binder_proc *proc, struct binder_buffer *new_buffer)
      {
          struct rb_node **p = &proc->free_buffers.rb_node;
          struct rb_node *parent = NULL;
          struct binder_buffer *buffer;
          size_t buffer_size;
          size_t new_buffer_size;
          BUG_ON(!new_buffer->free);
          new_buffer_size = binder_buffer_size(proc, new_buffer);
          binder_debug(BINDER_DEBUG_BUFFER_ALLOC, "binder: %d: add free buffer, size %zd, at %p\n", proc->pid, new_buffer_size, new_buffer);
          while
 (*p) {
              parent = *p;
              buffer = rb_entry(parent, struct binder_buffer, rb_node);
              BUG_ON(!buffer->free);
              buffer_size = binder_buffer_size(proc, buffer);
              if (new_buffer_size < buffer_size)
                  p = &parent->rb_left;
              else
                  p = &parent->rb_right;
          }
          rb_link_node(&new_buffer->rb_node, parent, p);
          rb_insert_color(&new_buffer->rb_node, &proc->free_buffers);
      }
      proc->free_async_space = proc->buffer_size / 2;
      barrier();
      proc->files = get_files_struct(proc->tsk);
      proc->vma = vma;
      proc->vma_vm_mm = vma->vm_mm;
      /*printk(KERN_INFO "binder_mmap: %d %lx-%lx maps %p\n", proc->pid, vma->vm_start, vma->vm_end, proc->buffer);*/
      return 0;
  }

7. Startup of the ServiceManager process: service_manager.main
  // 1. init launches the service configured in init.rc:
  /*
  service servicemanager /system/bin/servicemanager
      class core
      user system       // uid = system
      group system      // gid = system
      critical          // a critical system service; it must not exit while the system runs
      onrestart restart healthd         // if servicemanager restarts, the following restart
      onrestart restart zygote          // too: healthd, zygote, media, surfaceflinger, drm
      onrestart restart media
      onrestart restart surfaceflinger
      onrestart restart drm
  */
  // 2. ServiceManager process entry point:
  // frameworks/native/cmds/servicemanager/service_manager.c
  int main(int argc, char **argv)
  {
      struct binder_state *bs;
      // struct binder_state {
      //     int fd;           // file descriptor of /dev/binder
      //     void *mapped;     // start address of the device mapping in the SM process
      //     unsigned mapsize; // size of the mapping
      // };
      // 1. ServiceManager is a special component: the Binder local object that corresponds to
      //    it is a dummy object whose address (i.e. handle) is 0. It is eventually saved in
      //    the global variable svcmgr_handle.
      void *svcmgr = BINDER_SERVICE_MANAGER;
      // 2. Open the /dev/binder device file and map it into the SM process address space.
      bs = binder_open(128*1024);
      // 3. Register itself as the context manager of Binder IPC.
      if (binder_become_context_manager(bs)) {
          ALOGE("cannot become context manager (%s)\n", strerror(errno));
          return -1;
      }
      // 4. Loop, waiting for and handling client IPC requests.
      svcmgr_handle = svcmgr;
      binder_loop(bs, svcmgr_handler);
      return 0;
  }

  // 1.2 binder_open
  // frameworks/native/cmds/servicemanager/binder.c
  // 1. Open the device file: open("/dev/binder", O_RDWR).
  // 2. mmap the device into the SM process address space. This ends up in the driver's
  //    binder_mmap, which allocates a 128 KB kernel buffer for the process. After mapping,
  //    the address and size are stored in binder_state *bs:
  //      bs->mapped  // user-space virtual address
  //      bs->mapsize // size
  struct binder_state *binder_open(unsigned mapsize)
  {
      struct binder_state *bs;
      bs = malloc(sizeof(*bs));
      if (!bs) {
          errno = ENOMEM;
          return 0;
      }
      bs->fd =
 open("/dev/binder", O_RDWR);
      if (bs->fd < 0) {
          fprintf(stderr, "binder: cannot open device (%s)\n", strerror(errno));
          goto fail_open;
      }
      bs->mapsize = mapsize;
      // 2. Map /dev/binder into the SM process virtual address space; this invokes the
      //    driver's binder_mmap, which allocates the 128 KB kernel buffer for the process.
      bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
      if (bs->mapped == MAP_FAILED) {
          fprintf(stderr, "binder: cannot map device (%s)\n", strerror(errno));
          goto fail_map;
      }
      /* TODO: check version */
      return bs;
  fail_map:
      close(bs->fd);
  fail_open:
      free(bs);
      return 0;
  }

  // 1.3 Register the SM process with the Binder driver as the context manager of Binder IPC.
  // frameworks/native/cmds/servicemanager/binder.c
  // 0. The end goal:
  //    Create a binder_node and store it in the global static variable binder_context_mgr_node.
  //    If binder_context_mgr_node is already non-NULL here, some process previously registered
  //    itself with the driver as context manager; the driver does not allow the context
  //    manager to be registered twice.
  //    Store the current process's effective user ID in the global binder_context_mgr_uid.
  //    If it is already set and differs from the current effective uid (current->cred->euid),
  //    another process registered earlier; the same SM process, however, may register more
  //    than once.
  // 1. From the file descriptor for /dev/binder passed down by the SM process, fetch the
  //    binder_proc corresponding to the SM process.
  // 2. Call binder_get_thread to create a binder_thread structure for the SM main thread.
  //    It first searches the SM proc->threads red-black tree with this thread's PID as the
  //    key and returns the node if found. Otherwise it creates a new binder_thread, adds it
  //    to the tree, and sets:
  //      thread->looper |= BINDER_LOOPER_STATE_NEED_RETURN;
  //    meaning the thread should return to user space as soon as it finishes the current
  //    operation and must not pick up IPC requests yet.
  // 3. Call binder_new_node to create the binder_node object for the SM process, stored in
  //    binder_context_mgr_node. Parameters:
  //      ptr    — points to a weak-reference-count object inside the Binder local object
  //      cookie — points to the Binder local object itself
  //    1) Search proc->nodes, keyed by ptr, for an existing Binder entity.
  //    2) If none is found, create a new Binder entity, initialize it, and add it to
  //       proc->nodes.
  // 4. Set the reference-count fields of the new entity (binder_context_mgr_node):
  //    local_weak_refs++ and local_strong_refs++, so the driver will not free it;
  //    has_strong_ref = has_weak_ref = 1, meaning the driver has already asked the SM process
  //    to increment the SM component's strong and weak reference counts.
  // 5. Before the ioctl returns to user space, clear the binder thread's
  //    BINDER_LOOPER_STATE_NEED_RETURN state bit, so that the next time this thread enters
  //    the driver, the driver may dispatch IPC requests to it.
  int binder_become_context_manager(struct binder_state *bs)
  {
      return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
      {
          // drivers/staging/android/binder.c:
          // binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
          int ret;
          // 1. Fetch the binder_proc of the SM process from the file descriptor.
          struct
 binder_proc *proc = filp->private_data;
          struct binder_thread *thread;
          unsigned int size = _IOC_SIZE(cmd);
          void __user *ubuf = (void __user *)arg;
          /*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/
          trace_binder_ioctl(cmd, arg);
          ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
          if (ret)
              goto err_unlocked;
          binder_lock(__func__);
          // 2. Create a binder_thread for the SM main thread: binder_get_thread(proc).
          //    Search SM proc->threads by this thread's PID; return the node if found,
          //    otherwise create one, add it to the tree, and set
          //    thread->looper |= BINDER_LOOPER_STATE_NEED_RETURN (return to user space after
          //    the current operation; do not service IPC requests yet).
          thread = binder_get_thread(proc);
          if (thread == NULL) {
              ret = -ENOMEM;
              goto err;
          }
          switch (cmd) {
          case BINDER_SET_CONTEXT_MGR:
              if (binder_context_mgr_node != NULL) {
                  printk(KERN_ERR "binder: BINDER_SET_CONTEXT_MGR already set\n");
                  ret = -EBUSY;
                  goto err;
              }
              ret = security_binder_set_context_mgr(proc->tsk);
              if (ret < 0)
                  goto err;
              if (binder_context_mgr_uid != -1) {
                  if (binder_context_mgr_uid != current->cred->euid) {
                      printk(KERN_ERR "binder: BINDER_SET_CONTEXT_MGR bad uid %d != %d\n", current->cred->euid, binder_context_mgr_uid);
                      ret = -EPERM;
                      goto err;
                  }
              } else
                  binder_context_mgr_uid = current->cred->euid;
              // 3. Call binder_new_node(proc, NULL, NULL) to create the SM binder_node.
              //      ptr    — weak-reference-count object inside the Binder local object
              //      cookie — the Binder local object itself
              //    1) Search proc->nodes, keyed by ptr, for the Binder entity.
              //    2) If absent, create it, initialize it, and add it to proc->nodes.
              binder_context_mgr_node = binder_new_node(proc, NULL, NULL);
              // binder_new_node(struct binder_proc *proc, void __user *ptr, void __user *cookie)
              {
                  struct rb_node **p = &proc->nodes.rb_node;
                  struct rb_node *parent = NULL;
                  struct binder_node *node;
                  // 1) Search proc->nodes, keyed by ptr, for the Binder entity.
                  while (*p) {
                      parent = *p;
                      node = rb_entry(parent, struct binder_node, rb_node);
                      if (ptr < node->ptr)
                          p = &(*p)->rb_left;
                      else if (ptr > node->ptr)
                          p = &(*p)->rb_right;
                      else
                          return NULL;
                  }
                  // 2) Not found: create a new entity, initialize it, add it to proc->nodes.
                  node =
 kzalloc(sizeof(*node), GFP_KERNEL);
                  if (node == NULL)
                      return NULL;
                  binder_stats_created(BINDER_STAT_NODE);
                  rb_link_node(&node->rb_node, parent, p);
                  rb_insert_color(&node->rb_node, &proc->nodes);
                  node->debug_id = ++binder_last_id;
                  node->proc = proc;
                  node->ptr = ptr;
                  node->cookie = cookie;
                  node->work.type = BINDER_WORK_NODE;
                  INIT_LIST_HEAD(&node->work.entry);
                  INIT_LIST_HEAD(&node->async_todo);
                  binder_debug(BINDER_DEBUG_INTERNAL_REFS, "binder: %d:%d node %d u%p c%p created\n", proc->pid, current->pid, node->debug_id, node->ptr, node->cookie);
                  return node;
              }
              if (binder_context_mgr_node == NULL) {
                  ret = -ENOMEM;
                  goto err;
              }
              // 4. Set the new entity's reference-count fields:
              //    local_weak_refs++ / local_strong_refs++ keep the driver from freeing it;
              //    has_strong_ref = has_weak_ref = 1 means the driver has already asked the
              //    SM process to increment the SM component's strong/weak reference counts.
              binder_context_mgr_node->local_weak_refs++;
              binder_context_mgr_node->local_strong_refs++;
              binder_context_mgr_node->has_strong_ref = 1;
              binder_context_mgr_node->has_weak_ref = 1;
              break;
          }
          // 5. Before the ioctl returns to user space, clear BINDER_LOOPER_STATE_NEED_RETURN,
          //    so the driver may dispatch IPC requests to this thread on its next entry.
      err:
          if (thread)
              thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
          binder_unlock(__func__);
          wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
          if (ret && ret != -ERESTARTSYS)
              printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
      err_unlocked:
          trace_binder_ioctl_done(ret);
          return ret;
      }
  }

  // 1.4 The SM main thread registers itself as a binder thread, then loops, waiting for and
  //     handling client IPC requests.
  // frameworks/native/cmds/servicemanager/binder.c
  // 1. Call binder_write(BC_ENTER_LOOPER) to register as a binder thread.
  //    A thread registers itself as a binder thread via BC_ENTER_LOOPER / BC_REGISTER_LOOPER.
  //    1) Build the input/output buffers in a binder_write_read bwr:
  //       bwr.write_buffer = (unsigned) data;  // input buffer
  //       bwr.read_buffer = 0;  // output buffer; set to 0 so that, once the thread has
  //       registered itself with the driver, the ioctl returns immediately instead of
  //       waiting inside the driver for client IPC requests.
  //    2) Register the current thread as a binder thread via ioctl(BINDER_WRITE_READ):
  //       first fetch the thread structure of the SM main thread; copy the binder_write_read
  //       passed from user space into the kernel-side bwr; call binder_thread_write, and
  //       after the BC_ENTER_LOOPER protocol has been handled, return to user space.
  //       The main effect is thread->looper |= BINDER_LOOPER_STATE_ENTERED, marking the
  //       thread as a binder thread.
  // 2. Repeatedly issue the BINDER_WRITE_READ ioctl to check whether the driver has new IPC
  //    requests. If so, hand them to binder_parse; otherwise the thread sleeps in the driver
  //    until a new IPC request arrives.
  //    1) In the for loop, bwr.write_size is 0 on every pass, so binder_ioctl only calls
  //       binder_thread_read to check for new IPC requests.
  //    2) The execution of binder_thread_read is expanded in 1.4.2 below.
  void binder_loop(struct binder_state *bs, binder_handler func)
  {
      int res;
      struct binder_write_read bwr;
      unsigned readbuf[32];
      bwr.write_size = 0;
      bwr.write_consumed = 0;
      bwr.write_buffer = 0;
      // 1. Register as a binder thread: binder_write(BC_ENTER_LOOPER).
      readbuf[0] = BC_ENTER_LOOPER;
      binder_write(bs, readbuf, sizeof(unsigned));
      // binder_write(struct binder_state *bs, void *data, unsigned len)
      {
          struct binder_write_read bwr;
          int res;
          bwr.write_size = len;
          bwr.write_consumed = 0;
          bwr.write_buffer = (unsigned) data;
          bwr.read_size = 0;
          bwr.read_consumed = 0;
          bwr.read_buffer = 0;
          res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
          {
              // binder_ioctl
              struct binder_thread *thread = binder_get_thread(proc);
              case BINDER_WRITE_READ: {
                  struct binder_write_read bwr;
                  if (size != sizeof(struct binder_write_read)) {
                      ret = -EINVAL;
                      goto err;
                  }
                  if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
                      ret = -EFAULT;
                      goto err;
                  }
                  binder_debug(BINDER_DEBUG_READ_WRITE, "binder: %d:%d write %ld at %08lx, read %ld at %08lx\n", proc->pid, thread->pid, bwr.write_size, bwr.write_buffer, bwr.read_size, bwr.read_buffer);
                  if (bwr.write_size > 0) {
                      ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer,
 bwr.write_size, &bwr.write_consumed);
                      trace_binder_write_done(ret);
                      if (ret < 0) {
                          bwr.read_consumed = 0;
                          if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                              ret = -EFAULT;
                          goto err;
                      }
                  }
                  if (bwr.read_size > 0) {
                      ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
                      trace_binder_read_done(ret);
                      if (!list_empty(&proc->todo))
                          wake_up_interruptible(&proc->wait);
                      if (ret < 0) {
                          if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                              ret = -EFAULT;
                          goto err;
                      }
                  }
                  binder_debug(BINDER_DEBUG_READ_WRITE, "binder: %d:%d wrote %ld of %ld, read return %ld of %ld\n", proc->pid, thread->pid, bwr.write_consumed, bwr.write_size, bwr.read_consumed, bwr.read_size);
                  if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
                      ret = -EFAULT;
                      goto err;
                  }
                  break;
              }
          }
          if (res < 0) {
              fprintf(stderr, "binder_write: ioctl failed (%s)\n", strerror(errno));
          }
          return res;
      }
      // 2. Repeatedly issue the BINDER_WRITE_READ ioctl to check whether the driver has new
      //    IPC requests; hand any to binder_parse, otherwise sleep in the driver until a new
      //    request arrives. Each pass sets bwr.write_size = 0, so binder_ioctl only calls
      //    binder_thread_read (expanded in 1.4.2 below).
      for (;;) {
          bwr.read_size = sizeof(readbuf);
          bwr.read_consumed = 0;
          bwr.read_buffer = (unsigned) readbuf;
          res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
          {
              // binder_ioctl
              struct binder_thread *thread = binder_get_thread(proc);
              case BINDER_WRITE_READ: {
                  struct binder_write_read bwr;
                  if (size != sizeof(struct binder_write_read)) {
                      ret = -EINVAL;
                      goto err;
                  }
                  if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
                      ret = -EFAULT;
                      goto err;
                  }
                  if (bwr.write_size > 0) { /* NO */ }
                  if (bwr.read_size > 0) { // YES
                      ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
                      trace_binder_read_done(ret);
                      if (!list_empty(&proc->todo))
                          wake_up_interruptible(&proc->wait);
                      if (ret < 0) {
                          if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                              ret = -EFAULT;
                          goto err;
                      }
                  }
                  if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
                      ret = -EFAULT;
                      goto err;
                  }
                  break;
              }
          }
          if (res <
 0) {
              ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
              break;
          }
          res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
          if (res == 0) {
              ALOGE("binder_loop: unexpected reply?!\n");
              break;
          }
          if (res < 0) {
              ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
              break;
          }
      }
  }

  // 1.4.2 binder_thread_read
  // drivers/staging/android/binder.c
  // 1. Check whether the current binder thread's transaction stack and todo queue are empty.
  //    If thread->transaction_stack == NULL and thread->todo has no entries, set
  //    wait_for_proc_work = 1; otherwise the thread must first handle its own transactions
  //    or work items.
  // 2. Mark the thread BINDER_LOOPER_STATE_WAITING, i.e. idle.
  // 3. If wait_for_proc_work == 1, increment the process's idle-thread count
  //    proc->ready_threads.
  // 4. (wait_for_proc_work == 1) The thread checks its owning process's todo queue for
  //    unprocessed work items, and sets its priority to the process's default priority.
  //    If the binder device file was opened non-blocking and proc->todo is empty, the thread
  //    must not sleep; otherwise it calls wait_event_interruptible_exclusive to sleep until
  //    the process has new pending work items.
  // 5. If wait_for_proc_work == 0, the thread checks its own todo queue for unprocessed work.
  // 6. When the driver finds the thread has new work to handle, it clears the
  //    BINDER_LOOPER_STATE_WAITING flag and decrements the idle-thread count as appropriate.
  // 7. If the thread did not sleep (ret != 0), return to user space here.
  // 8. Otherwise the thread sleeps on its process's wait queue or on its own; once woken,
  //    the while loop below processes its work items.
  // 9. Check whether the thread's owning process should be asked to add one more binder
  //    thread to handle IPC requests. All four conditions must hold:
  //      - idle-thread count proc->ready_threads == 0
  //      - the driver is not already asking proc to add a thread: proc->requested_threads == 0
  //      - the number of spawned threads the driver has requested,
  //        proc->requested_threads_started, is below proc->max_threads
  //      - the current thread is already registered with the driver as a binder thread
  //    If so, write the return protocol code BR_SPAWN_LOOPER into the user-space buffer so
  //    that proc can create a new thread and add it to the binder thread pool.
  static int binder_thread_read(struct binder_proc *proc, struct binder_thread *thread,
                                void __user *buffer, int size,
                                signed long *consumed, int non_block)
  {
      void __user *ptr = buffer + *consumed;
      void __user *end = buffer + size;
      int ret = 0;
      int wait_for_proc_work;
      if (*consumed == 0) {
          if (put_user(BR_NOOP, (uint32_t __user *)ptr))
              return -EFAULT;
          ptr += sizeof(uint32_t);
      }
  retry:
      // 1. An empty transaction stack and an empty todo queue mean the thread may wait for
      //    process-level work; otherwise it must handle its own transactions/work first.
      wait_for_proc_work = thread->transaction_stack == NULL && list_empty(&thread->todo);
      if (thread->return_error != BR_OK && ptr < end) {
          if (thread->return_error2 != BR_OK) {
              if (put_user(thread->return_error2, (uint32_t __user *)ptr))
                  return -EFAULT;
              ptr += sizeof(uint32_t);
              binder_stat_br(proc, thread, thread->return_error2);
              if (ptr == end)
                  goto done;
              thread->return_error2 = BR_OK;
          }
          if (put_user(thread->return_error, (uint32_t __user *)ptr))
              return -EFAULT;
          ptr += sizeof(uint32_t);
          binder_stat_br(proc, thread, thread->return_error);
          thread->return_error = BR_OK;
          goto done;
      }
      // 2. Mark the thread idle.
      thread->looper |= BINDER_LOOPER_STATE_WAITING;
      // 3. If waiting for process work, bump the idle-thread count.
      if (wait_for_proc_work)
          proc->ready_threads++;
      // 4. Check the owning process's todo queue for pending work items.
      binder_unlock(__func__);
      trace_binder_wait_for_work(wait_for_proc_work, !!thread->transaction_stack, !list_empty(&thread->todo));
      if (wait_for_proc_work) {
          if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED | BINDER_LOOPER_STATE_ENTERED))) {
              binder_user_error("binder: %d:%d ERROR: Thread waiting for process work before calling BC_REGISTER_LOOPER or BC_ENTER_LOOPER (state %x)\n", proc->pid, thread->pid, thread->looper);
              wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
          }
          // Set the thread's priority to its process's default priority.
          binder_set_nice(proc->default_priority);
          // Non-blocking open: do not sleep when proc->todo is empty; otherwise sleep
          // exclusively until the process has new pending work.
          if (non_block) {
              if (!binder_has_proc_work(proc, thread))
                  ret = -EAGAIN;
          } else
              ret = wait_event_interruptible_exclusive(proc->wait, binder_has_proc_work(proc, thread));
      } else {
          // 5. Otherwise check the thread's own todo queue.
          if (non_block) {
              if (!binder_has_thread_work(thread))
                  ret = -EAGAIN;
          } else
              ret = wait_event_interruptible(thread->wait, binder_has_thread_work(thread));
      }
      binder_lock(__func__);
      // 6. The thread has new work: clear the idle flag and fix the idle count.
      if (wait_for_proc_work)
          proc->ready_threads--;
      thread->looper &= ~BINDER_LOOPER_STATE_WAITING;
      // 7. If the thread is not to sleep, return to user space here.
      if (ret)
          return ret;
      // 8. Otherwise the thread slept on its process's wait queue or on its own; after
      //    waking, the loop below processes its work items.
      while (1) {
          uint32_t cmd;
          struct binder_transaction_data tr;
          struct binder_work *w;
          struct binder_transaction *t = NULL;
          if (!list_empty(&thread->todo))
              w = list_first_entry(&thread->todo, struct binder_work, entry);
          else if (!list_empty(&proc->todo) && wait_for_proc_work)
              w = list_first_entry(&proc->todo, struct binder_work, entry);
          else {
              if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) /* no data added */
                  goto retry;
              break;
          }
          if (end - ptr < sizeof(tr) + 4)
              break;
          switch (w->type) {
          case BINDER_WORK_TRANSACTION: {
              t = container_of(w, struct binder_transaction, work);
          } break;
          case BINDER_WORK_TRANSACTION_COMPLETE: {
              cmd = BR_TRANSACTION_COMPLETE;
              if (put_user(cmd, (uint32_t __user *)ptr))
                  return -EFAULT;
              ptr += sizeof(uint32_t);
              binder_stat_br(proc, thread, cmd);
              binder_debug(BINDER_DEBUG_TRANSACTION_COMPLETE, "binder: %d:%d BR_TRANSACTION_COMPLETE\n", proc->pid, thread->pid);
              list_del(&w->entry);
              kfree(w);
              binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
          } break;
          case BINDER_WORK_NODE: {
              struct binder_node *node = container_of(w, struct binder_node, work);
              uint32_t cmd = BR_NOOP;
              const char *cmd_name;
              int strong = node->internal_strong_refs || node->local_strong_refs;
              int weak = !hlist_empty(&node->refs) || node->local_weak_refs || strong;
              if (weak && !node->has_weak_ref) {
                  cmd = BR_INCREFS;
                  cmd_name = "BR_INCREFS";
                  node->has_weak_ref = 1;
                  node->pending_weak_ref = 1;
                  node->local_weak_refs++;
              } else if (strong && !node->has_strong_ref) {
                  cmd = BR_ACQUIRE;
                  cmd_name = "BR_ACQUIRE";
                  node->has_strong_ref = 1;
                  node->pending_strong_ref = 1;
                  node->local_strong_refs++;
              } else if (!strong && node->has_strong_ref) {
                  cmd = BR_RELEASE;
                  cmd_name = "BR_RELEASE";
                  node->has_strong_ref = 0;
              } else if (!weak && node->has_weak_ref) {
                  cmd =
 BR_DECREFS;
                  cmd_name = "BR_DECREFS";
                  node->has_weak_ref = 0;
              }
              if (cmd != BR_NOOP) {
                  if (put_user(cmd, (uint32_t __user *)ptr))
                      return -EFAULT;
                  ptr += sizeof(uint32_t);
                  if (put_user(node->ptr, (void * __user *)ptr))
                      return -EFAULT;
                  ptr += sizeof(void *);
                  if (put_user(node->cookie, (void * __user *)ptr))
                      return -EFAULT;
                  ptr += sizeof(void *);
                  binder_stat_br(proc, thread, cmd);
                  binder_debug(BINDER_DEBUG_USER_REFS, "binder: %d:%d %s %d u%p c%p\n", proc->pid, thread->pid, cmd_name, node->debug_id, node->ptr, node->cookie);
              } else {
                  list_del_init(&w->entry);
                  if (!weak && !strong) {
                      binder_debug(BINDER_DEBUG_INTERNAL_REFS, "binder: %d:%d node %d u%p c%p deleted\n", proc->pid, thread->pid, node->debug_id, node->ptr, node->cookie);
                      rb_erase(&node->rb_node, &proc->nodes);
                      kfree(node);
                      binder_stats_deleted(BINDER_STAT_NODE);
                  } else {
                      binder_debug(BINDER_DEBUG_INTERNAL_REFS, "binder: %d:%d node %d u%p c%p state unchanged\n", proc->pid, thread->pid, node->debug_id, node->ptr, node->cookie);
                  }
              }
          } break;
          case BINDER_WORK_DEAD_BINDER:
          case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
          case BINDER_WORK_CLEAR_DEATH_NOTIFICATION: {
              struct binder_ref_death *death;
              uint32_t cmd;
              death = container_of(w, struct binder_ref_death, work);
              if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION)
                  cmd = BR_CLEAR_DEATH_NOTIFICATION_DONE;
              else
                  cmd = BR_DEAD_BINDER;
              if (put_user(cmd, (uint32_t __user *)ptr))
                  return -EFAULT;
              ptr += sizeof(uint32_t);
              if (put_user(death->cookie, (void * __user *)ptr))
                  return -EFAULT;
              ptr += sizeof(void *);
              binder_stat_br(proc, thread, cmd);
              binder_debug(BINDER_DEBUG_DEATH_NOTIFICATION, "binder: %d:%d %s %p\n", proc->pid, thread->pid, cmd == BR_DEAD_BINDER ?
 "BR_DEAD_BINDER" : "BR_CLEAR_DEATH_NOTIFICATION_DONE", death->cookie);
              if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION) {
                  list_del(&w->entry);
                  kfree(death);
                  binder_stats_deleted(BINDER_STAT_DEATH);
              } else
                  list_move(&w->entry, &proc->delivered_death);
              if (cmd == BR_DEAD_BINDER)
                  goto done; /* DEAD_BINDER notifications can cause transactions */
          } break;
          }
          if (!t)
              continue;
          BUG_ON(t->buffer == NULL);
          if (t->buffer->target_node) {
              struct binder_node *target_node = t->buffer->target_node;
              tr.target.ptr = target_node->ptr;
              tr.cookie = target_node->cookie;
              t->saved_priority = task_nice(current);
              if (t->priority < target_node->min_priority && !(t->flags & TF_ONE_WAY))
                  binder_set_nice(t->priority);
              else if (!(t->flags & TF_ONE_WAY) || t->saved_priority > target_node->min_priority)
                  binder_set_nice(target_node->min_priority);
              cmd = BR_TRANSACTION;
          } else {
              tr.target.ptr = NULL;
              tr.cookie = NULL;
              cmd = BR_REPLY;
          }
          tr.code = t->code;
          tr.flags = t->flags;
          tr.sender_euid = t->sender_euid;
          if (t->from) {
              struct task_struct *sender = t->from->proc->tsk;
              tr.sender_pid = task_tgid_nr_ns(sender, current->nsproxy->pid_ns);
          } else {
              tr.sender_pid = 0;
          }
          tr.data_size = t->buffer->data_size;
          tr.offsets_size = t->buffer->offsets_size;
          tr.data.ptr.buffer = (void *)t->buffer->data + proc->user_buffer_offset;
          tr.data.ptr.offsets = tr.data.ptr.buffer + ALIGN(t->buffer->data_size, sizeof(void *));
          if (put_user(cmd, (uint32_t __user *)ptr))
              return -EFAULT;
          ptr += sizeof(uint32_t);
          if (copy_to_user(ptr, &tr, sizeof(tr)))
              return -EFAULT;
          ptr += sizeof(tr);
          trace_binder_transaction_received(t);
          binder_stat_br(proc, thread, cmd);
          binder_debug(BINDER_DEBUG_TRANSACTION, "binder: %d:%d %s %d %d:%d, cmd %d size %zd-%zd ptr %p-%p\n", proc->pid, thread->pid, (cmd == BR_TRANSACTION) ? "BR_TRANSACTION" : "BR_REPLY", t->debug_id, t->from ? t->from->proc->pid : 0, t->from ?
t->from->pid : 0, cmd, t->buffer->data_size, t->buffer->offsets_size, tr.data.ptr.buffer, tr.data.ptr.offsets);
list_del(&t->work.entry);
t->buffer->allow_user_free = 1;
if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
t->to_parent = thread->transaction_stack;
t->to_thread = thread;
thread->transaction_stack = t;
} else {
t->buffer->transaction = NULL;
kfree(t);
binder_stats_deleted(BINDER_STAT_TRANSACTION);
}
break;
}
done:
//9、Check whether proc, the process that owns the current thread, should be asked to spawn one more binder thread to handle IPC requests.
//All of the following conditions must hold:
//there are no idle threads and no outstanding spawn request: proc->requested_threads + proc->ready_threads == 0
//the number of threads already started on request, proc->requested_threads_started, is still below proc->max_threads
//the current thread is a binder thread already registered with the binder driver
//If they hold, the return protocol code BR_SPAWN_LOOPER is written into the user-space buffer, so that proc can create a new thread and add it to its binder thread pool.
*consumed = ptr - buffer;
if (proc->requested_threads + proc->ready_threads == 0 &&
proc->requested_threads_started < proc->max_threads &&
(thread->looper & (BINDER_LOOPER_STATE_REGISTERED | BINDER_LOOPER_STATE_ENTERED))
/* the user-space code fails to spawn a new thread if we leave this out */) {
proc->requested_threads++;
binder_debug(BINDER_DEBUG_THREADS, "binder: %d:%d BR_SPAWN_LOOPER\n", proc->pid, thread->pid);
if (put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer))
return -EFAULT;
binder_stat_br(proc, thread, BR_SPAWN_LOOPER);
}
return 0;
}
}
}
二、Definition of the C++-layer Binder template classes BnInterface\BpInterface
{
1、Binder local and proxy objects
Local object: described by the template class BnInterface, usually a Service component; it corresponds to a Binder entity object in the driver.
Proxy object: described by BpInterface, used by client components; it corresponds to a Binder reference object in the driver.
2、Implementation of the templates
{
1、Inheritance chains
BnInterface->BBinder->IBinder->RefBase
BpInterface->BpRefBase->RefBase
2、Definition of the Binder local-object template 【 BnInterface 】
//---------------------------------base/include/binder/IInterface.h
//1、 INTERFACE: the user-defined service interface, which BnInterface must implement
//2、 Base class BBinder: provides the abstract IPC interface
//3、 Since it ultimately derives from RefBase, a binder local object maintains its lifetime by reference counting
// ----------------------------------------------------------------------
template<typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder
{
public:
virtual sp<IInterface>
queryLocalInterface(const String16& _descriptor);
virtual const String16&     getInterfaceDescriptor() const;
protected:
virtual IBinder*            onAsBinder();
};
//--------------------------------- Binder.h
//BBinder provides the abstract IPC interface for local objects.
//transact:
//when a Binder proxy object sends a communication request to a Binder local object through the driver, the driver calls that local object's transact interface.
//onTransact:
//implemented by the Binder local object, i.e. by a subclass of BBinder (usually the service component class);
//it dispatches the business-specific IPC requests.
// ----------------------------------------------------------------------
class BBinder : public IBinder
{
public:
BBinder();
virtual const String16& getInterfaceDescriptor() const;
virtual bool        isBinderAlive() const;
virtual status_t    pingBinder();
virtual status_t    dump(int fd, const Vector<String16>& args);
virtual status_t    transact(   uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags = 0);
virtual status_t    linkToDeath(const sp<DeathRecipient>& recipient, void* cookie = NULL, uint32_t flags = 0);
virtual status_t    unlinkToDeath(  const wp<DeathRecipient>& recipient, void* cookie = NULL, uint32_t flags = 0, wp<DeathRecipient>* outRecipient = NULL);
virtual void        attachObject(   const void* objectID, void* object, void* cleanupCookie, object_cleanup_func func);
virtual void*       findObject(const void* objectID) const;
virtual void        detachObject(const void* objectID);
virtual BBinder*    localBinder();
protected:
virtual             ~BBinder();
virtual status_t    onTransact( uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags = 0);
private:
BBinder(const BBinder& o);
BBinder&    operator=(const BBinder& o);
class Extras;
Extras*     mExtras;
void*       mReserved0;
};
3、Definition of the Binder proxy-class template 【 BpInterface 】
//1、 BpInterface
//
template<typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase
{
public:
BpInterface(const sp<IBinder>& remote);
protected:
virtual IBinder*            onAsBinder();
};
//2、 BpRefBase
//
//---------------------------------Binder.h
//BpRefBase owns the key member mRemote, which points to a BpBinder object and can be obtained through the BpRefBase member function remote().
//
class BpRefBase : public virtual RefBase
{
protected:
BpRefBase(const sp<IBinder>& o);
virtual                 ~BpRefBase();
virtual void            onFirstRef();
virtual void            onLastStrongRef(const void* id);
virtual bool            onIncStrongAttempted(uint32_t flags, const void* id);
inline  IBinder*        remote()                { return mRemote; }
inline  IBinder*        remote() const          { return mRemote; }
private:
BpRefBase(const BpRefBase& o);
BpRefBase&              operator=(const BpRefBase& o);
IBinder* const          mRemote;
RefBase::weakref_type*  mRefs;
volatile int32_t        mState;
};
//3、 BpBinder: this class implements the IPC interface (IBinder) that BpRefBase exposes through mRemote.
//
//---------------------------------BpBinder.h
//BpBinder.mHandle holds the client component's handle value.
//Every client component corresponds to a Binder 【reference object】 in the Binder driver, and every Binder reference object has a 【handle value】.
//The client component uses this 【handle value】 to associate itself with its 【reference object】.
//
//BpBinder.transact
//sends an IPC request to a service component running in a server process:
//it first passes mHandle and the communication data to the binder driver;
//the driver uses mHandle to find the corresponding Binder reference object, and from it the binder entity object;
//finally the communication data is delivered to the corresponding service component.
//
class BpBinder : public IBinder
{
public:
BpBinder(int32_t handle);
inline  int32_t     handle() const { return mHandle; }
virtual const String16&    getInterfaceDescriptor() const;
virtual bool        isBinderAlive() const;
virtual status_t    pingBinder();
virtual status_t    dump(int fd, const Vector<String16>& args);
virtual status_t    transact(   uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags = 0);
virtual status_t    linkToDeath(const sp<DeathRecipient>& recipient, void* cookie = NULL, uint32_t flags = 0);
virtual status_t    unlinkToDeath(  const wp<DeathRecipient>& recipient, void* cookie = NULL, uint32_t flags = 0, wp<DeathRecipient>* outRecipient = NULL);
virtual void        attachObject(   const void* objectID, void* object, void* cleanupCookie, object_cleanup_func func);
virtual void*       findObject(const void* objectID) const;
virtual void        detachObject(const void* objectID);
virtual BpBinder*   remoteBinder();
status_t    setConstantData(const void* data, size_t size);
void
sendObituary();
class ObjectManager
{
public:
ObjectManager();
~ObjectManager();
void        attach( const void* objectID, void* object, void* cleanupCookie, IBinder::object_cleanup_func func);
void*       find(const void* objectID) const;
void        detach(const void* objectID);
void        kill();
private:
ObjectManager(const ObjectManager&);
ObjectManager& operator=(const ObjectManager&);
struct entry_t
{
void* object;
void* cleanupCookie;
IBinder::object_cleanup_func func;
};
KeyedVector<const void*, entry_t> mObjects;
};
protected:
virtual             ~BpBinder();
virtual void        onFirstRef();
virtual void        onLastStrongRef(const void* id);
virtual bool        onIncStrongAttempted(uint32_t flags, const void* id);
private:
const   int32_t             mHandle;
struct Obituary {
wp<DeathRecipient> recipient;
void* cookie;
uint32_t flags;
};
void                reportOneDeath(const Obituary& obit);
bool                isDescriptorCached() const;
mutable Mutex               mLock;
volatile int32_t    mAlive;
volatile int32_t    mObitsSent;
Vector<Obituary>*   mObituaries;
ObjectManager       mObjects;
Parcel*             mConstantData;
mutable String16            mDescriptorCache;
};
}
3、The IPCThreadState object inside a Binder thread
{
Every process that uses Binder IPC has a binder thread pool for handling communication requests.
Each binder thread holds an IPCThreadState object:
it can be obtained with IPCThreadState::self();
its transact() interacts with the binder driver, internally via talkWithDriver(), which both passes communication requests to the driver and receives incoming IPC requests from it.
The member IPCThreadState.mProcess points to the process-wide ProcessState object, which is responsible for initializing the Binder device, i.e. opening /dev/binder and mapping it into the process address space; every thread in the binder thread pool connects to the driver through it.
The first call to ProcessState::self() creates the ProcessState object and saves it to IPCThreadState.mProcess; during creation it:
calls open("/dev/binder");
calls mmap() to map the device file into the process address space, i.e. asks the binder driver to allocate a kernel buffer for this process, and saves the user-space address to IPCThreadState.mProcess.mVMStart.
Definitions: ProcessState.h / ProcessState.cpp
}
4、The per-process singleton ProcessState object
{
1、Definition
./frameworks/native/include/binder/ProcessState.h
./frameworks/native/libs/binder/ProcessState.cpp
2、Key structure
struct handle_entry {
IBinder* binder;//Binder proxy object
RefBase::weakref_type* refs;//weak reference-count object
};
Vector<handle_entry> mHandleToObject;
For every process, the binder library maintains this list of handle_entry binder proxy records; the index into the list is the binder proxy object's handle.
}
}
三、The C++-layer ServiceManager: architecture and how it is obtained
{
1、Architecture of the SM components
{
BpServiceManager inherits BpInterface<IServiceManager>, which in turn inherits both IServiceManager (-> IInterface -> RefBase) and BpRefBase (-> RefBase).
BpRefBase holds mRemote, a BpBinder (-> IBinder -> RefBase).
BpBinder talks to the driver through the thread-local IPCThreadState, whose mProcess member points to the per-process ProcessState.
}
2、 Definition and implementation of IServiceManager \ BnServiceManager \ BpServiceManager
{
0、Important global variables
{
//-
//framework/native/libs/binder/Static.cpp
//
//1、mutex that guarantees at most one SM proxy object per process
Mutex gDefaultServiceManagerLock;
//2、points to the process's BpServiceManager object; if non-NULL, the binder library has already created an SM proxy for this process
sp<IServiceManager> gDefaultServiceManager;
sp<IPermissionController> gPermissionController;
//3、mutex that guarantees at most one ProcessState object per process
Mutex gProcessMutex;
//4、strong pointer to the process's ProcessState object
sp<ProcessState> gProcess;
}
1、 Definition and implementation of the IServiceManager interface: IMPLEMENT_META_INTERFACE \ IServiceManager::asInterface
{
//-
//framework/native/include/binder/IServiceManager.h
//
class IServiceManager : public IInterface
{
public:
DECLARE_META_INTERFACE(ServiceManager);
virtual sp<IBinder>         getService( const String16& name) const = 0;
virtual sp<IBinder>         checkService( const String16& name) const = 0;
virtual status_t            addService( const String16& name, const sp<IBinder>& service, bool allowIsolated = false) = 0;
virtual Vector<String16>    listServices() = 0;
enum {
GET_SERVICE_TRANSACTION = IBinder::FIRST_CALL_TRANSACTION,
CHECK_SERVICE_TRANSACTION,
ADD_SERVICE_TRANSACTION,
LIST_SERVICES_TRANSACTION,
};
};
//-
//the macro is actually defined in frameworks/native/include/binder/IInterface.h
IMPLEMENT_META_INTERFACE(ServiceManager,
"android.os.IServiceManager");{const android::String16 IServiceManager::descriptor("android.os.IServiceManager");const android::String16& IServiceManager::getInterfaceDescriptor() const {return IServiceManager::descriptor;}android::sp<IServiceManager> IServiceManager::asInterface(const android::sp<android::IBinder>& obj)                   {android::sp<IServiceManager> intr;if (obj != NULL) {intr = static_cast<IServiceManager*>(obj->queryLocalInterface(IServiceManager::descriptor).get());if (intr == NULL) {intr = new BpServiceManager(obj);                          }}return intr;}                                                                   IServiceManager::IServiceManager() { }IServiceManager::~IServiceManager() { }}}2、C++层-BnServiceManager 的定义和实现:BnServiceManager::onTransact{//-//frameworks/native/include/binder/IServiceManager.hclass BnServiceManager : public BnInterface<IServiceManager>{public:virtual status_t    onTransact( uint32_t code,const Parcel& data,Parcel* reply,uint32_t flags = 0);};//-//framework/native/libs/binder/IServiceManager.cppstatus_t BnServiceManager::onTransact( uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags){//printf("ServiceManager received: "); data.print();switch(code) {case GET_SERVICE_TRANSACTION: {CHECK_INTERFACE(IServiceManager, data, reply);String16 which = data.readString16();sp<IBinder> b = const_cast<BnServiceManager*>(this)->getService(which);reply->writeStrongBinder(b);return NO_ERROR;} break;case CHECK_SERVICE_TRANSACTION: {CHECK_INTERFACE(IServiceManager, data, reply);String16 which = data.readString16();sp<IBinder> b = const_cast<BnServiceManager*>(this)->checkService(which);reply->writeStrongBinder(b);return NO_ERROR;} break;case ADD_SERVICE_TRANSACTION: {CHECK_INTERFACE(IServiceManager, data, reply);String16 which = data.readString16();sp<IBinder> b = data.readStrongBinder();status_t err = addService(which, b);reply->writeInt32(err);return NO_ERROR;} break;case LIST_SERVICES_TRANSACTION: 
{CHECK_INTERFACE(IServiceManager, data, reply);Vector<String16> list = listServices();const size_t N = list.size();reply->writeInt32(N);for (size_t i=0; i<N; i++) {reply->writeString16(list[i]);}return NO_ERROR;} break;default:return BBinder::onTransact(code, data, reply, flags);}}}3、C++层-BpServiceManager 的定义和实现: getService\addService\checkService{//-//framework/native/libs/binder/IServiceManager.cpp//class BpServiceManager : public BpInterface<IServiceManager>{public:BpServiceManager(const sp<IBinder>& impl) : BpInterface<IServiceManager>(impl){}virtual sp<IBinder> getService(const String16& name) const{unsigned n;for (n = 0; n < 5; n++){sp<IBinder> svc = checkService(name);if (svc != NULL) return svc;ALOGI("Waiting for service %s...\n", String8(name).string());sleep(1);}return NULL;}virtual sp<IBinder> checkService( const String16& name) const{Parcel data, reply;data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());data.writeString16(name);remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);return reply.readStrongBinder();}virtual status_t addService(const String16& name, const sp<IBinder>& service,bool allowIsolated){Parcel data, reply;data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());data.writeString16(name);data.writeStrongBinder(service);data.writeInt32(allowIsolated ? 1 : 0);status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);return err == NO_ERROR ? 
reply.readExceptionCode() : err;
}
virtual Vector<String16> listServices()
{
Vector<String16> res;
int n = 0;
for (;;) {
Parcel data, reply;
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
data.writeInt32(n++);
status_t err = remote()->transact(LIST_SERVICES_TRANSACTION, data, &reply);
if (err != NO_ERROR)
break;
res.add(reply.readString16());
}
return res;
}
};
}
}
4、Obtaining a service component's proxy object, in general: three steps
{
1)、obtain the component's handle through the binder driver
2)、create a Binder proxy object (BpBinder) from this handle
3)、finally wrap the binder proxy object into the service-specific proxy object.
1、When and how the SM proxy object is obtained
{
When a service component starts, it must register itself with SM, so it first has to obtain the SM proxy object.
Before a client component uses the services a service component provides, it first obtains the SM proxy object, and through it obtains the Service component's proxy object.
SM's handle is 0, so obtaining the SM proxy does not require interacting with the binder driver.
}
}
5、How the SM proxy object (BpServiceManager) is obtained: defaultServiceManager
//obtain the process's ProcessState object: ProcessState::self()
//create the Binder proxy object, i.e. a BpBinder object
//use interface_cast<IServiceManager> to wrap the Binder proxy object obtained above into a BpServiceManager
{
1、The SM proxy acquisition interface: defaultServiceManager
//-
//framework/native/libs/binder/IServiceManager.cpp
//
//1)、check whether the global variable gDefaultServiceManager is non-NULL
//2)、if it is NULL, call ProcessState::self()->getContextObject(NULL), build a new BpServiceManager object and return it
//
sp<IServiceManager> defaultServiceManager()
{
if (gDefaultServiceManager != NULL) return gDefaultServiceManager;
{
AutoMutex _l(gDefaultServiceManagerLock);
while (gDefaultServiceManager == NULL) {
gDefaultServiceManager = interface_cast<IServiceManager>(
ProcessState::self()->getContextObject(NULL));
if (gDefaultServiceManager == NULL)
sleep(1);
}
}
return gDefaultServiceManager;
}
1.1、Obtaining the process's ProcessState object: ProcessState::self()
//
//-
//ProcessState.cpp
//
//1)、if gProcess is non-NULL, the process has already created one
//2)、otherwise create a new ProcessState and save it to gProcess
//1)、call open_driver to open /dev/binder and save the fd to mDriverFD
//send the IOCTL command BINDER_VERSION to obtain the binder version
//send the IOCTL command BINDER_SET_MAX_THREADS to tell the driver that it may ask this process to create at most 15 binder threads to handle IPC requests
//
//2)、call mmap to map the device file into the process address space; the mapping size is BINDER_VM_SIZE = 1016KB
//the binder driver allocates a kernel buffer of the same size (1016KB) for the process
sp<ProcessState> ProcessState::self()
{
Mutex::Autolock _l(gProcessMutex);
if (gProcess != NULL) {
return gProcess;
}
gProcess = new ProcessState;
{
//1)、open /dev/binder and save the fd to mDriverFD:
mDriverFD(open_driver())
{
//open_driver
//ProcessState.cpp
int fd = open("/dev/binder", O_RDWR);
if (fd >= 0) {
fcntl(fd, F_SETFD, FD_CLOEXEC);
int vers;
status_t result = ioctl(fd, BINDER_VERSION, &vers);
if (result == -1) {
ALOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
close(fd);
fd = -1;
}
if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
ALOGE("Binder driver protocol does not match user space protocol!");
close(fd);
fd = -1;
}
size_t maxThreads = 15;
result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
if (result == -1) {
ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
}
} else {
ALOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
}
return fd;
}
, mVMStart(MAP_FAILED), mManagesContexts(false), mBinderContextCheckFunc(NULL), mBinderContextUserData(NULL), mThreadPoolStarted(false), mThreadPoolSeq(1)
//2)、call mmap to map the device file into the process address space; the mapping size is BINDER_VM_SIZE = 1016KB
//the binder driver allocates a kernel buffer of the same size for the process
if (mDriverFD >= 0) {
// XXX Ideally, there should be a specific define for whether we
// have mmap (or whether we could possibly have the kernel module
// available).
// mmap the binder, providing a chunk of virtual address space to receive transactions.
mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
if (mVMStart == MAP_FAILED) {
// *sigh*
ALOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
close(mDriverFD);
mDriverFD = -1;
}
}
}
return gProcess;
}
1.2、Creating the Binder proxy object, i.e. the BpBinder object: ProcessState::getContextObject
//
//-
//ProcessState.cpp
//
//1)、look up the handle_entry *e that corresponds to the handle in the mHandleToObject<> list
//handle is the binder proxy object's index into mHandleToObject<>
//if handle >= mHandleToObject.size(), a handle_entry must first be inserted at each position in [N, handle]
//
//2)、if the process has not yet created a binder proxy object for this handle, create a new BpBinder and save it to e->binder
//
//3)、otherwise a binder proxy object was created earlier, so check here whether it is still alive:
//BpBinder objects are controlled by weak reference counting, so if the attempt to increment the weak count fails
//[ e->refs->attemptIncWeak(this) == 0 ], it has already been destroyed;
//if it is still valid, return it directly
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& caller)
{
return getStrongProxyForHandle(0/*int32_t handle*/);
{
sp<IBinder> result;
AutoMutex _l(mLock);
//1)、look up the handle_entry *e for this handle in the mHandleToObject<> list
handle_entry* e = lookupHandleLocked(handle);
{
const size_t N = mHandleToObject.size();
if (N <= (size_t)handle) {
handle_entry e;
e.binder = NULL;
e.refs = NULL;
status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
if (err < NO_ERROR) return NULL;
}
return &mHandleToObject.editItemAt(handle);
}
if (e != NULL) {
//2)、if the process has not yet created a binder proxy object for this handle, create a new BpBinder and save it to e->binder
IBinder* b = e->binder;
if (b == NULL || !e->refs->attemptIncWeak(this)) {
if (handle == 0) {
Parcel data;
status_t status = IPCThreadState::self()->transact(0, IBinder::PING_TRANSACTION, data, NULL, 0);
if (status == DEAD_OBJECT)   return NULL;
}
b = new BpBinder(handle);
e->binder = b;
if (b) e->refs = b->getWeakRefs();
result = b;
}
//3)、otherwise a proxy was created earlier and is still alive (attemptIncWeak succeeded), so return it directly
else {
result.force_set(b);
e->refs->decWeak(this);
}
}
return result;
}
}
1.3、Using interface_cast<IServiceManager> to wrap the Binder proxy object obtained above into a BpServiceManager
//-
//frameworks/native/include/binder/IInterface.h
//converts an IBinder object into the IServiceManager interface:
//if obj points to a local BnServiceManager object, its queryLocalInterface returns the IServiceManager interface directly;
//otherwise, if it points to a BpBinder object, that BpBinder is wrapped into a BpServiceManager
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
return INTERFACE::asInterface(obj);
{
android::sp<IServiceManager> intr;
if (obj != NULL) {
intr = static_cast<IServiceManager*>(
obj->queryLocalInterface(IServiceManager::descriptor).get());
if (intr == NULL) {
intr = new BpServiceManager(obj);
}
}
return intr;
}
}
}
6、C++-layer call sites of defaultServiceManager
{
./frameworks/native/include/binder/BinderService.h:38:        sp<IServiceManager>
sm(defaultServiceManager());./frameworks/native/include/binder/IServiceManager.h:66:sp<IServiceManager> defaultServiceManager();./frameworks/native/include/binder/IServiceManager.h:71:    const sp<IServiceManager> sm = defaultServiceManager();./frameworks/native/services/connectivitymanager/ConnectivityManager.cpp:29:    const sp<IServiceManager> sm(defaultServiceManager());./frameworks/native/services/surfaceflinger/main_surfaceflinger.cpp:51:    sp<IServiceManager> sm(defaultServiceManager());./frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp:294:    sp<IBinder> window(defaultServiceManager()->getService(name));./frameworks/native/services/sensorservice/BatteryService.cpp:34:    const sp<IServiceManager> sm(defaultServiceManager());./frameworks/native/libs/binder/AppOpsManager.cpp:48:        sp<IBinder> binder = defaultServiceManager()->checkService(_appops);./frameworks/native/libs/binder/IServiceManager.cpp:34:sp<IServiceManager> defaultServiceManager()./frameworks/native/libs/binder/IServiceManager.cpp:106:        sp<IBinder> binder = defaultServiceManager()->checkService(_permission);./frameworks/native/cmds/service/service.cpp:71:    sp<IServiceManager> sm = defaultServiceManager();./frameworks/native/cmds/dumpsys/dumpsys.cpp:32:    sp<IServiceManager> sm = defaultServiceManager();./frameworks/native/cmds/atrace/atrace.cpp:343:    sp<IServiceManager> sm = defaultServiceManager();./frameworks/av/camera/CameraBase.cpp:73:        sp<IServiceManager> sm = defaultServiceManager();./frameworks/av/include/common_time/ICommonClock.h:93:        sp<IBinder> binder = defaultServiceManager()->checkService(./frameworks/av/include/common_time/ICommonTimeConfig.h:58:        sp<IBinder> binder = defaultServiceManager()->checkService(./frameworks/av/drm/libdrmframework/DrmManagerClientImpl.cpp:50:        sp<IServiceManager> sm = defaultServiceManager();./frameworks/av/drm/drmserver/main_drmserver.cpp:32:    sp<IServiceManager> sm = 
defaultServiceManager();./frameworks/av/drm/drmserver/DrmManagerService.cpp:42:    defaultServiceManager()->addService(String16("drm.drmManager"), new DrmManagerService());./frameworks/av/services/audioflinger/SchedulingPolicyService.cpp:40:            sp<IBinder> binder = defaultServiceManager()->checkService(_scheduling_policy);./frameworks/av/services/audioflinger/AudioFlinger.cpp:380:            sp<IBinder> binder = defaultServiceManager()->getService(String16("media.log"));./frameworks/av/services/audioflinger/AudioFlinger.cpp:411:    sp<IBinder> binder = defaultServiceManager()->getService(String16("media.log"));./frameworks/av/services/audioflinger/AudioFlinger.cpp:427:    sp<IBinder> binder = defaultServiceManager()->getService(String16("media.log"));./frameworks/av/services/audioflinger/Threads.cpp:557:            defaultServiceManager()->checkService(String16("power"));./frameworks/av/libvideoeditor/lvpp/PreviewPlayer.cpp:48:        defaultServiceManager()->getService(String16("media.player"));./frameworks/av/cmds/stagefright/stream.cpp:339:    sp<IServiceManager> sm = defaultServiceManager();./frameworks/av/cmds/stagefright/stagefright.cpp:814:        sp<IServiceManager> sm = defaultServiceManager();./frameworks/av/cmds/stagefright/stagefright.cpp:876:        sp<IServiceManager> sm = defaultServiceManager();./frameworks/av/cmds/stagefright/stagefright.cpp:890:        sp<IServiceManager> sm = defaultServiceManager();./frameworks/av/media/libmedia/IMediaDeathNotifier.cpp:40:        sp<IServiceManager> sm = defaultServiceManager();./frameworks/av/media/libmedia/mediametadataretriever.cpp:39:        sp<IServiceManager> sm = defaultServiceManager();./frameworks/av/media/libmedia/AudioSystem.cpp:54:        sp<IServiceManager> sm = defaultServiceManager();./frameworks/av/media/libmedia/AudioSystem.cpp:81:    if (defaultServiceManager()->checkService(String16("media.audio_flinger")) != 0)./frameworks/av/media/libmedia/AudioSystem.cpp:527:        
sp<IServiceManager> sm = defaultServiceManager();./frameworks/av/media/mediaserver/main_mediaserver.cpp:100:            sp<IServiceManager> sm = defaultServiceManager();./frameworks/av/media/mediaserver/main_mediaserver.cpp:125:        sp<IServiceManager> sm = defaultServiceManager();./frameworks/av/media/libstagefright/omx/tests/OMXHarness.cpp:57:    sp<IServiceManager> sm = defaultServiceManager();./frameworks/av/media/libstagefright/AwesomePlayer.cpp:180:        defaultServiceManager()->getService(String16("media.player"));./frameworks/av/media/libstagefright/wifi-display/source/WifiDisplaySource.cpp:1692:    sp<IServiceManager> sm = defaultServiceManager();./frameworks/av/media/libstagefright/OMXClient.cpp:376:    sp<IServiceManager> sm = defaultServiceManager();./frameworks/av/media/libstagefright/TimedEventQueue.cpp:323:                defaultServiceManager()->checkService(String16("power"));./frameworks/av/media/libmediaplayerservice/StagefrightRecorder.cpp:60:        defaultServiceManager()->getService(String16("media.player"));./frameworks/av/media/libmediaplayerservice/ActivityManager.cpp:35:    sp<IServiceManager> sm = defaultServiceManager();./frameworks/av/media/libmediaplayerservice/MediaPlayerService.cpp:205:    defaultServiceManager()->addService(./frameworks/base/native/android/storage_manager.cpp:88:        sp<IServiceManager> sm = defaultServiceManager();./frameworks/base/core/jni/android_media_RemoteDisplay.cpp:140:    sp<IServiceManager> sm = defaultServiceManager();./frameworks/base/services/common_time/common_time_config_service.cpp:30:    defaultServiceManager()->addService(ICommonTimeConfig::kServiceName, ctcs);./frameworks/base/services/common_time/common_clock_service.cpp:32:    defaultServiceManager()->addService(ICommonClock::kServiceName, tcc);./frameworks/base/media/jni/android_media_MediaPlayer.cpp:726:    sp<IBinder> binder = 
defaultServiceManager()->getService(String16("media.player"));
./frameworks/base/media/jni/android_media_MediaDrm.cpp:294:    sp<IServiceManager> sm = defaultServiceManager();
./frameworks/base/media/jni/android_media_MediaCrypto.cpp:63:    sp<IServiceManager> sm = defaultServiceManager();
}
}
四、A C++-layer Binder communication example: the FregService service
{
0、 The five steps of one IPC round trip between a client and a server process
{
1、 The client packages the communication data into a Parcel object.
2、 The client sends the BC_TRANSACTION command protocol to the Binder driver.
From the protocol content, the driver locates the server process, then sends BR_TRANSACTION_COMPLETE to the client to indicate that the request has been accepted.
After the client receives and handles BR_TRANSACTION_COMPLETE, it re-enters the Binder driver to wait for the target server process to return the IPC result.
3、 At the same time as it sends BR_TRANSACTION_COMPLETE to the client, the driver sends the BR_TRANSACTION return protocol to the server process, asking it to handle the communication request.
4、 After the server receives and handles BR_TRANSACTION, it sends the BC_REPLY command protocol to the driver.
From the protocol content, the driver locates the target client process, then sends BR_TRANSACTION_COMPLETE to the server to indicate that its IPC result has been received.
After the server receives and handles BR_TRANSACTION_COMPLETE, it enters the Binder driver to wait for the next communication request.
5、 At the same time as it sends BR_TRANSACTION_COMPLETE to the server, the driver sends the BR_REPLY return protocol to the client, delivering the result of this round trip.
}
1、A server/client communication example built on the framework-layer Binder interfaces
Basic structure:
server process: implements a service component and provides services to clients
Client process
Module layout:
Common, implementing the following interfaces:
the HW access-service interface IFregService
the binder local-object class BnFregService
the binder proxy-object class BpFregService
server implementation: the server process hosts a service component, FregService
client implementation: the client process accesses the services provided by the FregService component running in the server process through BpFregService
2、 Implementation of IFregService | BnFregService | BpFregService
{
//--------------------------------- IFregService.h
//
//include <utils/RefBase.h>
//include <binder/IInterface.h>
//include <binder/Parcel.h>
//define FREG_SERVICE "shy.luo.FregService"
using namespace android;
class IFregService: public IInterface
{
public:
//declares the IFregService constructor, destructor, and the IBinder-to-IFregService conversion interface
DECLARE_META_INTERFACE(/*INTERFACE*/FregService);
{
//IInterface.h
static const android::String16 descriptor;
static android::sp<IFregService> asInterface(const android::sp<android::IBinder>& obj);
virtual const android::String16& getInterfaceDescriptor() const;
IFregService();
virtual ~IFregService();                                            }virtual int32_t getVal() = 0;virtual void setVal(int32_t val) = 0;};class BnFregService: public BnInterface<IFregService>{public:virtual status_t onTransact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags = 0);};//--------------------------------- IFregService.cpp////define LOG_TAG "IFregService"//include <utils/Log.h>//include "IFregService.h"using namespace android;//定义进程间通信编号enum {GET_VAL = IBinder::FIRST_CALL_TRANSACTION,SET_VAL};//-----------------------定义Binder代理类,继承于 BpInterface,实现了 IFregService 接口class BpFregService: public BpInterface<IFregService>{public:BpFregService(const sp<IBinder>& impl) : BpInterface<IFregService>(impl){}public://将要传递的数据封装在 Parcel data中;//调用父类 BpRefBase 之成员函数 remote(),获得 BpBinder 代理对象//调用 BpBinder.transact 来请求运行在server进程中的一个binder本地对象执行 GET_VAL 操作int32_t getVal(){Parcel data;data.writeInterfaceToken(IFregService::getInterfaceDescriptor());Parcel reply;remote()->transact(GET_VAL, data, &reply);int32_t val = reply.readInt32();return val;}void setVal(int32_t val){Parcel data;data.writeInterfaceToken(IFregService::getInterfaceDescriptor());data.writeInt32(val);Parcel reply;remote()->transact(SET_VAL, data, &reply);}};//-----------------------实现 IFregService IMPLEMENT_META_INTERFACE(FregService, "shy.luo.IFregService");{//INTERFACE, NAME//设定静态成员const android::String16 IFregService::descriptor("shy.luo.IFregService");    //实现成员函数const android::String16& IFregService::getInterfaceDescriptor() const {              return IFregService::descriptor;                                }                                  //将IBinder对象转换成 IFregService 接口//如果 obj 指向一个 BpFregService 对象,则调用其成员 queryLocalInterface //返回一个 IFregService 接口//否则如果指向 BpBinder 对象,则使用此 BpBinder 封装一个 BpFregServiceandroid::sp<IFregService> IFregService::asInterface(const android::sp<android::IBinder>& obj)                   {                                                                   
android::sp<IFregService> intr;                                 if (obj != NULL) {                                              intr = static_cast<IFregService*>(obj->queryLocalInterface(IFregService::descriptor).get());               if (intr == NULL) {                                         intr = new BpFregService(obj);                          }                                                           }                                                               return intr;                                                    }                                                                   IFregService::IFregService() { }                                    IFregService::~IFregService() { }                   }//-----------------------实现 BnFregService //BnFregService::onTransact //负责将 GET_VAL | SET_VAL 通信请求分发给其子类的成员函数 setVal|getVal 来处理status_t BnFregService::onTransact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags){switch(code){case GET_VAL:{CHECK_INTERFACE(IFregService, data, reply);int32_t val = getVal();reply->writeInt32(val);return NO_ERROR;}case SET_VAL:{CHECK_INTERFACE(IFregService, data, reply);int32_t val = data.readInt32();setVal(val);return NO_ERROR;}default:{return BBinder::onTransact(code, data, reply, flags);}}}}3、 FregService 的实现{//--------------------------------- FregServer.cpp////define LOG_TAG "FregServer"//include <stdlib.h>//include <fcntl.h>//include <utils/Log.h>//include <binder/IServiceManager.h>//include <binder/IPCThreadState.h>//include "../common/IFregService.h"//define FREG_DEVICE_NAME "/dev/freg"//-----------------------实现本地服务 FregService 组件//继承了 BnFregService//实现  IFregService 接口class FregService : public BnFregService{public:FregService(){fd = open(FREG_DEVICE_NAME, O_RDWR);if(fd == -1) {LOGE("Failed to open device %s.\n", FREG_DEVICE_NAME);}}virtual ~FregService(){if(fd != -1) {close(fd);}}public:static void instantiate(){defaultServiceManager()->addService(String16(FREG_SERVICE), new 
FregService());
}
int32_t getVal()
{
int32_t val = 0;
if(fd != -1) {
read(fd, &val, sizeof(val));
}
return val;
}
void setVal(int32_t val)
{
if(fd != -1) {
write(fd, &val, sizeof(val));
}
}
private:
int fd;
};
//-----------------------Implementation of the server process entry point
//1、call the FregService static function instantiate: create a FregService component and add it to SM
//2、start the Binder thread pool
//3、call joinThreadPool on the main thread's IPCThreadState object, adding the main thread to the process's Binder thread pool.
int main(int argc, char** argv)
{
FregService::instantiate();
ProcessState::self()->startThreadPool();//2、start the Binder thread pool
IPCThreadState::self()->joinThreadPool();//3、join the main thread to the pool
return 0;
}
}
4、 Implementation of the client
{
//--------------------------------- FregClient.cpp
//
//define LOG_TAG "FregClient"
//include <utils/Log.h>
//include <binder/IServiceManager.h>
//include "../common/IFregService.h"
int main()
{
//1、obtain the SM proxy object, and through it obtain a BpBinder proxy object for the service component named FREG_SERVICE
sp<IBinder> binder = defaultServiceManager()->getService(String16(FREG_SERVICE));
if(binder == NULL) {
LOGE("Failed to get freg service: %s.\n", FREG_SERVICE);
return -1;
}
//2、wrap the obtained BpBinder proxy object into a BpFregService proxy object, and keep its IFregService interface in service
sp<IFregService> service = IFregService::asInterface(binder);
if(service == NULL) {
LOGE("Failed to get freg service interface.\n");
return -2;
}
//3、access and read data through service
printf("Read original value from FregService:\n");
int32_t val = service->getVal();
printf(" %d.\n", val);
printf("Add value 1 to FregService.\n");
val += 1;
service->setVal(val);
printf("Read the value from FregService again:\n");
val = service->getVal();
printf(" %d.\n", val);
return 0;
}
}
5、 FregService startup, registering the component through ServiceManager: defaultServiceManager()->addService()
{
//1、 The FregService server process entry function
//1)、calls the FregService static function instantiate: creates a FregService component and adds it to SM
//calls defaultServiceManager() to obtain the SM proxy object, i.e. the BpServiceManager object
//2)、starts the Binder thread pool
//3)、calls joinThreadPool on the main thread's IPCThreadState object, adding the main thread to the process's Binder thread pool.
FregServer.cpp: main(int argc, char** argv)
{
FregService::instantiate();
{
//FregService.cls
defaultServiceManager()->addService(String16(FREG_SERVICE), new
FregService());}ProcessState::self()->startThreadPool();//2、启动Binder线程池IPCThreadState::self()->joinThreadPool();//return 0;}//1.1、通过SM代理对象注册 FregService 到 SM中BpServiceManager::addService(const String16& name, const sp<IBinder>& service,bool allowIsolated)    {//1、封装进程间通信数据到 Parcel data 对象中//1)、通信请求头,调用 writeInterfaceToken 写入//第一部分://整数值,描述 Strict Mode Policy,如果线程在运行过程中违反了这些策略//则系统会发出警告。比如禁止在UI线程中执行磁盘读写操作。//首先要获取 server线程的策略 //然后加上 STRICT_MODE_PENALTY_GATHER,表示即使server线程运行过程中违反了预先设定的//policy,系统也不会发出警告。而是将警告收起来,以后统一发给client线程来处理。//第二部分://字符串,描述所请求服务的类描述符,比如 com.android.xxxService//2)、其他信息//service名称FREG_SERVICE//服务对象new FregService()        Parcel data, reply;        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());{//Parcel.cppwriteInt32(IPCThreadState::self()->getStrictModePolicy() |STRICT_MODE_PENALTY_GATHER);// currently the interface identification token is just its name as a stringreturn writeString16(interface);}        data.writeString16(name);//3)、调用 writeStrongBinder 将待注册的service对象封装成 flat_binder_object 结构体//定义 flat_binder_object obj//设定 obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;//0X7F:待注册组件在处理进程通信请求时,其使用的server进程优先级不能<0x7f//后者:可以将 包含有FD的通信请求数据 传递给待注册组件//返回binder本地对象指针,即 FregService 对象//他们的继承关系:FregService->BnFregService->BnInterface->BBinder->IBinder//将此对象及其弱引用计数存储到 obj//obj.type = BINDER_TYPE_BINDER;//obj.binder = local->getWeakRefs();//obj.cookie = local;//调用 Parcel::finish_flatten_binder->Parcel::writeObject //将 obj 存储到 data->mData[]当中//将此 obj 在 data->mData[]当中 对应的index,存储到 data->mObjects//若 obj.type 类型为 BINDER_TYPE_FD,则设定 data->mHasFds = data->mFdsKnown = true;//调用 finishWrite 调整 data->mData[]的下一个index值,即为 data->mDataPos        data.writeStrongBinder(service);{//native/libs/binder/Parcel.cpp//const sp<IBinder>& valreturn flatten_binder(ProcessState::self(), val, this);{//const sp<ProcessState>& proc, const sp<IBinder>& binder, Parcel* outflat_binder_object obj;obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;//返回binder本地对象指针,即 
FregService 对象//他们的继承关系:FregService->BnFregService->BnInterface->BBinder->IBinder//将此对象及其弱引用计数存储到 obj//obj.type = BINDER_TYPE_BINDER;//obj.binder = local->getWeakRefs();//obj.cookie = local;//调用 Parcel::finish_flatten_binder->Parcel::writeObject //将 obj 存储到 data->mData[]当中//将此 obj 在 data->mData[]当中 对应的index,存储到 data->mObjects//若 obj.type 类型为 BINDER_TYPE_FD,则设定 data->mHasFds = data->mFdsKnown = true;//调用 finishWrite 调整 data->mData[]的下一个index值,即为 data->mDataPosif (binder != NULL) {IBinder *local = binder->localBinder();if (!local) {BpBinder *proxy = binder->remoteBinder();if (proxy == NULL) {ALOGE("null proxy");}const int32_t handle = proxy ? proxy->handle() : 0;obj.type = BINDER_TYPE_HANDLE;obj.handle = handle;obj.cookie = NULL;} else {obj.type = BINDER_TYPE_BINDER;obj.binder = local->getWeakRefs();obj.cookie = local;}} else {obj.type = BINDER_TYPE_BINDER;obj.binder = NULL;obj.cookie = NULL;}return finish_flatten_binder(binder, obj, out);{//const sp<IBinder>& binder, const flat_binder_object& flat, Parcel* outreturn out->writeObject(flat, false);{//const flat_binder_object& val, bool nullMetaDataconst bool enoughData = (mDataPos+sizeof(val)) <= mDataCapacity;const bool enoughObjects = mObjectsSize < mObjectsCapacity;if (enoughData && enoughObjects) {restart_write:*reinterpret_cast<flat_binder_object*>(mData+mDataPos) = val;// Need to write meta-data?if (nullMetaData || val.binder != NULL) {mObjects[mObjectsSize] = mDataPos;acquire_object(ProcessState::self(), val, this);mObjectsSize++;}// remember if it's a file descriptorif (val.type == BINDER_TYPE_FD) {if (!mAllowFds) {return FDS_NOT_ALLOWED;}mHasFds = mFdsKnown = true;}return finishWrite(sizeof(flat_binder_object));}if (!enoughData) {const status_t err = growData(sizeof(val));if (err != NO_ERROR) return err;}if (!enoughObjects) {size_t newSize = ((mObjectsSize+2)*3)/2;size_t* objects = (size_t*)realloc(mObjects, newSize*sizeof(size_t));if (objects == NULL) return NO_MEMORY;mObjects = objects;mObjectsCapacity = 
newSize;}goto restart_write;}}}}        data.writeInt32(allowIsolated ? 1 : 0);        //2、调用 BpRefBase 内部成员 BpBinder 对象的 transact 接口,发送 ADD_SERVICE_TRANSACTION 请求//-//native/libs/binder/BpBinder.cpp//调用当前线程的 IPCThreadState::transact,向binder驱动程序发送指令//参数 BpBinder::mHandle,是描述该binder代理对象的句柄值,当前代理对象是 BpServiceManager//句柄为0//1)、检查 data 中存储的进程间通信数据是否正确//2)、flags 置位 TF_ACCEPT_FDS 表示允许server进程在返回结果中携带 FD//3)、调用 IPCThreadState.writeTransactionData 将 data 写入到 binder_transaction_data 结构体//4)、调用 IPCThreadState.waitForResponse 向驱动发送 BC_TRANSACTION 命令协议status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);//注意最后一个参数默认为0{//native/libs/binder/BpBinder.cpp//uint32_t code, const Parcel& data, Parcel* reply, uint32_t flagsif (mAlive) {status_t status = IPCThreadState::self()->transact(mHandle, code, data, reply, flags);{//IPCThreadState.cpp//IPCThreadState.cpp/*int32_t handle,uint32_t code, const Parcel& data,Parcel*reply, uint32_t flags*///检查 data 中存储的进程间通信数据是否正确status_t err = data.errorCheck();//flags 置位 TF_ACCEPT_FDS 表示允许server进程在返回结果中携带 FDflags |= TF_ACCEPT_FDS;IF_LOG_TRANSACTIONS() {TextOutput::Bundle _b(alog);alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "<< handle << " / code " << TypeCode(code) << ": "<< indent << data << dedent << endl;}//1.1.2.3 调用 writeTransactionData 将 data 写入到 binder_transaction_data 结构体if (err == NO_ERROR) {LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),(flags & TF_ONE_WAY) == 0 ? 
"READ REPLY" : "ONE WAY");err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);}if (err != NO_ERROR) {if (reply) reply->setError(err);return (mLastError = err);}//1.1.2.4 调用 IPCThreadState.waitForResponse 向驱动发送 BC_TRANSACTION 命令协议if ((flags & TF_ONE_WAY) == 0) {if (reply) {err = waitForResponse(reply);} else {Parcel fakeReply;err = waitForResponse(&fakeReply);}IF_LOG_TRANSACTIONS() {TextOutput::Bundle _b(alog);alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "<< handle << ": ";if (reply) alog << indent << *reply << dedent << endl;else alog << "(none requested)" << endl;}} else {err = waitForResponse(NULL, NULL);}return err;}if (status == DEAD_OBJECT) mAlive = 0;return status;}return DEAD_OBJECT;}        return err == NO_ERROR ? reply.readExceptionCode() : err;    }//1.1.2.3 调用 writeTransactionData 将 data 写入到 binder_transaction_data 结构体//1、存储data数据到 binder_transaction_data tr//.target.handle = 0;//.code = ADD_SERVICE_TRANSACTION;//.flags = TF_ACCEPT_FDS;//.cookie = 0;//.sender_pid = 0;//.sender_euid = 0;//将 data内部的数据缓冲区、偏移数组指针存储到 tr//.data_size = data.mDataSize//.data.ptr.buffer = data.mData;//{ FLAT_BINDER_FLAG_ACCEPTS_FDS,"android.os.IServiceManager","shy.luo.FregService",flat_binder_object}//.offsets_size = data.mObjectsSize*sizeof(size_t);//.data.ptr.offsets = data.mObjects//2、存储 BC_TRANSACTION 到 IPCThreadState.mOut//3、存储 binder_transaction_data 数据到 IPCThreadState.mOutIPCThreadState::writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);{/*int32_t cmd, uint32_t binderFlags,int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer */binder_transaction_data tr;tr.target.handle = handle;tr.code = code;tr.flags = binderFlags;tr.cookie = 0;tr.sender_pid = 0;tr.sender_euid = 0;const status_t err = data.errorCheck();if (err == NO_ERROR) {tr.data_size = data.ipcDataSize();tr.data.ptr.buffer = data.ipcData();tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t);tr.data.ptr.offsets = data.ipcObjects();} 
else if (statusBuffer) {tr.flags |= TF_STATUS_CODE;*statusBuffer = err;tr.data_size = sizeof(status_t);tr.data.ptr.buffer = statusBuffer;tr.offsets_size = 0;tr.data.ptr.offsets = NULL;} else {return (mLastError = err);}mOut.writeInt32(cmd);mOut.write(&tr, sizeof(tr));return NO_ERROR;}//1.1.2.4 调用 IPCThreadState.waitForResponse 向驱动发送 BC_TRANSACTION 命令协议//1、 循环调用 talkWithDriver 来与binder交互,发送命令、接收通信结果IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult){int32_t cmd;int32_t err;while (1) {if ((err=talkWithDriver()) < NO_ERROR) break;err = mIn.errorCheck();if (err < NO_ERROR) break;if (mIn.dataAvail() == 0) continue;cmd = mIn.readInt32();IF_LOG_COMMANDS() {alog << "Processing waitForResponse Command: "<< getReturnString(cmd) << endl;}switch (cmd) {case BR_TRANSACTION_COMPLETE:if (!reply && !acquireResult) goto finish;break;case BR_DEAD_REPLY:err = DEAD_OBJECT;goto finish;case BR_FAILED_REPLY:err = FAILED_TRANSACTION;goto finish;case BR_ACQUIRE_RESULT:{ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");const int32_t result = mIn.readInt32();if (!acquireResult) continue;*acquireResult = result ? 
NO_ERROR : INVALID_OPERATION;}goto finish;case BR_REPLY:{binder_transaction_data tr;err = mIn.read(&tr, sizeof(tr));ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");if (err != NO_ERROR) goto finish;if (reply) {if ((tr.flags & TF_STATUS_CODE) == 0) {reply->ipcSetDataReference(reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),tr.data_size,reinterpret_cast<const size_t*>(tr.data.ptr.offsets),tr.offsets_size/sizeof(size_t),freeBuffer, this);} else {err = *static_cast<const status_t*>(tr.data.ptr.buffer);freeBuffer(NULL,reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),tr.data_size,reinterpret_cast<const size_t*>(tr.data.ptr.offsets),tr.offsets_size/sizeof(size_t), this);}} else {freeBuffer(NULL,reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),tr.data_size,reinterpret_cast<const size_t*>(tr.data.ptr.offsets),tr.offsets_size/sizeof(size_t), this);continue;}}goto finish;default:err = executeCommand(cmd);if (err != NO_ERROR) goto finish;break;}}finish:if (err != NO_ERROR) {if (acquireResult) *acquireResult = err;if (reply) reply->setError(err);mLastError = err;}return err;}//1.1.2.4.1、 waitForResponse 循环调用 talkWithDriver 来与binder交互////检查协议缓冲区 mIn 中的返回协议是否已处理完成,是则 needRead = true//如果 mIn 中的返回协议是否已处理完成,且调用者不只是希望接收驱动发送的返回协议//则设定为 outAvail=mOut.dataSize();否则设置 outAvail =0; 保存到 bwr.write_size//设定 binder_write_read 输出缓冲区为 mOut.data()////如果 调用者希望接收返回协议,且 mIn 中的返回协议是否已处理完成,//则设置 binder_write_read 输入缓冲区为 mIn.data();////如果输\入出缓冲区size都为0,则直接返回////调用 BINDER_WRITE_READ ioctl 与驱动交互//本次调用中 binder_write_read 两个缓冲区均为非空,则//驱动首先调用 binder_thread_write 处理 BC_TRANSACTION//驱动然后调用 binder_thread_read 读取驱动返回的协议////从驱动返回后//首先将已经处理的命令协议从 mOut 移除//然后将从驱动读取的返回协议,存储到 mIn 中//IPCThreadState::talkWithDriver(bool doReceive/*默认为true*/){if (mProcess->mDriverFD <= 0) {return -EBADF;}binder_write_read bwr;//检查协议缓冲区 mIn 中的返回协议是否已处理完成,是则 needRead = true// Is the read buffer empty?const bool needRead = mIn.dataPosition() >= mIn.dataSize();// We don't want to write anything if we are 
still reading// from data left in the input buffer and the caller// has requested to read the next data.//如果 mIn 中的返回协议是否已处理完成,且调用者不只是希望接收驱动发送的返回协议//则设定为 outAvail=mOut.dataSize();否则设置 outAvail =0; 保存到 bwr.write_size//设定 binder_write_read 输出缓冲区为 mOut.data()const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;bwr.write_size = outAvail;bwr.write_buffer = (long unsigned int)mOut.data();// This is what we'll read.//如果 调用者希望接收返回协议,且 mIn 中的返回协议已处理完成,//则设置 binder_write_read 输入缓冲区为 mIn.data();if (doReceive && needRead) {bwr.read_size = mIn.dataCapacity();bwr.read_buffer = (long unsigned int)mIn.data();} else {bwr.read_size = 0;bwr.read_buffer = 0;}// Return immediately if there is nothing to do.//如果输\入出缓冲区size都为0,则直接返回if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;bwr.write_consumed = 0;bwr.read_consumed = 0;status_t err;//调用 BINDER_WRITE_READ ioctl 与驱动交互//本次调用中 binder_write_read 两个缓冲区均为非空,则//驱动首先调用 binder_thread_write 处理 BC_TRANSACTION//驱动然后调用 binder_thread_read 读取驱动返回的协议//do {//if defined(HAVE_ANDROID_OS)if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)err = NO_ERROR;elseerr = -errno;//elseerr = INVALID_OPERATION;//endifif (mProcess->mDriverFD <= 0) {err = -EBADF;}} while (err == -EINTR);//从驱动返回后//首先将已经处理的命令协议从 mOut 移除//然后将从驱动读取的返回协议,存储到 mIn 中if (err >= NO_ERROR) {if (bwr.write_consumed > 0) {if (bwr.write_consumed < (ssize_t)mOut.dataSize())mOut.remove(0, bwr.write_consumed);elsemOut.setDataSize(0);}if (bwr.read_consumed > 0) {mIn.setDataSize(bwr.read_consumed);mIn.setDataPosition(0);}IF_LOG_COMMANDS() {TextOutput::Bundle _b(alog);alog << "Remaining data size: " << mOut.dataSize() << endl;alog << "Received commands from driver: " << indent;const void* cmds = mIn.data();const void* end = mIn.data() + mIn.dataSize();alog << HexDump(cmds, mIn.dataSize()) << endl;while (cmds < end) cmds = printReturnCommand(alog, cmds);alog << dedent;}return NO_ERROR;}return err;}ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) 
{//binder_ioctlcase BINDER_WRITE_READ: {binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size,&bwr.write_consumed);{/*struct binder_proc *proc, struct binder_thread *thread,void __user *buffer, int size, signed long *consumed */uint32_t cmd;void __user *ptr = buffer + *consumed;void __user *end = buffer + size;while (ptr < end && thread->return_error == BR_OK) {if (get_user(cmd, (uint32_t __user *)ptr))return -EFAULT;ptr += sizeof(uint32_t);trace_binder_command(cmd);if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {binder_stats.bc[_IOC_NR(cmd)]++;proc->stats.bc[_IOC_NR(cmd)]++;thread->stats.bc[_IOC_NR(cmd)]++;}switch (cmd) {case BC_TRANSACTION:case BC_REPLY: {struct binder_transaction_data tr;if (copy_from_user(&tr, ptr, sizeof(tr)))return -EFAULT;ptr += sizeof(tr);binder_transaction(proc, thread, &tr, cmd == BC_REPLY);break;}}*consumed = ptr - buffer;}}binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);}}binder_transaction{/*struct binder_proc *proc,struct binder_thread *thread,struct binder_transaction_data *tr, int reply当前要处理的协议是否为 BC_REPLY */struct binder_transaction *t;struct binder_work *tcomplete;size_t *offp, *off_end;struct binder_proc *target_proc;struct binder_thread *target_thread = NULL;struct binder_node *target_node = NULL;struct list_head *target_list;wait_queue_head_t *target_wait;struct binder_transaction *in_reply_to = NULL;struct binder_transaction_log_entry *e;uint32_t return_error;e = binder_transaction_log_add(&binder_transaction_log);e->call_type = reply ? 
2 : !!(tr->flags & TF_ONE_WAY);e->from_proc = proc->pid;e->from_thread = thread->pid;e->target_handle = tr->target.handle;e->data_size = tr->data_size;e->offsets_size = tr->offsets_size;//1、处理 BC_REPLY 协议if (reply) {in_reply_to = thread->transaction_stack;if (in_reply_to == NULL) {binder_user_error("binder: %d:%d got reply transaction "  "with no transaction stack\n",  proc->pid, thread->pid);return_error = BR_FAILED_REPLY;goto err_empty_call_stack;}binder_set_nice(in_reply_to->saved_priority);if (in_reply_to->to_thread != thread) {binder_user_error("binder: %d:%d got reply transaction ""with bad transaction stack,"" transaction %d has target %d:%d\n",proc->pid, thread->pid, in_reply_to->debug_id,in_reply_to->to_proc ?in_reply_to->to_proc->pid : 0,in_reply_to->to_thread ?in_reply_to->to_thread->pid : 0);return_error = BR_FAILED_REPLY;in_reply_to = NULL;goto err_bad_call_stack;}thread->transaction_stack = in_reply_to->to_parent;target_thread = in_reply_to->from;if (target_thread == NULL) {return_error = BR_DEAD_REPLY;goto err_dead_binder;}if (target_thread->transaction_stack != in_reply_to) {binder_user_error("binder: %d:%d got reply transaction ""with bad target transaction stack %d, ""expected %d\n",proc->pid, thread->pid,target_thread->transaction_stack ?target_thread->transaction_stack->debug_id : 0,in_reply_to->debug_id);return_error = BR_FAILED_REPLY;in_reply_to = NULL;target_thread = NULL;goto err_dead_binder;}target_proc = target_thread->proc;} //2、处理 BC_TRANSACTIONelse {//1)、根据句柄值获取到 目标binder实体对象 target_node//获得与句柄值 tr->target.handle 对应的binder引用对象 ref ,然后设定 target_node = ref->node;if (tr->target.handle) {struct binder_ref *ref;ref = binder_get_ref(proc, tr->target.handle);if (ref == NULL) {binder_user_error("binder: %d:%d got ""transaction to invalid handle\n",proc->pid, thread->pid);return_error = BR_FAILED_REPLY;goto err_invalid_target_handle;}target_node = ref->node;}//目标 BpBinder 对象对应的句柄为 0,则 target_node= binder_context_mgr_nodeelse {target_node = 
binder_context_mgr_node;if (target_node == NULL) {return_error = BR_DEAD_REPLY;goto err_no_context_mgr_node;}}e->to_node = target_node->debug_id;//2)、根据 目标binder实体对象 target_node,获取到目标进程 target_proc=target_node->proc;target_proc = target_node->proc;if (target_proc == NULL) {return_error = BR_DEAD_REPLY;goto err_dead_binder;}if (security_binder_transaction(proc->tsk, target_proc->tsk) < 0) {return_error = BR_FAILED_REPLY;goto err_invalid_target_handle;}//3)、如果当前正在处理的进程间通信请求是同步的[r->flags & TF_ONE_WAY==0]//则在此启用优化方案,寻找目标进程中的第二类型 空闲binder线程,来处理 BC_TRANSACTION //返回协议,保存到 target_thread//第一类空闲线程:无事可做的线程//第二类空闲线程:正在处理某个事务的过程中,需要等待其他线程来完成另外一个事务if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {struct binder_transaction *tmp;tmp = thread->transaction_stack;if (tmp->to_thread != thread) {binder_user_error("binder: %d:%d got new ""transaction with bad transaction stack"", transaction %d has target %d:%d\n",proc->pid, thread->pid, tmp->debug_id,tmp->to_proc ? tmp->to_proc->pid : 0,tmp->to_thread ?tmp->to_thread->pid : 0);return_error = BR_FAILED_REPLY;goto err_bad_call_stack;}while (tmp) {if (tmp->from && tmp->from->proc == target_proc)target_thread = tmp->from;tmp = tmp->from_parent;}}}//3、如果找到了目标进程中的最优空闲binder线程 target_thread,则设定相应队列//target_list = &target_thread->todo;//target_wait = &target_thread->wait;//以便于 将一个与 BC_TRANSACTION 返回协议相关的工作项,加入到目标todo队列//通过 target_wait 将目标进程、或者线程唤醒来处理这个工作项//if (target_thread) {e->to_thread = target_thread->pid;target_list = &target_thread->todo;target_wait = &target_thread->wait;} else {target_list = &target_proc->todo;target_wait = &target_proc->wait;}e->to_proc = target_proc->pid;/* TODO: reuse incoming transaction for reply */t = kzalloc(sizeof(*t), GFP_KERNEL);if (t == NULL) {return_error = BR_FAILED_REPLY;goto err_alloc_t_failed;}binder_stats_created(BINDER_STAT_TRANSACTION);tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);if (tcomplete == NULL) {return_error = BR_FAILED_REPLY;goto 
err_alloc_tcomplete_failed;}binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);t->debug_id = ++binder_last_id;e->debug_id = t->debug_id;if (!reply && !(tr->flags & TF_ONE_WAY))t->from = thread;elset->from = NULL;t->sender_euid = proc->tsk->cred->euid;t->to_proc = target_proc;t->to_thread = target_thread;t->code = tr->code;t->flags = tr->flags;t->priority = task_nice(current);trace_binder_transaction(reply, t, target_node);t->buffer = binder_alloc_buf(target_proc, tr->data_size,tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));if (t->buffer == NULL) {return_error = BR_FAILED_REPLY;goto err_binder_alloc_buf_failed;}t->buffer->allow_user_free = 0;t->buffer->debug_id = t->debug_id;t->buffer->transaction = t;t->buffer->target_node = target_node;trace_binder_transaction_alloc_buf(t->buffer);if (target_node)binder_inc_node(target_node, 1, 0, NULL);offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) {binder_user_error("binder: %d:%d got transaction with invalid ""data ptr\n", proc->pid, thread->pid);return_error = BR_FAILED_REPLY;goto err_copy_data_failed;}if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) {binder_user_error("binder: %d:%d got transaction with invalid ""offsets ptr\n", proc->pid, thread->pid);return_error = BR_FAILED_REPLY;goto err_copy_data_failed;}if (!IS_ALIGNED(tr->offsets_size, sizeof(size_t))) {binder_user_error("binder: %d:%d got transaction with ""invalid offsets size, %zd\n",proc->pid, thread->pid, tr->offsets_size);return_error = BR_FAILED_REPLY;goto err_bad_offset;}off_end = (void *)offp + tr->offsets_size;for (; offp < off_end; offp++) {struct flat_binder_object *fp;if (*offp > t->buffer->data_size - sizeof(*fp) ||t->buffer->data_size < sizeof(*fp) ||!IS_ALIGNED(*offp, sizeof(void *))) {binder_user_error("binder: %d:%d got transaction with ""invalid offset, %zd\n",proc->pid, thread->pid, *offp);return_error = BR_FAILED_REPLY;goto 
err_bad_offset;}fp = (struct flat_binder_object *)(t->buffer->data + *offp);switch (fp->type) {case BINDER_TYPE_BINDER:case BINDER_TYPE_WEAK_BINDER: {struct binder_ref *ref;struct binder_node *node = binder_get_node(proc, fp->binder);if (node == NULL) {node = binder_new_node(proc, fp->binder, fp->cookie);if (node == NULL) {return_error = BR_FAILED_REPLY;goto err_binder_new_node_failed;}node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);}if (fp->cookie != node->cookie) {binder_user_error("binder: %d:%d sending u%p ""node %d, cookie mismatch %p != %p\n",proc->pid, thread->pid,fp->binder, node->debug_id,fp->cookie, node->cookie);goto err_binder_get_ref_for_node_failed;}if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {return_error = BR_FAILED_REPLY;goto err_binder_get_ref_for_node_failed;}ref = binder_get_ref_for_node(target_proc, node);if (ref == NULL) {return_error = BR_FAILED_REPLY;goto err_binder_get_ref_for_node_failed;}if (fp->type == BINDER_TYPE_BINDER)fp->type = BINDER_TYPE_HANDLE;elsefp->type = BINDER_TYPE_WEAK_HANDLE;fp->handle = ref->desc;binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE,   &thread->todo);trace_binder_transaction_node_to_ref(t, node, ref);binder_debug(BINDER_DEBUG_TRANSACTION, "        node %d u%p -> ref %d desc %d\n", node->debug_id, node->ptr, ref->debug_id, ref->desc);} break;case BINDER_TYPE_HANDLE:case BINDER_TYPE_WEAK_HANDLE: {struct binder_ref *ref = binder_get_ref(proc, fp->handle);if (ref == NULL) {binder_user_error("binder: %d:%d got ""transaction with invalid ""handle, %ld\n", proc->pid,thread->pid, fp->handle);return_error = BR_FAILED_REPLY;goto err_binder_get_ref_failed;}if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {return_error = BR_FAILED_REPLY;goto err_binder_get_ref_failed;}if (ref->node->proc == target_proc) {if (fp->type == BINDER_TYPE_HANDLE)fp->type = BINDER_TYPE_BINDER;elsefp->type = 
BINDER_TYPE_WEAK_BINDER;fp->binder = ref->node->ptr;fp->cookie = ref->node->cookie;binder_inc_node(ref->node, fp->type == BINDER_TYPE_BINDER, 0, NULL);trace_binder_transaction_ref_to_node(t, ref);binder_debug(BINDER_DEBUG_TRANSACTION, "        ref %d desc %d -> node %d u%p\n", ref->debug_id, ref->desc, ref->node->debug_id, ref->node->ptr);} else {struct binder_ref *new_ref;new_ref = binder_get_ref_for_node(target_proc, ref->node);if (new_ref == NULL) {return_error = BR_FAILED_REPLY;goto err_binder_get_ref_for_node_failed;}fp->handle = new_ref->desc;binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL);trace_binder_transaction_ref_to_ref(t, ref,new_ref);binder_debug(BINDER_DEBUG_TRANSACTION, "        ref %d desc %d -> ref %d desc %d (node %d)\n", ref->debug_id, ref->desc, new_ref->debug_id, new_ref->desc, ref->node->debug_id);}} break;case BINDER_TYPE_FD: {int target_fd;struct file *file;if (reply) {if (!(in_reply_to->flags & TF_ACCEPT_FDS)) {binder_user_error("binder: %d:%d got reply with fd, %ld, but target does not allow fds\n",proc->pid, thread->pid, fp->handle);return_error = BR_FAILED_REPLY;goto err_fd_not_allowed;}} else if (!target_node->accept_fds) {binder_user_error("binder: %d:%d got transaction with fd, %ld, but target does not allow fds\n",proc->pid, thread->pid, fp->handle);return_error = BR_FAILED_REPLY;goto err_fd_not_allowed;}file = fget(fp->handle);if (file == NULL) {binder_user_error("binder: %d:%d got transaction with invalid fd, %ld\n",proc->pid, thread->pid, fp->handle);return_error = BR_FAILED_REPLY;goto err_fget_failed;}if (security_binder_transfer_file(proc->tsk, target_proc->tsk, file) < 0) {fput(file);return_error = BR_FAILED_REPLY;goto err_get_unused_fd_failed;}target_fd = task_get_unused_fd_flags(target_proc, O_CLOEXEC);if (target_fd < 0) {fput(file);return_error = BR_FAILED_REPLY;goto err_get_unused_fd_failed;}task_fd_install(target_proc, target_fd, file);trace_binder_transaction_fd(t, fp->handle, 
target_fd);binder_debug(BINDER_DEBUG_TRANSACTION, "        fd %ld -> %d\n", fp->handle, target_fd);/* TODO: fput? */fp->handle = target_fd;} break;default:binder_user_error("binder: %d:%d got transactio""n with invalid object type, %lx\n",proc->pid, thread->pid, fp->type);return_error = BR_FAILED_REPLY;goto err_bad_object_type;}}if (reply) {BUG_ON(t->buffer->async_transaction != 0);binder_pop_transaction(target_thread, in_reply_to);} else if (!(t->flags & TF_ONE_WAY)) {BUG_ON(t->buffer->async_transaction != 0);t->need_reply = 1;t->from_parent = thread->transaction_stack;thread->transaction_stack = t;} else {BUG_ON(target_node == NULL);BUG_ON(t->buffer->async_transaction != 1);if (target_node->has_async_transaction) {target_list = &target_node->async_todo;target_wait = NULL;} elsetarget_node->has_async_transaction = 1;}t->work.type = BINDER_WORK_TRANSACTION;list_add_tail(&t->work.entry, target_list);tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;list_add_tail(&tcomplete->entry, &thread->todo);if (target_wait)wake_up_interruptible(target_wait);return;err_get_unused_fd_failed:err_fget_failed:err_fd_not_allowed:err_binder_get_ref_for_node_failed:err_binder_get_ref_failed:err_binder_new_node_failed:err_bad_object_type:err_bad_offset:err_copy_data_failed:trace_binder_transaction_failed_buffer_release(t->buffer);binder_transaction_buffer_release(target_proc, t->buffer, offp);t->buffer->transaction = NULL;binder_free_buf(target_proc, t->buffer);err_binder_alloc_buf_failed:kfree(tcomplete);binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);err_alloc_tcomplete_failed:kfree(t);binder_stats_deleted(BINDER_STAT_TRANSACTION);err_alloc_t_failed:err_bad_call_stack:err_empty_call_stack:err_dead_binder:err_invalid_target_handle:err_no_context_mgr_node:binder_debug(BINDER_DEBUG_FAILED_TRANSACTION, "binder: %d:%d transaction failed %d, size %zd-%zd\n", proc->pid, thread->pid, return_error, tr->data_size, tr->offsets_size);{struct binder_transaction_log_entry *fe;fe = 
binder_transaction_log_add(&binder_transaction_log_failed);*fe = *e;}BUG_ON(thread->return_error != BR_OK);if (in_reply_to) {thread->return_error = BR_TRANSACTION_COMPLETE;binder_send_failed_reply(in_reply_to, return_error);} elsethread->return_error = return_error;}}}五、java层-ServiceManager-架构\SM代理对象获取 {1、 代理对象【 ServiceManagerProxy 】\服务对象【 BinderProxy 】 的继承关系{//ServiceManager java代理对象ServiceManagerProxy -----> IServiceManager -----> IInterface|||--IBinder mRemote; //此对象,实际上指向 BinderProxy||//此对象用来描述 java服务对象BinderProxy————> IBinder||--int mObject;//指向C++层的binder代理对象//java层服务代理对象,与 C++层binder代理对象之间的 对应关系{ServiceManagerProxy.mRemote//得到 BinderProxy 对象BinderProxy.mObject//关联有 C++层的binder代理对象}}2、 SM管理类架构:android.os.Binder.ServiceManager////存储有唯一的java代理对象 ServiceManagerProxy//使用 java本地对象 ServiceManagerNative,来实现服务//{1、 ServiceManager.cls 的继承关系ServiceManager||||包含|||---- static IServiceManager sServiceManager;//实际上指向 ServiceManagerProxy|||使用|继承实现ServiceManagerNative ----------------------> Binder ————> IBinder || ||----private int mObject;//指向 C++中binder本地对象 || | | |实现继承 |————————————> IServiceManager --------> IInterface //说明1)、 Binder相当于 C++ 中的 BBinder1)、 ServiceManagerNative相当于 BnServiceManager用来实现java层中 ServiceManager 服务2)、 ServiceManagerProxy相当于 BpServiceManager} 3、 JNI 层实现//-//base/core/jni/android_util_Binder.cpp{1、重要的数据结构static struct bindernative_offsets_t{jclass mClass;//指向 java层的 Binder 类jmethodID mExecTransact;//指向 java层的 Binder.execTransactjfieldID mObject;//指向 java层的 Binder.mObject} gBinderOffsets;初始化点:int_register_android_os_Binderstatic struct binderproxy_offsets_t{// Class state.jclass mClass;//指向 java层的 BinderProxy 类jmethodID mConstructor;//BinderProxy.构造函数jmethodID mSendDeathNotice;//BinderProxy.static.sendDeathNoticejfieldID mObject;//BinderProxy.mObjectjfieldID mSelf;//BinderProxy.mSelfjfieldID mOrgue;//BinderProxy.mOrgue} gBinderProxyOffsets;初始化点:int_register_android_os_BinderProxy}4、【 ServiceManagerProxy\BinderProxy | ServiceManagerNative 
】定义{//base/core/java/android/os/IServiceManager.javapublic interface IServiceManager extends IInterface{public IBinder getService(String name) throws RemoteException;public IBinder checkService(String name) throws RemoteException;public void addService(String name, IBinder service, boolean allowIsolated)throws RemoteException;public String[] listServices() throws RemoteException;public void setPermissionController(IPermissionController controller)throws RemoteException;static final String descriptor = "android.os.IServiceManager";int GET_SERVICE_TRANSACTION = IBinder.FIRST_CALL_TRANSACTION;int CHECK_SERVICE_TRANSACTION = IBinder.FIRST_CALL_TRANSACTION+1;int ADD_SERVICE_TRANSACTION = IBinder.FIRST_CALL_TRANSACTION+2;int LIST_SERVICES_TRANSACTION = IBinder.FIRST_CALL_TRANSACTION+3;int CHECK_SERVICES_TRANSACTION = IBinder.FIRST_CALL_TRANSACTION+4;int SET_PERMISSION_CONTROLLER_TRANSACTION = IBinder.FIRST_CALL_TRANSACTION+5;}//base/core/java/android/os/ServiceManagerNative.javaclass ServiceManagerProxy implements IServiceManager {private IBinder mRemote;}//base/core/java/android/os/Binder.java:public class Binder implements IBinder{}final class BinderProxy implements IBinder {final private WeakReference mSelf;private int mObject;private int mOrgue;}}5、 SM代理对象的创建过程//对象:ServiceManager.sServiceManager:ServiceManagerProxy//调用方法:ServiceManager.getIServiceManager{1、若首次获取 IServiceManager,则须////1.1、获取句柄=0的代理对象[SM-BpBinder]对应的 java 服务代理对象[JAVA-SM-BinderProxy]//call:JNI-BinderInternal.getContextObject()////1.1.1 获取 SM-BpBinder//call:val=ProcessState::self()->getContextObject(NULL);////1.1.2、创建[JAVA-SM-BinderProxy] : javaObjectForIBinder(env, val);////1、判断 val 是指向 Binder代理对象【 BpBinder 】\ Binder本地对象【 JavaBBinder 】//= false,则为代理对象//= true,则为 本地对象,则//返回java层的 Binder 对象////2、若传递的是 Binder-代理对象,则去创建  BinderProxy 对象//首先调用 val->findObject(),检查:当前进程之前是否已经创建了一个 BinderProxy 对象////2.1、是则: 返回 BinderProxy对象.WeakReference 对象,然后//1)、检查弱引用对象是否依然有效//2)、若有效,则直接返回//3)、若已经失效,则须调用 
val->detachObject(&gBinderProxyOffsets); detach val from the stale BinderProxy object
//
//2.2. Otherwise: invoke the Java-level BinderProxy constructor to create [JAVA-SM-BinderProxy] for [SM-BpBinder]
//1) call env->NewObject(gBinderProxyOffsets.mClass, gBinderProxyOffsets.mConstructor);
//2) link them: [JAVA-SM-BinderProxy].mObject = address of [SM-BpBinder]
//3) increment the strong reference count of [SM-BpBinder] (whatever a Java proxy object references must hold a strong reference)
//4) create a strong reference for the weak-reference object pointed to by BinderProxy.mSelf, and attach it to binder-val
//
// In summary:
//[JAVA-SM-BinderProxy].mObject stores the address of [SM-BpBinder]
//[SM-BpBinder].mObjects<> stores a strong reference to the weak-reference object pointed to by BinderProxy.mSelf
// The next call to javaObjectForIBinder will directly return binder-val.mObjects<id>
//
//1.2. obj.queryLocalInterface(descriptor); returns NULL, so a ServiceManagerProxy must be created around the BinderProxy object
//
//1.3. Create a ServiceManagerProxy from the BinderProxy object:
//new ServiceManagerProxy(obj);
//ServiceManagerProxy.mRemote = BinderProxy
//1.4. Cast the ServiceManagerProxy object to IServiceManager and cache it in ServiceManager::sServiceManager
//ServiceManager.getIServiceManager
{
if (sServiceManager != null) {return sServiceManager;}
// Find the service manager
sServiceManager = ServiceManagerNative.asInterface(BinderInternal.getContextObject());
{//android.os.ServiceManagerNative.java: IBinder obj
if (obj == null) {return null;}
//1.2
IServiceManager in = (IServiceManager)obj.queryLocalInterface(descriptor);//"android.os.IServiceManager"
if (in != null) {return in;}
return new ServiceManagerProxy(obj);{mRemote = remote;}
}
return sServiceManager;
}
1.1. Obtain the Java service proxy object: BinderProxy
BinderInternal.getContextObject()
{//base/core/jni/android_util_Binder.cpp: android_os_BinderInternal_getContextObject
//
//1.1.1. Get the Binder proxy object with handle 0, i.e. a BpBinder object
sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
//1.1.2. Create the Java service proxy object: BinderProxy
return javaObjectForIBinder(env, b);
}
1.1.2. Create the Java service proxy object: BinderProxy
//base/core/jni/android_util_Binder.cpp: javaObjectForIBinder(env, b);//JNIEnv* env, const sp<IBinder>& val
{
if (val == NULL) return NULL;
1. Determine whether val points to a Binder proxy object or a Binder local object (JavaBBinder)
//= false → proxy object
//= true → local object; return the Java-level Binder object
if (val->checkSubclass(&gBinderOffsets)) {
// One of our own!
jobject object = static_cast<JavaBBinder*>(val.get())->object();
LOGDEATH("objectForBinder %p: it's our own %p!\n", val.get(), object);
return object;
}
2. A Binder proxy object was passed in, so execution reaches this point and a BinderProxy object is created
//true for the ServiceManager.getIServiceManager call
//
// For the rest of the function we will hold this lock, to serialize
// looking/creation of Java proxies for native Binder proxies.
AutoMutex _l(mProxyLock);
2.1. Call val->findObject() to check whether the current process has already created a BinderProxy object
//if so: return the BinderProxy object's WeakReference, then
//1) check whether the weak reference is still valid
//2) if it is valid, return it directly
//3) if it is stale, call val->detachObject(&gBinderProxyOffsets); to detach val from the stale BinderProxy object
//
// Someone else's...  do we know about it?
jobject object = (jobject)val->findObject(&gBinderProxyOffsets);
{//BpBinder.cpp
return mObjects.find(objectID);
}
if (object != NULL) {
jobject res = jniGetReferent(env, object);
if (res != NULL) {
ALOGV("objectForBinder %p: found existing %p!\n", val.get(), res);
return res;
}
LOGDEATH("Proxy object %p of IBinder %p no longer in working set!!!", object, val.get());
android_atomic_dec(&gNumProxyRefs);
val->detachObject(&gBinderProxyOffsets);
env->DeleteGlobalRef(object);
}
2.2. Call the Java-level BinderProxy constructor to create the Java service proxy object (BinderProxy) for the binder proxy object
//1) call env->NewObject(gBinderProxyOffsets.mClass, gBinderProxyOffsets.mConstructor);
//2) save the address of the binder proxy object val into BinderProxy.mObject
//3) increment the strong reference count of val (whatever a Java proxy object references must hold a strong reference)
//4) create a strong reference for the weak-reference object pointed to by BinderProxy.mSelf, and attach it to binder-val
//
// In summary:
//BinderProxy.mObject stores the address of the binder proxy object
//binder-val.mObjects<> stores a strong reference to the weak-reference object pointed to by BinderProxy.mSelf
// The next call to javaObjectForIBinder will directly return binder-val.mObjects<id>
object = env->NewObject(gBinderProxyOffsets.mClass, gBinderProxyOffsets.mConstructor);
if (object != NULL) {
LOGDEATH("objectForBinder %p: created new proxy %p !\n", val.get(), object);
// The proxy holds a reference to the native object.
env->SetIntField(object, gBinderProxyOffsets.mObject, 
(int)val.get());val->incStrong((void*)javaObjectForIBinder);//4)、// The native object needs to hold a weak reference back to the// proxy, so we can retrieve the same proxy if it is still active.jobject refObject = env->NewGlobalRef(env->GetObjectField(object, gBinderProxyOffsets.mSelf));val->attachObject(&gBinderProxyOffsets, refObject,jnienv_to_javavm(env), proxy_cleanup);{//BpBinder.cppmObjects.attach(objectID, object, cleanupCookie, func);{entry_t e;e.object = object;e.cleanupCookie = cleanupCookie;e.func = func;mObjects.add(objectID, e);}}// Also remember the death recipients registered on this proxysp<DeathRecipientList> drl = new DeathRecipientList;drl->incStrong((void*)javaObjectForIBinder);env->SetIntField(object, gBinderProxyOffsets.mOrgue, reinterpret_cast<jint>(drl.get()));// Note that a new object reference has been created.android_atomic_inc(&gNumProxyRefs);incRefsCreated(env);}return object;}}}六、JAVA服务的创建、注册、获取过程: 以 PowerManagerService 为实例{0、 PowerManagerService 的定义{public interface IPowerManager extends android.os.IInterface{public static abstract class Stub extends android.os.Binder implements android.os.IPowerManager{private static class Proxy implements android.os.IPowerManager{}}}public final class PowerManagerService extends IPowerManager.Stub{}}1、在system_server进程中创建服务://调用Binder(),创建C++-JavaBBinderHolder 对象-jbh,存储到 PMS.Binder.mObject{ServerThread.initAndLoop{power = new PowerManagerService();        ServiceManager.addService(Context.POWER_SERVICE, power);}1、首先调用父类构造函数Binder//1、创建 C++层中的 JavaBBinderHolder 对象:jbh//2、增加 jbh 的强引用计数//3、将 jbh 地址,存储到 PowerManagerService.Binder.mObject 中Binder() {        init();{//android_util_Binder.cpp:://android_os_Binder_init(JNIEnv* env, jobject obj)//参数 obj 指向 PowerManagerService 对象//1、创建 c++层中的 JavaBBinderHolder 对象:jbhJavaBBinderHolder* jbh = new JavaBBinderHolder();if (jbh == NULL) {jniThrowException(env, "java/lang/OutOfMemoryError", NULL);return;}//2、增加 jbh 的强引用计数ALOGV("Java Binder %p: acquiring first ref on 
holder %p", obj, jbh);jbh->incStrong((void*)android_os_Binder_init);//3、将 jbh 地址,存储到 PowerManagerService.Binder.mObject 中env->SetIntField(obj, gBinderOffsets.mObject, (int)jbh);}    }2、public IPowerManager.Stub(){this.attachInterface(this, DESCRIPTOR);{//Binder.javamOwner = owner;//PowerManagerService.thismDescriptor = descriptor;//"android.os.IPowerManager"}}new PowerManagerService(){        synchronized (mLock) {            mWakeLockSuspendBlocker = createSuspendBlockerLocked("PowerManagerService.WakeLocks");            mDisplaySuspendBlocker = createSuspendBlockerLocked("PowerManagerService.Display");            mDisplaySuspendBlocker.acquire();            mHoldingDisplaySuspendBlocker = true;            mScreenOnBlocker = new ScreenOnBlockerImpl();            mDisplayBlanker = new DisplayBlankerImpl();            mWakefulness = WAKEFULNESS_AWAKE;        }        nativeInit();{//com_android_server_power_PowerManagerService.cppgPowerManagerServiceObj = env->NewGlobalRef(obj);status_t err = hw_get_module(POWER_HARDWARE_MODULE_ID,(hw_module_t const**)&gPowerModule);if (!err) {gPowerModule->init(gPowerModule);} else {ALOGE("Couldn't load %s module (%s)", POWER_HARDWARE_MODULE_ID, strerror(-err));}}        nativeSetPowerState(true, true);    }}2、注册Java服务: ServiceManager.addService(Context.POWER_SERVICE, power);{//获取 与service对应的Binder本地对象[JavaBBinder:jbh.mBinder],将其注册到SM进程////0、获取[JAVA-SM-BinderProxy],即为 ServiceManager.sServiceManager:ServiceManagerProxy//Call:ServiceManager::getIServiceManager()////1、调用 ServiceManagerProxy.addService//ServiceManager.addService(Context.POWER_SERVICE, power);{getIServiceManager().addService(name, service, false);{//ServiceManagerNative.java:ServiceManagerProxy.cls//String name, IBinder service, boolean allowIsolated//1、创建 JAVA-Parcel 对象//data:封装进程间通信数据//reply:封装进程间通信结果Parcel data = Parcel.obtain();Parcel reply = Parcel.obtain();//2、写入通信数据到 
JAVA-Parcel--data//通信头"android.os.IPowerManager"//java服务名称:Context.POWER_SERVICE//java服务data.writeInterfaceToken(IServiceManager.descriptor);//data.writeString(name);//3、获取 与service对应的Binder本地对象[],并将其写入到 [CC-Parcel--data] 对象里////1)、获得与 data 对应的、运行在C++层的 Parcel 对象:[CC-Parcel--data]//reinterpret_cast<Parcel*>(nativePtr);////2)、获取: 与service对应的Binder本地对象:调用 ibinderForJavaObject(env, object_val) //2).1、如果-object_val 指向 java服务对象,则//1)、首先获取 java服务对象.mObject, 此成员是 Binder.init()过程申请的  JavaBBinderHolder 对象-jbh////2)、再调用 jbh->get(env, obj),获取 与service对应的Binder本地对象--JavaBBinder:jbh.mBinder,//A、初始化时,jbh.mBinder==NULL,因此需要新建 JavaBBinder[继承了 BBinder]//jbh.mBinder.mObject= object_val//java服务对象-PowerManagerService//B、返回 jbh.mBinder////2).2、如果 object_val 指向java代理对象,则直接返回 object_val.mObject////3)、再将 与service对应的Binder本地对象,写入到 parcel 中//parcel->writeStrongBinder(ibinderForJavaObject(env, object));//data.writeStrongBinder(service);{//Parcel.java:IBinder valnativeWriteStrongBinder(mNativePtr, val);{//android_os_Parcel.cpp:://JNIEnv* env, jclass clazz, jint nativePtr, jobject object//1、获得与 data 对应的、运行在C++层的 Parcel 对象:parcelParcel* parcel = reinterpret_cast<Parcel*>(nativePtr);if (parcel != NULL) {const status_t err = parcel->writeStrongBinder(ibinderForJavaObject(env, object));{//2、调用 ibinderForJavaObject(env, object) 获取: 与service对应的Binder本地对象ibinderForJavaObject(env, object){//android_util_Binder.cpp//JNIEnv* env, jobject objif (obj == NULL) return NULL;//2.1、如果参数 object 指向 java服务对象,则首先拿到 java服务对象.mObject 成员//此成员是 Binder.init()过程申请的  JavaBBinderHolder 对象-jbh////再调用 jbh->get(env, obj),来获取Binder本地对象--JavaBBinder:jbh.mBinder,//1)、初始化时,jbh.mBinder==NULL,因此需要新建 JavaBBinder[继承了 BBinder]//1)、保存 java服务对象 到 jbh.mBinder.mObject//2)、返回 jbh.mBinderif (env->IsInstanceOf(obj, gBinderOffsets.mClass)) {JavaBBinderHolder* jbh = (JavaBBinderHolder*) env->GetIntField(obj, gBinderOffsets.mObject);return jbh != NULL ? 
jbh->get(env, obj) : NULL;{//jbh->get(env, obj)AutoMutex _l(mLock);sp<JavaBBinder> b = mBinder.promote();if (b == NULL) {b = new JavaBBinder(env, obj);{mVM(jnienv_to_javavm(env))mObject(env->NewGlobalRef(object))}mBinder = b;ALOGV("Creating JavaBinder %p (refs %p) for Object %p, weakCount=%d\n", b.get(), b->getWeakRefs(), obj, b->getWeakRefs()->getWeakCount());}return b;}}//2.2、如果参数 object 指向java 代理对象,则直接返回 java代理对象.mObject 成员if (env->IsInstanceOf(obj, gBinderProxyOffsets.mClass)) {return (IBinder*)env->GetIntField(obj, gBinderProxyOffsets.mObject);}ALOGW("ibinderForJavaObject: %p is not a Binder object", obj);return NULL;}//3、再将 java层服务对应的Binder本地对象,写入到 parcel 中parcel->writeStrongBinder(ibinderForJavaObject(env, object));}if (err != NO_ERROR) {signalExceptionForError(env, clazz, err);}}}}data.writeInt(allowIsolated ? 1 : 0);//4、调用 [JAVA-SM-BinderProxy].mRemote.transact 请求 ServiceManager,将java服务注册到其中,参数信息携带在 data 中//mRemote 指向 句柄值=0的java服务代理对象 BinderProxy//mRemote.transact(ADD_SERVICE_TRANSACTION, data, reply, 0);//BinderProxy.transact 是 native 接口{//android_util_Binder.cpp---android_os_BinderProxy_transact//JNIEnv* env, jobject obj, jint code, jobject dataObj, jobject replyObj, jint flags//Parcel* data = parcelForJavaObject(env, dataObj);Parcel* reply = parcelForJavaObject(env, replyObj);//获取[SM-BpBinder],即为[sServiceManager.mRemote.mObject]//即句柄=0的binder代理对象[SM-BpBinder]//IBinder* target = (IBinder*) env->GetIntField(obj, gBinderProxyOffsets.mObject);//将 ADD_SERVICE_TRANSACTION 传递给 SM 进程//2、调用 [SM-BpBinder].transact 接口,发送 ADD_SERVICE_TRANSACTION 请求//-//native/libs/binder/BpBinder.cpp//调用当前线程的 IPCThreadState::transact,向binder驱动程序发送指令//参数 BpBinder::mHandle,是描述该binder代理对象的句柄值,当前代理对象句柄为0//1)、检查 data 中存储的进程间通信数据是否正确//2)、flags 置位 TF_ACCEPT_FDS 表示允许server进程在返回结果中携带 FD//3)、调用 IPCThreadState.writeTransactionData 将 data 写入到 binder_transaction_data 结构体//4)、调用 IPCThreadState.waitForResponse 向驱动发送 BC_TRANSACTION 命令协议status_t err = target->transact(code, *data, reply, 
flags);//注意最后一个参数默认为0{//native/libs/binder/BpBinder.cpp//uint32_t code, const Parcel& data, Parcel* reply, uint32_t flagsif (mAlive) {status_t status = IPCThreadState::self()->transact(mHandle, code, data, reply, flags);{//IPCThreadState.cpp/*int32_t handle,uint32_t code, const Parcel& data,Parcel*reply, uint32_t flags *///检查 data 中存储的进程间通信数据是否正确status_t err = data.errorCheck();//flags 置位 TF_ACCEPT_FDS 表示允许server进程在返回结果中携带 FDflags |= TF_ACCEPT_FDS;IF_LOG_TRANSACTIONS() {TextOutput::Bundle _b(alog);alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "<< handle << " / code " << TypeCode(code) << ": "<< indent << data << dedent << endl;}//调用 writeTransactionData 将 data 写入到 binder_transaction_data 结构体if (err == NO_ERROR) {LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),(flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);}if (err != NO_ERROR) {if (reply) reply->setError(err);return (mLastError = err);}if ((flags & TF_ONE_WAY) == 0) {if (reply) {err = waitForResponse(reply);} else {Parcel fakeReply;err = waitForResponse(&fakeReply);}IF_LOG_TRANSACTIONS() {TextOutput::Bundle _b(alog);alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "<< handle << ": ";if (reply) alog << indent << *reply << dedent << endl;else alog << "(none requested)" << endl;}} else {err = waitForResponse(NULL, NULL);}return err;}if (status == DEAD_OBJECT) mAlive = 0;return status;}return DEAD_OBJECT;}}reply.recycle();data.recycle();}}}3、获取java服务: ServiceManager.getService(POWER_SERVICE);{mPowerManager = (PowerManager) mContext.getSystemService(Context.POWER_SERVICE);{//android.app.ContextImpl$ServiceFetchercreateService(ContextImpl ctx) {//1、 获取 POWER_SERVICE 对应的 [JAVA-BinderProxy]IBinder b = ServiceManager.getService(POWER_SERVICE);//2、 将 [JAVA-BinderProxy] 转换成IPowerManager service = IPowerManager.Stub.asInterface(b);return new PowerManager(ctx.getOuterContext(),service, 
ctx.mMainThread.getHandler());}}1、ServiceManager.getService(POWER_SERVICE);////ServiceManager.getService(POWER_SERVICE);{IBinder service = sCache.get(name);if (service != null) {return service;} else {//1.1 return getIServiceManager().getService(name);{//ServiceManager::sServiceManager.getService}}}1.1、ServiceManager::sServiceManager.getService(name);////调用 [JAVA-SM-BinderProxy].mRemote.transact 请求 SM,获取 POWER_SERVICE 对应的 [JAVA-BinderProxy]//获取 Parcel-reply 在C++层对应的 parcel//首先调用 parcel->readStrongBinder() 读出 BpBinder-对象//然后调用 javaObjectForIBinder(env,BpBinder-对象) 将其转换成 [JAVA-BinderProxy]//[JAVA-SM-BinderProxy].mObject 存储有[SM-BpBinder]的地址//[SM-BpBinder].mObjects<>存储有 “ BinderProxy.mSelf 指向的弱引用对象”的强引用//下一次,再调用 javaObjectForIBinder 时,就会直接返回 binder-val.mObjects<id>ServiceManager::sServiceManager.getService(name);{//ServiceManagerNative.java--ServiceManagerProxy.cls        Parcel data = Parcel.obtain();        Parcel reply = Parcel.obtain();        data.writeInterfaceToken(IServiceManager.descriptor);        data.writeString(name);////调用 [JAVA-SM-BinderProxy].mRemote.transact 请求 SM,获取 POWER_SERVICE 对应的 [JAVA-BinderProxy]        mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);//        IBinder binder = reply.readStrongBinder();{//Parcel.java:nativeReadStrongBinder(mNativePtr);{//android_os_Parcel.cpp:://android_os_Parcel_readStrongBinder(JNIEnv* env, jclass clazz, jint nativePtr)//获取 Parcel-reply 在C++层对应的 parcelParcel* parcel = reinterpret_cast<Parcel*>(nativePtr);if (parcel != NULL) {//首先调用 parcel->readStrongBinder() 读出 BpBinder-对象//然后调用 javaObjectForIBinder(env,BpBinder-对象) 将其转换成 [JAVA-BinderProxy]return javaObjectForIBinder(env, parcel->readStrongBinder());{//1、 parcel->readStrongBinder(){sp<IBinder> val;unflatten_binder(ProcessState::self(), *this, &val);return val;}//2、 javaObjectForIBinder(env,BpBinder-对象)}}return NULL;}}        reply.recycle();        data.recycle();        return binder;}2、 将 [JAVA-BinderProxy] 转换成 IPowerManager.Stub.ProxyIPowerManager 
service = IPowerManager.Stub.asInterface(b);{if ((obj==null)) {return null;}android.os.IInterface iin = obj.queryLocalInterface(DESCRIPTOR);if (((iin!=null)&&(iin instanceof android.os.IPowerManager))) {return ((android.os.IPowerManager)iin);}return new android.os.IPowerManager.Stub.Proxy(obj);{//IPowerManager.Stub.Proxy.clsmRemote = remote;}}}4、使用java服务: 通过PMS查看是否亮屏{mPowerManager.isScreenOn{return mService.isScreenOn();//IPowerManager.Stub.Proxy.isScreenOn{android.os.Parcel _data = android.os.Parcel.obtain();android.os.Parcel _reply = android.os.Parcel.obtain();boolean _result;try {_data.writeInterfaceToken(DESCRIPTOR);//4、调用 [JAVA-BinderProxy].mRemote.transact 请求 POWER_SERVICE 执行 Stub.TRANSACTION_isScreenOnmRemote.transact(Stub.TRANSACTION_isScreenOn, _data, _reply, 0);//BinderProxy.transact 是 native 接口_reply.readException();_result = (0!=_reply.readInt());}finally {_reply.recycle();_data.recycle();}return _result;}}2、 与-POWER_SERVICE-对应的Binder本地对象[JavaBBinder:jbh.mBinder] 响应 Stub.TRANSACTION_isScreenOn 请求////Binder驱动程序 将来自“[JAVA-BinderProxy].mRemote.transact 的请求”传递到 JavaBBinder:jbh.mBinder////调用 [JavaBBinder:jbh.mBinder].mObject.execTransact 方法//[JavaBBinder:jbh.mBinder].mObject存储有 java服务对象-PowerManagerService//因此,执行到此处,即为执行 PowerManagerService.execTransact//JavaBBinder:jbh.mBinder.onTransact(){//android_util_Binder.cpp/*uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags = 0 */        JNIEnv* env = javavm_to_jnienv(mVM);        ALOGV("onTransact() on %p calling object %p in env %p vm %p\n", this, mObject, env, mVM);        IPCThreadState* thread_state = IPCThreadState::self();        const int strict_policy_before = thread_state->getStrictModePolicy();        thread_state->setLastTransactionBinderFlags(flags);//调用 [JavaBBinder:jbh.mBinder].mObject.execTransact 方法//[JavaBBinder:jbh.mBinder].mObject存储有 java服务对象-PowerManagerService//因此,执行到此处,即为执行 PowerManagerService.//        jboolean res = env->CallBooleanMethod(mObject, gBinderOffsets.mExecTransact, 
           code, (int32_t)&data, (int32_t)reply, flags);        jthrowable excep = env->ExceptionOccurred();        if (excep) {            report_exception(env, excep,                "*** Uncaught remote exception!  "                "(Exceptions are not yet supported across processes.)");            res = JNI_FALSE;            /* clean up JNI local ref -- we don't return to Java code */            env->DeleteLocalRef(excep);        }        // Restore the Java binder thread's state if it changed while        // processing a call (as it would if the Parcel's header had a        // new policy mask and Parcel.enforceInterface() changed        // it...)        const int strict_policy_after = thread_state->getStrictModePolicy();        if (strict_policy_after != strict_policy_before) {            // Our thread-local...            thread_state->setStrictModePolicy(strict_policy_before);            // And the Java-level thread-local...            set_dalvik_blockguard_policy(env, strict_policy_before);        }        jthrowable excep2 = env->ExceptionOccurred();        if (excep2) {            report_exception(env, excep2,                "*** Uncaught exception in onBinderStrictModePolicyChange");            /* clean up JNI local ref -- we don't return to Java code */            env->DeleteLocalRef(excep2);        }        // Need to always call through the native implementation of        // SYSPROPS_TRANSACTION.        if (code == SYSPROPS_TRANSACTION) {            BBinder::onTransact(code, data, reply, flags);        }        //aout << "onTransact to Java code; result=" << res << endl        //    << "Transact from " << this << " to Java code returning "        //    << reply << ": " << *reply << endl;        return res != JNI_FALSE ? 
NO_ERROR : UNKNOWN_TRANSACTION;}3、 执行 PowerManagerService.execTransactPowerManagerService.execTransact()//实际上执行 Binder.execTransact{//Binder.java//int code, int dataObj, int replyObj,int flagsParcel data = Parcel.obtain(dataObj);Parcel reply = Parcel.obtain(replyObj);boolean res;//执行由IPowerManager.Stub重写的 onTransactres = onTransact(code, data, reply, flags);reply.recycle();data.recycle();return res;}4、 执行由IPowerManager.Stub重写的 onTransact//case TRANSACTION_isScreenOn://this.isScreenOn();//IPowerManager.Stub.onTransact(int code, android.os.Parcel data, android.os.Parcel reply, int flags) throws android.os.RemoteException{switch (code){case TRANSACTION_isScreenOn:{data.enforceInterface(DESCRIPTOR);boolean _result = this.isScreenOn();reply.writeNoException();reply.writeInt(((_result)?(1):(0)));return true;}}return super.onTransact(code, data, reply, flags);}5、 PowerManagerService.isScreenOn()PowerManagerService.isScreenOn(){}}}七、APP中使用 ServiceManager 代理对象【BpServiceManager】 注册 service组件{}八、Binder与AIDL{1、跨进程调用Service AIDL.远程Service{1、android的远程service 机制定义远程接口: 即为 Service 代理对象 IBinder为远程接口提供一个实现类:即为 IBinder 的实现类。获取 IBinder 代理对象之后,即可通过它去回调远程Service的属性或方法。2、与本地service的区别绑定本地Service时,Service.onBind 返回 IBinder 对象本身。绑定远程Service时,返回的是 IBinder 对象的代理。3、使用 AIDL 语言定义远程接口【进程间通信接口】接口源文件后缀:MyServiceAidl.aidl数据类型引用需要导入pkg,基本类型【基本数据类型\String\List\Map\CharSequence】除外。Service\Client 需要使用 aidl.exe 为新定义的接口提供实现。若采用ADT进行开发,则ADT会自动生成实现JAVA接口 /gen/pkg/MyServiceAidl.java4、Stub 内部类MyServiceAidl.java 接口会包含:Stub 内部类。其实现了 IBinder\MyServiceAidl 两个接口。此类可作为 Service.onBind()的返回值。5、AIDL实例{//-------------------定义service端//////1、定义AIDL接口src\org\crazyit\service/ICat.aidl//package org.crazyit.service;interface ICat{String getColor();double getWeight();}////2、ADT自动生成的 gen\org\crazyit\service\ICat.java//定义如下结构//接口 ICat//内部类 Stub,其实现有 org.crazyit.service.ICatpackage org.crazyit.service;public interface ICat extends android.os.IInterface{/** Local-side IPC implementation stub class. 
*/public static abstract class Stub extends android.os.Binder implements org.crazyit.service.ICat{private static final java.lang.String DESCRIPTOR = "org.crazyit.service.ICat";/** Construct the stub at attach it to the interface. */public Stub(){this.attachInterface(this, DESCRIPTOR);}/** * Cast an IBinder object into an org.crazyit.service.ICat interface, * generating a proxy if needed. */public static org.crazyit.service.ICat asInterface(android.os.IBinder obj){if ((obj==null)) {return null;}android.os.IInterface iin = obj.queryLocalInterface(DESCRIPTOR);if (((iin!=null)&&(iin instanceof org.crazyit.service.ICat))) {return ((org.crazyit.service.ICat)iin);}return new org.crazyit.service.ICat.Stub.Proxy(obj);}@Override public android.os.IBinder asBinder(){return this;}@Override public boolean onTransact(int code, android.os.Parcel data, android.os.Parcel reply, int flags) throws android.os.RemoteException{switch (code){case INTERFACE_TRANSACTION:{reply.writeString(DESCRIPTOR);return true;}case TRANSACTION_getColor:{data.enforceInterface(DESCRIPTOR);java.lang.String _result = this.getColor();reply.writeNoException();reply.writeString(_result);return true;}case TRANSACTION_getWeight:{data.enforceInterface(DESCRIPTOR);double _result = this.getWeight();reply.writeNoException();reply.writeDouble(_result);return true;}}return super.onTransact(code, data, reply, flags);}private static class Proxy implements org.crazyit.service.ICat{private android.os.IBinder mRemote;Proxy(android.os.IBinder remote){mRemote = remote;}@Override public android.os.IBinder asBinder(){return mRemote;}public java.lang.String getInterfaceDescriptor(){return DESCRIPTOR;}@Override public java.lang.String getColor() throws android.os.RemoteException{android.os.Parcel _data = android.os.Parcel.obtain();android.os.Parcel _reply = android.os.Parcel.obtain();java.lang.String _result;try {_data.writeInterfaceToken(DESCRIPTOR);mRemote.transact(Stub.TRANSACTION_getColor, _data, _reply, 
0);_reply.readException();_result = _reply.readString();}finally {_reply.recycle();_data.recycle();}return _result;}@Override public double getWeight() throws android.os.RemoteException{android.os.Parcel _data = android.os.Parcel.obtain();android.os.Parcel _reply = android.os.Parcel.obtain();double _result;try {_data.writeInterfaceToken(DESCRIPTOR);mRemote.transact(Stub.TRANSACTION_getWeight, _data, _reply, 0);_reply.readException();_result = _reply.readDouble();}finally {_reply.recycle();_data.recycle();}return _result;}}static final int TRANSACTION_getColor = (android.os.IBinder.FIRST_CALL_TRANSACTION + 0);static final int TRANSACTION_getWeight = (android.os.IBinder.FIRST_CALL_TRANSACTION + 1);}public java.lang.String getColor() throws android.os.RemoteException;public double getWeight() throws android.os.RemoteException;}//3、定义 Service实现类  AidlService,继承自Service//包含成员//内部类 CatBinder,继承自Stub,也就是实现了 ICat\IBinder 接口//接口 onBind,返回本地 catBinder 对象package org.crazyit.service;public class AidlService extends Service{private CatBinder catBinder;Timer timer = new Timer();String[] colors = new String[]{"红色","黄色","黑色"};double[] weights = new double[]{2.3,3.1,1.58};private String color;private double weight;// 继承 Stub ,也就是实现额 ICat 接口,并实现了IBinder接口public class CatBinder extends Stub{@Overridepublic String getColor() throws RemoteException{return color;}@Overridepublic double getWeight() throws RemoteException{return weight;}}@Overridepublic void onCreate(){super.onCreate();catBinder = new CatBinder();timer.schedule(new TimerTask(){@Overridepublic void run(){// 随机地改变Service组件内color、weight属性的值。int rand = (int)(Math.random() * 3);color = colors[rand];weight = weights[rand];System.out.println("--------" + rand);}} , 0 , 800);}@Overridepublic IBinder onBind(Intent arg0){/* 返回catBinder对象 * 在绑定本地Service的情况下,该catBinder对象会直接 * 传给客户端的ServiceConnection对象 * 的onServiceConnected方法的第二个参数; * 在绑定远程Service的情况下,只将catBinder对象的代理 * 传给客户端的ServiceConnection对象 * 的onServiceConnected方法的第二个参数; */return 
catBinder; //①}@Overridepublic void onDestroy(){timer.cancel();}}//4、在 AndroidManifest.xml 中配置{<applicationandroid:icon="@drawable/ic_launcher"android:label="@string/app_name"><!-- 定义一个Service组件 --><service android:name=".AidlService"><intent-filter><action android:name="org.crazyit.aidl.action.AIDL_SERVICE" /></intent-filter></service></application>}//-------------------定义Client端//////1、copy aidl 文件到客户端应用中 src\org\crazyit\service/ICat.aidl//2、使用ADT自动生成 gen\org\crazyit\service\ICat.java//3、在客户端Activity中创建 ServiceConnection 对象//以 ServiceConnection 对象作为参数,调用 bindService 绑定远程 Service//在 ServiceConnection.onServiceConnected()中要将 返回的 IBinder 转换成代理对象如下://catService = ICat.Stub.asInterface(service);package org.crazyit.client;import org.crazyit.service.ICat;public class AidlClient extends Activity{private ICat catService;private Button get;EditText color, weight;private ServiceConnection conn = new ServiceConnection(){@Overridepublic void onServiceConnected(ComponentName name,IBinder service){// 获取远程Service的onBind方法返回的对象的代理catService = ICat.Stub.asInterface(service);}@Overridepublic void onServiceDisconnected(ComponentName name){catService = null;}};@Overridepublic void onCreate(Bundle savedInstanceState){super.onCreate(savedInstanceState);setContentView(R.layout.main);get = (Button) findViewById(R.id.get);color = (EditText) findViewById(R.id.color);weight = (EditText) findViewById(R.id.weight);// 创建所需绑定的Service的IntentIntent intent = new Intent();intent.setAction("org.crazyit.aidl.action.AIDL_SERVICE");// 绑定远程ServicebindService(intent, conn, Service.BIND_AUTO_CREATE);get.setOnClickListener(new OnClickListener(){@Overridepublic void onClick(View arg0){try{// 获取、并显示远程Service的状态color.setText(catService.getColor());weight.setText(catService.getWeight() + "");}catch (RemoteException e){e.printStackTrace();}}});}@Overridepublic void onDestroy(){super.onDestroy();// 解除绑定this.unbindService(conn);}}}}2、跨进程传递 自定义类型数据的 AIDL Service{1、自定义类型数据传输Service 
必备限制条件数据类型调用远程Service的参数类型远程调用的返回值类型数据序列化:Android要求上述两种类型,都必须支持序列化;必须实现 Parcelable 接口,即在实现类中须添加如下:1)、添加一个静态成员,名为CREATOR,该对象实现了Parcelable.Creator接口public static final Parcelable.Creator<Person> CREATOR2)、实现Parcelable接口必须实现的方法:int describeContents()writeToParcel(Parcel dest, int flags)//把该对象所包含的数据写到ParcelAIDL定义要使用AIDL代码来定义自定义类型2、使用 AIDL 定义 自定义类型//src/org/crazyit/service/Pet.aidlparcelable Pet; //parcelable Person; 3、定义 实现有 Parcelable 接口的自定义类//src/org/crazyit/service/Person.javapackage org.crazyit.service;public class Person implements Parcelable{private Integer id;private String name;private String pass;public Person(){}public Person(Integer id, String name, String pass){super();this.id = id;this.name = name;this.pass = pass;}public Integer getId(){return id;}public void setId(Integer id){this.id = id;}public String getName(){return name;}public void setName(String name){this.name = name;}public String getPass(){return pass;}public void setPass(String pass){this.pass = pass;}@Overridepublic int hashCode(){final int prime = 31;int result = 1;result = prime * result + ((name == null) ? 0 : name.hashCode());result = prime * result + ((pass == null) ? 
0 : pass.hashCode());return result;}@Overridepublic boolean equals(Object obj){if (this == obj)return true;if (obj == null)return false;if (getClass() != obj.getClass())return false;Person other = (Person) obj;if (name == null){if (other.name != null)return false;}else if (!name.equals(other.name))return false;if (pass == null){if (other.pass != null)return false;}else if (!pass.equals(other.pass))return false;return true;}// 实现Parcelable接口必须实现的方法@Overridepublic int describeContents(){return 0;}// 实现Parcelable接口必须实现的方法@Overridepublic void writeToParcel(Parcel dest, int flags){//把该对象所包含的数据写到Parceldest.writeInt(id);dest.writeString(name);dest.writeString(pass);}// 添加一个静态成员,名为CREATOR,该对象实现了Parcelable.Creator接口public static final Parcelable.Creator<Person> CREATOR= new Parcelable.Creator<Person>() //①{@Overridepublic Person createFromParcel(Parcel source){// 从Parcel中读取数据,返回Person对象return new Person(source.readInt(), source.readString(), source.readString());}@Overridepublic Person[] newArray(int size){return new Person[size];}};}//src/org/crazyit/service/Pet.javapublic class Pet implements Parcelable{}4、使用AIDL定义通信接口 src/org/crazyit/service/IPet.aidl{import org.crazyit.service.Pet;import org.crazyit.service.Person;interface IPet{// 定义一个Person对象作为传入参数List<Pet> getPets(in Person owner);}}5、定义Service类,其onBind()返回 IPet 实现类的实例{package org.crazyit.service;import org.crazyit.service.IPet.Stub;public class ComplexService extends Service{private PetBinder petBinder;private static Map<Person , List<Pet>> pets= new HashMap<Person , List<Pet>>();static{// 初始化pets Map集合ArrayList<Pet> list1 = new ArrayList<Pet>();list1.add(new Pet("旺财" , 4.3));list1.add(new Pet("来福" , 5.1));pets.put(new Person(1, "sun" , "sun") , list1);ArrayList<Pet> list2 = new ArrayList<Pet>();list2.add(new Pet("kitty" , 2.3));list2.add(new Pet("garfield" , 3.1));pets.put(new Person(2, "bai" , "bai") , list2);}// 继承Stub,也就是实现额IPet接口,并实现了IBinder接口public class PetBinder extends Stub{@Overridepublic List<Pet> 
getPets(Person owner) throws RemoteException{// 返回Service内部的数据return pets.get(owner);}}@Overridepublic void onCreate(){super.onCreate();petBinder = new PetBinder();}@Overridepublic IBinder onBind(Intent arg0){/* 返回catBinder对象 * 在绑定本地Service的情况下,该catBinder对象会直接 * 传给客户端的ServiceConnection对象 * 的onServiceConnected方法的第二个参数; * 在绑定远程Service的情况下,只将catBinder对象的代理 * 传给客户端的ServiceConnection对象 * 的onServiceConnected方法的第二个参数; */return petBinder; //①}@Overridepublic void onDestroy(){}}}6、客户端创建 ServiceConnection 类实例//在其 onServiceConnected()中获取 ComplexService.onBind 返回的代理对象//之后执行数据交互{package org.crazyit.client;import org.crazyit.service.IPet;import org.crazyit.service.Person;import org.crazyit.service.Pet;public class ComplexClient extends Activity{private IPet petService;private Button get;EditText personView;ListView showView;private ServiceConnection conn = new ServiceConnection(){@Overridepublic void onServiceConnected(ComponentName name, IBinder service){// 获取远程Service的onBind方法返回的对象的代理petService = IPet.Stub.asInterface(service);}@Overridepublic void onServiceDisconnected(ComponentName name){petService = null;}};@Overridepublic void onCreate(Bundle savedInstanceState){super.onCreate(savedInstanceState);setContentView(R.layout.main);personView = (EditText) findViewById(R.id.person);showView = (ListView) findViewById(R.id.show);get = (Button) findViewById(R.id.get);// 创建所需绑定的Service的IntentIntent intent = new Intent();intent.setAction("org.crazyit.aidl.action.COMPLEX_SERVICE");// 绑定远程ServicebindService(intent, conn, Service.BIND_AUTO_CREATE);get.setOnClickListener(new OnClickListener(){@Overridepublic void onClick(View arg0){try{String personName = personView.getText().toString();// 调用远程Service的方法List<Pet> pets = petService.getPets(new Person(1,personName, personName)); //①// 将程序返回的List包装成ArrayAdapterArrayAdapter<Pet> adapter = new ArrayAdapter<Pet>(ComplexClient.this,android.R.layout.simple_list_item_1, pets);showView.setAdapter(adapter);}catch (RemoteException 
e){e.printStackTrace();}}});}@Overridepublic void onDestroy(){super.onDestroy();// 解除绑定this.unbindService(conn);}}}}}
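The Proxy/Stub round trip traced above for PowerManagerService.isScreenOn, and generated identically for ICat/IPet, can be modeled on a plain JVM. The sketch below is hypothetical and heavily simplified: MiniParcel stands in for android.os.Parcel, and the proxy invokes the stub's onTransact directly, whereas real Binder marshals the same data through /dev/binder across a process boundary. All class names here are illustrative, not framework APIs.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// The AIDL-style interface shared by proxy and stub.
interface IPowerManager { boolean isScreenOn(); }

// Stand-in for android.os.Parcel: a FIFO of values instead of a flat buffer.
class MiniParcel {
    private final Deque<Object> vals = new ArrayDeque<>();
    void writeString(String s) { vals.add(s); }
    String readString() { return (String) vals.remove(); }
    void writeInt(int i) { vals.add(i); }
    int readInt() { return (int) vals.remove(); }
}

// Server side: plays the role of IPowerManager.Stub; a concrete subclass
// plays the role of PowerManagerService.
abstract class Stub implements IPowerManager {
    static final String DESCRIPTOR = "android.os.IPowerManager";
    static final int TRANSACTION_isScreenOn = 1;
    // Dispatches an incoming transaction code, like Stub.onTransact.
    boolean onTransact(int code, MiniParcel data, MiniParcel reply) {
        if (code == TRANSACTION_isScreenOn) {
            // Mimics data.enforceInterface(DESCRIPTOR).
            if (!DESCRIPTOR.equals(data.readString())) return false;
            reply.writeInt(isScreenOn() ? 1 : 0);
            return true;
        }
        return false;
    }
}

// Client side: plays the role of IPowerManager.Stub.Proxy. Here mRemote is
// the Stub itself; in Android it is a BinderProxy whose transact() crosses
// the process boundary via the Binder driver.
class Proxy implements IPowerManager {
    private final Stub mRemote;
    Proxy(Stub remote) { mRemote = remote; }
    public boolean isScreenOn() {
        MiniParcel data = new MiniParcel(), reply = new MiniParcel();
        data.writeString(Stub.DESCRIPTOR);   // writeInterfaceToken
        mRemote.onTransact(Stub.TRANSACTION_isScreenOn, data, reply);
        return reply.readInt() != 0;         // unmarshal the result
    }
}

public class Main {
    public static void main(String[] args) {
        Stub service = new Stub() {            // the "PowerManagerService"
            public boolean isScreenOn() { return true; }
        };
        IPowerManager pm = new Proxy(service); // the client-side proxy
        System.out.println(pm.isScreenOn());   // prints: true
    }
}
```

The point of the model is the symmetry the generated code relies on: the proxy writes the interface token and arguments in a fixed order, and the stub reads them back in exactly that order before dispatching on the transaction code.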


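For the custom-type case in section 八.2, the essential contract is that writeToParcel and CREATOR.createFromParcel agree on field order: Person writes id, name, pass, and the CREATOR reads them back in the same sequence. A hypothetical plain-JVM sketch of that round trip follows, with DataOutputStream standing in for Parcel; the writeTo/createFrom names are illustrative, not the Parcelable API.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Simplified model of the Parcelable contract used by IPet.getPets(in Person).
class Person {
    final int id; final String name; final String pass;
    Person(int id, String name, String pass) {
        this.id = id; this.name = name; this.pass = pass;
    }
    // Mirrors Person.writeToParcel: the field order defines the wire format.
    void writeTo(DataOutputStream out) throws IOException {
        out.writeInt(id); out.writeUTF(name); out.writeUTF(pass);
    }
    // Mirrors CREATOR.createFromParcel: must read in exactly the same order.
    static Person createFrom(DataInputStream in) throws IOException {
        return new Person(in.readInt(), in.readUTF(), in.readUTF());
    }
}

public class Main {
    public static void main(String[] args) throws IOException {
        // Flatten (sender side)...
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new Person(1, "sun", "sun").writeTo(new DataOutputStream(buf));
        // ...unflatten (receiver side) from the transported bytes.
        Person p = Person.createFrom(
                new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(p.id + " " + p.name + " " + p.pass); // prints: 1 sun sun
    }
}
```

Swapping the read order of name and pass would silently produce a corrupted Person rather than an error, which is why the field order in writeToParcel and createFromParcel must be treated as a frozen wire format.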