Look and Write: The Story of Ethernet and netd on Android
Source: Internet | 程序博客网 | 2024/05/29 02:25
I have been working with Android on and off for over a year now. I remember starting out on Honeycomb, and I can still recall the fright of first seeing sp<> and AIDL...
OK, before this turns into a lyrical essay:
I like running Android on a PC (and it works remarkably well). Recently I needed to debug some networking code, so the machine had to get online.
There are three mainstream options:
1. Wi-Fi
2. Mobile data
3. Ethernet
Mobile data is obviously out. I did not pick Wi-Fi because on x86 you need a Wi-Fi adapter plus an access point. So Ethernet it is. Searching the web for Android Ethernet solutions mostly turns up work done on Froyo; apparently earlier versions had no Ethernet support at all, and people did a lot of porting work.
Grepping through the newer code, I actually found a file called EthernetDataTracker.java.
(Note: all the code in this article is stock AOSP, version 4.1.)
http://source.android.com/source/downloading.html
#repo init -u https://android.googlesource.com/platform/manifest -b android-4.1.1_r4
#repo sync
Back to the topic.
The existence of this file at least shows Google has Ethernet in mind, so maybe nothing needs porting (is that actually true, though?). Either way, I needed an entry point.
I clicked the default Google Search widget on the desktop,
watched the logcat output, and spotted a friendly little class: GoogleSuggestClient.
It turns out this class has a very handy method: isNetworkConnected().
Following it into ConnectivityManager.getActiveNetworkInfo(), it is the usual service pattern again.
The name getActiveNetworkInfo suggests the system may have several network connections but only uses one at a time (Ethernet, or mobile, or Wi-Fi).
Let's keep tracing into the service.
ConnectivityService.java
public NetworkInfo getActiveNetworkInfo() {
    enforceAccessPermission();
    final int uid = Binder.getCallingUid();
    return getNetworkInfo(mActiveDefaultNetwork, uid);
}

private NetworkInfo getNetworkInfo(int networkType, int uid) {
    NetworkInfo info = null;
    if (isNetworkTypeValid(networkType)) {
        final NetworkStateTracker tracker = mNetTrackers[networkType];
        if (tracker != null) {
            info = getFilteredNetworkInfo(tracker, uid);
        }
    }
    return info;
}
So there is an array of NetworkStateTrackers, presumably the set of available connections.
And mActiveDefaultNetwork must be the default connection.
I first printed the NetworkInfo that came back, and it was null. No great surprise: my PC has neither Wi-Fi nor mobile hardware.
But my Ethernet is fine (Ubuntu on the same machine gets online without trouble), which suggests the framework above is not hooked up to the layers below.
Let's find a way to confirm that.
The natural entry point is wherever this array of NetworkStateTrackers gets initialized,
and the first place to look is the ConnectivityService constructor.
As for who calls that constructor, it should be the famous SystemServer.
Look at the constructor:
ConnectivityService.java
String[] raStrings = context.getResources().getStringArray(
        com.android.internal.R.array.radioAttributes);
for (String raString : raStrings) {
    RadioAttributes r = new RadioAttributes(raString);
    ......
    mRadioAttributes[r.mType] = r;
}

String[] naStrings = context.getResources().getStringArray(
        com.android.internal.R.array.networkAttributes);
for (String naString : naStrings) {
    try {
        NetworkConfig n = new NetworkConfig(naString);
        ......
        mNetConfigs[n.type] = n;
        mNetworksDefined++;
    } catch(Exception e) {
        // ignore it - leave the entry null
    }
}
for (int netType : mPriorityList) {
    switch (mNetConfigs[netType].radio) {
    case ConnectivityManager.TYPE_WIFI:
        mNetTrackers[netType] = new WifiStateTracker(netType,
                mNetConfigs[netType].name);
        mNetTrackers[netType].startMonitoring(context, mHandler);
        break;
    case ConnectivityManager.TYPE_MOBILE:
        mNetTrackers[netType] = new MobileDataStateTracker(netType,
                mNetConfigs[netType].name);
        mNetTrackers[netType].startMonitoring(context, mHandler);
        break;
    ......
    case ConnectivityManager.TYPE_ETHERNET:
        mNetTrackers[netType] = EthernetDataTracker.getInstance();
        mNetTrackers[netType].startMonitoring(context, mHandler);
        break;
    default:
        loge("Trying to create a DataStateTracker for an unknown radio type " +
                mNetConfigs[netType].radio);
        continue;
    }
    mCurrentLinkProperties[netType] = null;
    if (mNetTrackers[netType] != null && mNetConfigs[netType].isDefault()) {
        mNetTrackers[netType].reconnect();
    }
}
This first reads two attribute arrays from XML:
com.android.internal.R.array.radioAttributes
com.android.internal.R.array.networkAttributes
and stores them in the
mRadioAttributes and mNetConfigs arrays.
All the initialization that follows is driven by these two.
Find the XML file:
/frameworks/base/core/res/res/values/config.xml
<string-array translatable="false" name="networkAttributes">
    <item>"wifi,1,1,1,-1,true"</item>
    <item>"mobile,0,0,0,-1,true"</item>
    <item>"mobile_mms,2,0,2,60000,true"</item>
    <item>"mobile_supl,3,0,2,60000,true"</item>
    <item>"mobile_hipri,5,0,3,60000,true"</item>
    <item>"mobile_fota,10,0,2,60000,true"</item>
    <item>"mobile_ims,11,0,2,60000,true"</item>
    <item>"mobile_cbs,12,0,2,60000,true"</item>
    <item>"wifi_p2p,13,1,0,-1,true"</item>
</string-array>

<string-array translatable="false" name="radioAttributes">
    <item>"1,1"</item>
    <item>"0,1"</item>
</string-array>
See that? Only wifi and mobile are defined; Ethernet is completely ignored.
Let's add it.
networkAttributes:
<item>"eth,9,9,4,60000,true"</item>
The two 9s (network type and radio type) are what the initialization code expects later: 9 is ConnectivityManager.TYPE_ETHERNET. The 4 after them is the priority. I did not dig into the last two parameters and just copied the pattern of the existing entries.
radioAttributes:
<item>"9,1"</item>
Now look at this branch:
case ConnectivityManager.TYPE_ETHERNET:
    mNetTrackers[netType] = EthernetDataTracker.getInstance();
    mNetTrackers[netType].startMonitoring(context, mHandler);
    break;
and keep following into startMonitoring:
EthernetDataTracker.java
public void startMonitoring(Context context, Handler target) {
    mContext = context;
    mCsHandler = target;

    // register for notifications from NetworkManagement Service
    IBinder b = ServiceManager.getService(Context.NETWORKMANAGEMENT_SERVICE);
    mNMService = INetworkManagementService.Stub.asInterface(b);

    mInterfaceObserver = new InterfaceObserver(this);

    sIfaceMatch = context.getResources().getString(
            com.android.internal.R.string.config_ethernet_iface_regex);
    try {
        final String[] ifaces = mNMService.listInterfaces();
        for (String iface : ifaces) {
            if (iface.matches(sIfaceMatch)) {
                mIface = iface;
                mNMService.setInterfaceUp(iface);
                InterfaceConfiguration config = mNMService.getInterfaceConfig(iface);
                mLinkUp = config.isActive();
                if (config != null && mHwAddr == null) {
                    mHwAddr = config.getHardwareAddress();
                    if (mHwAddr != null) {
                        mNetworkInfo.setExtraInfo(mHwAddr);
                    }
                }
                reconnect();
                break;
            }
        }
    } catch (RemoteException e) {
        Log.e(TAG, "Could not get list of interfaces " + e);
    }
}
This first fetches an XML string resource,
com.android.internal.R.string.config_ethernet_iface_regex
into sIfaceMatch, which is obviously a match pattern.
The pattern is eth\\d; in Java regex terms that means eth0, eth1, and so on.
mNMService.listInterfaces()
This NMService is not what the abbreviation might rudely suggest; it is the NetworkManagementService.
So the call clearly goes off to a service that manages network connections.
Follow it into NetworkManagementService.java:
public String[] listInterfaces() {
    mContext.enforceCallingOrSelfPermission(CONNECTIVITY_INTERNAL, TAG);
    try {
        return NativeDaemonEvent.filterMessageList(
                mConnector.executeForList("interface", "list"), InterfaceListResult);
    } catch (NativeDaemonConnectorException e) {
        throw e.rethrowAsParcelableException();
    }
}
mConnector is a NativeDaemonConnector.
Keep following:
public NativeDaemonEvent[] execute(int timeout, String cmd, Object... args)
        throws NativeDaemonConnectorException {
    final ArrayList<NativeDaemonEvent> events = Lists.newArrayList();
    final int sequenceNumber = mSequenceNumber.incrementAndGet();
    final StringBuilder cmdBuilder =
            new StringBuilder(Integer.toString(sequenceNumber)).append(' ');
    final long startTime = SystemClock.elapsedRealtime();

    makeCommand(cmdBuilder, cmd, args);

    final String sentCmd = cmdBuilder.toString(); /* logCmd + \0 */
    synchronized (mDaemonLock) {
        if (mOutputStream == null) {
            throw new NativeDaemonConnectorException("missing output stream");
        } else {
            try {
                mOutputStream.write(sentCmd.getBytes(Charsets.UTF_8));
            } catch (IOException e) {
                throw new NativeDaemonConnectorException("problem sending command", e);
            }
        }
    }
    ......
}
The parameter cmd is "interface"
and args is "list".
The strings are assembled into a command with a fixed format: a sequence number, then the command and its arguments,
and finally sent out:
mOutputStream.write(sentCmd.getBytes(Charsets.UTF_8));
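Putting the pieces together, the wire format is a sequence number, the command, its arguments, and a terminating NUL byte. A hypothetical helper showing that framing (function name and buffer handling are mine, not Android's):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch of the wire format NativeDaemonConnector sends to netd:
 * "<seq> <cmd> <args...>" followed by a NUL byte. The function name
 * and error handling are illustrative, not the framework's code. */
int frame_command(char *out, size_t outsz, int seq,
                  const char *cmd, const char *arg)
{
    /* snprintf returns the length excluding the NUL; on the wire the
     * trailing '\0' is part of the message, so count it. */
    int n = snprintf(out, outsz, "%d %s %s", seq, cmd, arg);
    if (n < 0 || (size_t)n + 1 > outsz)
        return -1;
    return n + 1; /* bytes to send, including the trailing '\0' */
}
```

So a first "interface list" request goes out as the bytes "1 interface list" plus a NUL; on the far end, the receiver splits incoming data on exactly those NUL bytes.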
But where does mOutputStream come from? Where does the command actually go??
Nothing to do but trace mOutputStream:
private void listenToSocket() throws IOException {
    LocalSocket socket = null;

    try {
        socket = new LocalSocket();
        LocalSocketAddress address = new LocalSocketAddress(mSocket,
                LocalSocketAddress.Namespace.RESERVED);
        socket.connect(address);

        InputStream inputStream = socket.getInputStream();
        synchronized (mDaemonLock) {
            mOutputStream = socket.getOutputStream();
        }
So NativeDaemonConnector starts a thread, and that thread first calls into native code to create a socket:
android_net_LocalSocketImpl.cpp
static jobject socket_create(JNIEnv *env, jobject object, jboolean stream)
{
    int ret;

    ret = socket(PF_LOCAL, stream ? SOCK_STREAM : SOCK_DGRAM, 0);
    if (ret < 0) {
        jniThrowIOException(env, errno);
        return NULL;
    }
    return jniCreateFileDescriptor(env, ret);
}
With a socket in hand, it will of course connect(). Note the address: the first constructor parameter, mSocket, is a string.
That mSocket is set when NativeDaemonConnector is constructed,
and NativeDaemonConnector is spawned by NetworkManagementService:
private NetworkManagementService(Context context) {
    mContext = context;
    mConnector = new NativeDaemonConnector(
            new NetdCallbackReceiver(), "netd", 10, NETD_TAG, 160);
    mThread = new Thread(mConnector, NETD_TAG);
}
That "netd" string is mSocket.
Now look at the connect() call, which winds its messy way down into JNI:
static void socket_connect_local(JNIEnv *env, jobject object,
        jobject fileDescriptor, jstring name, jint namespaceId)
{
    int ret;
    const char *nameUtf8;
    int fd;

    nameUtf8 = env->GetStringUTFChars(name, NULL);
    fd = jniGetFDFromFileDescriptor(env, fileDescriptor);

    ret = socket_local_client_connect(
            fd, nameUtf8, namespaceId, SOCK_STREAM);

    env->ReleaseStringUTFChars(name, nameUtf8);
}

int socket_local_client_connect(int fd, const char *name, int namespaceId,
        int type)
{
    struct sockaddr_un addr;
    socklen_t alen;
    size_t namelen;
    int err;

    err = socket_make_sockaddr_un(name, namespaceId, &addr, &alen);
    if (err < 0) {
        goto error;
    }

    if (connect(fd, (struct sockaddr *)&addr, alen) < 0) {
        goto error;
    }
    return fd;

error:
    return -1;
}

int socket_make_sockaddr_un(const char *name, int namespaceId,
        struct sockaddr_un *p_addr, socklen_t *alen)
{
    memset(p_addr, 0, sizeof(*p_addr));
    size_t namelen;

    switch (namespaceId) {
    case ANDROID_SOCKET_NAMESPACE_RESERVED:
        namelen = strlen(name) + strlen(ANDROID_RESERVED_SOCKET_PREFIX);
        /* unix_path_max appears to be missing on linux */
        if (namelen > sizeof(*p_addr) -
                offsetof(struct sockaddr_un, sun_path) - 1) {
            goto error;
        }
        strcpy(p_addr->sun_path, ANDROID_RESERVED_SOCKET_PREFIX);
        strcat(p_addr->sun_path, name);
        break;
    default:
        // invalid namespace id
        return -1;
    }

    p_addr->sun_family = AF_LOCAL;
    *alen = namelen + offsetof(struct sockaddr_un, sun_path) + 1;
    return 0;

error:
    return -1;
}
This is a Unix domain socket. In the final socket_make_sockaddr_un function,
the name parameter is "netd",
and ANDROID_RESERVED_SOCKET_PREFIX is "/dev/socket/",
so the assembled socket address is "/dev/socket/netd".
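That address construction can be exercised on its own. A simplified re-creation of the RESERVED-namespace path building (the prefix value is taken from the cutils source quoted above; the function name is mine):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Mirrors what socket_make_sockaddr_un does for the RESERVED
 * namespace: prefix + name becomes the sun_path of an AF_LOCAL
 * address. Simplified sketch; error handling trimmed. */
#define RESERVED_SOCKET_PREFIX "/dev/socket/"

int make_reserved_addr(const char *name, struct sockaddr_un *addr,
                       socklen_t *alen)
{
    memset(addr, 0, sizeof(*addr));
    size_t namelen = strlen(name) + strlen(RESERVED_SOCKET_PREFIX);
    if (namelen > sizeof(addr->sun_path) - 1)
        return -1;
    strcpy(addr->sun_path, RESERVED_SOCKET_PREFIX);
    strcat(addr->sun_path, name);
    addr->sun_family = AF_LOCAL;
    /* length = path + family header + trailing NUL */
    *alen = namelen + offsetof(struct sockaddr_un, sun_path) + 1;
    return 0;
}
```

Feeding in "netd" yields exactly the "/dev/socket/netd" path the daemon side will bind.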
So the "interface list" command from earlier was sent to this socket. We also see that NativeDaemonConnector keeps listening on the same socket to learn what is happening on the other side.
At this point we can guess that over there, something receives our commands and returns results,
and that same something may also report events to us on its own initiative.
The mysterious other side.
Another search through the code
shows that the other side is the netd daemon.
Whew. This is my first serious attempt at blogging, and it is more tiring than expected.
netd will have to wait for the next installment... time for a few games of DotA.
Respect to everyone who blogs original content!
All right, finish what you start. Let's keep going.
netd = net daemon.
Its purpose is to monitor network conditions: bandwidth changes, network devices being added or removed, and so on.
netd is started while init runs.
init.rc has this section:
service netd /system/bin/netd
class main
socket netd stream 0660 root system
socket dnsproxyd stream 0660 root system
socket mdns stream 0660 root system
When init interprets this section it ends up executing service_start:
void service_start(struct service *svc, const char *dynamic_args)
{
    ... ...
    NOTICE("starting '%s'\n", svc->name);
    pid = fork();

    if (pid == 0) {
        struct socketinfo *si;
        struct svcenvinfo *ei;
        char tmp[32];
        int fd, sz;

        umask(077);
        for (si = svc->sockets; si; si = si->next) {
            int socket_type = (
                !strcmp(si->type, "stream") ? SOCK_STREAM :
                    (!strcmp(si->type, "dgram") ? SOCK_DGRAM : SOCK_SEQPACKET));
            int s = create_socket(si->name, socket_type,
                                  si->perm, si->uid, si->gid);
            if (s >= 0) {
                publish_socket(si->name, s);
            }
        }

        setpgid(0, getpid());

        /* as requested, set our gid, supplemental gids, and uid */
        if (!dynamic_args) {
            if (execve(svc->args[0], (char**) svc->args, (char**) ENV) < 0) {
                ERROR("cannot execve('%s'): %s\n", svc->args[0], strerror(errno));
            }
        } else {
            char *arg_ptrs[INIT_PARSER_MAXARGS+1];
            int arg_idx = svc->nargs;
            char *tmp = strdup(dynamic_args);
            char *next = tmp;
            char *bword;

            /* Copy the static arguments */
            memcpy(arg_ptrs, svc->args, (svc->nargs * sizeof(char *)));

            while((bword = strsep(&next, " "))) {
                arg_ptrs[arg_idx++] = bword;
                if (arg_idx == INIT_PARSER_MAXARGS)
                    break;
            }
            arg_ptrs[arg_idx] = '\0';
            execve(svc->args[0], (char**) arg_ptrs, (char**) ENV);
        }
        _exit(127);
    }
    ......
    if (properties_inited())
        notify_service_state(svc->name, "running");
}
It first forks a child process, then creates the declared sockets.
socket netd stream 0660 root system
means: create a Unix domain socket at "/dev/socket/netd", which matches exactly the socket NativeDaemonConnector connects to.
Now things are starting to line up.
Once the sockets are set up, an exec-family system call runs the binary, here /system/bin/netd.
Let's look at its main function:
/system/netd/main.cpp
int main() {

    CommandListener *cl;
    NetlinkManager *nm;
    DnsProxyListener *dpl;
    MDnsSdListener *mdnsl;

    if (!(nm = NetlinkManager::Instance())) {
        ALOGE("Unable to create NetlinkManager");
        exit(1);
    };

    cl = new CommandListener();
    nm->setBroadcaster((SocketListener *) cl);

    if (nm->start()) {
        ALOGE("Unable to start NetlinkManager (%s)", strerror(errno));
        exit(1);
    }

    dpl = new DnsProxyListener();
    if (dpl->startListener()) {
        ALOGE("Unable to start DnsProxyListener (%s)", strerror(errno));
        exit(1);
    }

    mdnsl = new MDnsSdListener();
    if (mdnsl->startListener()) {
        ALOGE("Unable to start MDnsSdListener (%s)", strerror(errno));
        exit(1);
    }

    if (cl->startListener()) {
        ALOGE("Unable to start CommandListener (%s)", strerror(errno));
        exit(1);
    }

    // Eventually we'll become the monitoring thread
    while(1) {
        sleep(1000);
    }

    ALOGI("Netd exiting");
    exit(0);
}
First it instantiates a NetlinkManager.
Netlink... that sounds like talking to the kernel.
Look at the definition:
class NetlinkManager {
private:
    static NetlinkManager *sInstance;

private:
    SocketListener *mBroadcaster;
    NetlinkHandler *mUeventHandler;
    NetlinkHandler *mRouteHandler;
    NetlinkHandler *mQuotaHandler;
    NetlinkHandler *mIfaceIdleTimerHandler;
    int mUeventSock;
    int mRouteSock;
    int mQuotaSock;
    int mIfaceIdleTimerSock;
}
A singleton.
The definition is telling: there is a SocketListener,
and a SocketListener is presumably the server end of a socket.
Then four handlers and four sockets are declared.
Seeing uevent in there all but confirms this is kernel-facing.
Back to main():
cl = new CommandListener();
class CommandListener : public FrameworkListener;
class FrameworkListener : public SocketListener;
By this inheritance chain, CommandListener is really just a SocketListener.
Its constructor is a wall of registrations:
CommandListener::CommandListener() :
        FrameworkListener("netd", true) {
    registerCmd(new InterfaceCmd());
    registerCmd(new IpFwdCmd());
    ......

    if (!sSecondaryTableCtrl)
        sSecondaryTableCtrl = new SecondaryTableController();
    if (!sTetherCtrl)
        sTetherCtrl = new TetherController();
    ......
}
Messy as it looks, it just registers a big pile of commands into mCommands.
Look at the parent class, FrameworkListener:
FrameworkListener::FrameworkListener(const char *socketName, bool withSeq) :
        SocketListener(socketName, true, withSeq) {
    init(socketName, withSeq);
}
Much simpler: construct the SocketListener, then init().
SocketListener::SocketListener(const char *socketName, bool listen) {
    init(socketName, -1, listen, false);
}

void SocketListener::init(const char *socketName, int socketFd,
        bool listen, bool useCmdNum) {
    mListen = listen;
    mSocketName = socketName;
    mSock = socketFd;
    mUseCmdNum = useCmdNum;
    pthread_mutex_init(&mClientsLock, NULL);
    mClients = new SocketClientCollection();
}
Not much happening here: the name "netd" is saved into mSocketName and an mClients collection is created.
void FrameworkListener::init(const char *socketName, bool withSeq) {
    mCommands = new FrameworkCommandCollection();
    errorRate = 0;
    mCommandCount = 0;
    mWithSeq = withSeq;
}
And this just creates the command collection.
The overall structure is actually quite clear: a Unix domain socket named "netd" is created, and it receives messages from the client (that is, NativeDaemonConnector).
How are those messages handled? A set of commands is defined; different messages are dispatched to different commands, and the results go back to the client.
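That dispatch idea, where the first word of the message selects a registered command, is easy to model. A toy dispatcher in the same spirit, with made-up handler names and return values (netd's real commands do much more, of course):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy version of FrameworkListener's dispatch: commands register
 * under a name ("interface", "ipfwd", ...) and the first word of an
 * incoming line selects which handler runs. Handler bodies and
 * return values here are illustrative placeholders. */
typedef int (*cmd_fn)(int argc, char **argv);

struct command {
    const char *name;
    cmd_fn run;
};

static int interface_cmd(int argc, char **argv) { (void)argc; (void)argv; return 100; }
static int ipfwd_cmd(int argc, char **argv)     { (void)argc; (void)argv; return 200; }

static struct command commands[] = {
    { "interface", interface_cmd },
    { "ipfwd",     ipfwd_cmd },
};

int dispatch(int argc, char **argv)
{
    for (size_t i = 0; i < sizeof(commands) / sizeof(commands[0]); i++)
        if (strcmp(argv[0], commands[i].name) == 0)
            return commands[i].run(argc, argv);
    return -1; /* unknown command */
}
```

So "interface list" lands in the interface handler, just as we will see in CommandListener::InterfaceCmd later.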
Back to main():
nm->setBroadcaster((SocketListener *) cl);
Nothing surprising here: it simply ties NetlinkManager and CommandListener together. But why do they need each other?
Because when something changes down below, say a device is added or bandwidth shifts, we cannot have the client polling forever to ask whether anything happened.
That is what this Broadcaster is for: something (a NetlinkHandler) keeps watching for kernel events, and whenever one arrives it hands it to the Broadcaster.
And the Broadcaster is, in essence, the fd of the connected client.
nm->start();
Time to start(); this looks like the key code:
int NetlinkManager::start() {
    if ((mUeventHandler = setupSocket(&mUeventSock, NETLINK_KOBJECT_UEVENT,
            0xffffffff, NetlinkListener::NETLINK_FORMAT_ASCII)) == NULL) {
        return -1;
    }

    if ((mRouteHandler = setupSocket(&mRouteSock, NETLINK_ROUTE,
            RTMGRP_LINK, NetlinkListener::NETLINK_FORMAT_BINARY)) == NULL) {
        return -1;
    }

    if ((mQuotaHandler = setupSocket(&mQuotaSock, NETLINK_NFLOG,
            NFLOG_QUOTA_GROUP, NetlinkListener::NETLINK_FORMAT_BINARY)) == NULL) {
        ALOGE("Unable to open quota2 logging socket");
        // TODO: return -1 once the emulator gets a new kernel.
    }

    return 0;
}
Here setupSocket initializes three handlers; let's step into one of them:
NetlinkHandler *NetlinkManager::setupSocket(int *sock, int netlinkFamily,
        int groups, int format) {
    struct sockaddr_nl nladdr;
    int sz = 64 * 1024;
    int on = 1;

    memset(&nladdr, 0, sizeof(nladdr));
    nladdr.nl_family = AF_NETLINK;
    nladdr.nl_pid = getpid();
    nladdr.nl_groups = groups;

    if ((*sock = socket(PF_NETLINK, SOCK_DGRAM, netlinkFamily)) < 0) {
        ALOGE("Unable to create netlink socket: %s", strerror(errno));
        return NULL;
    }

    if (bind(*sock, (struct sockaddr *) &nladdr, sizeof(nladdr)) < 0) {
        ALOGE("Unable to bind netlink socket: %s", strerror(errno));
        close(*sock);
        return NULL;
    }

    NetlinkHandler *handler = new NetlinkHandler(this, *sock, format);
    if (handler->start()) {
        ALOGE("Unable to start NetlinkHandler: %s", strerror(errno));
        close(*sock);
        return NULL;
    }
    return handler;
}
Finally, a raw socket() call: PF_NETLINK, with the family set (for the first socket) to NETLINK_KOBJECT_UEVENT.
I am not clear on every detail, but roughly: when the kernel has a NETLINK_KOBJECT_UEVENT-class event, it gets reported up through this socket.
After that comes a bind() system call, but where is listen()? (There is none: netlink sockets are SOCK_DGRAM, and listen()/accept() only apply to connection-oriented sockets.)
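The socket-plus-bind pair above is the whole netlink subscription dance. A minimal standalone version, using NETLINK_ROUTE with the RTMGRP_LINK group (link up/down events) since that combination needs no special privileges; this is a sketch of the pattern, not netd's code:

```c
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

/* The essence of NetlinkManager::setupSocket(): open a PF_NETLINK
 * datagram socket and bind it to the multicast groups of interest.
 * NETLINK_ROUTE / RTMGRP_LINK (interface up/down notifications) is
 * used here because it works unprivileged. Illustrative sketch. */
int open_netlink(int family, unsigned groups)
{
    struct sockaddr_nl nladdr;
    memset(&nladdr, 0, sizeof(nladdr));
    nladdr.nl_family = AF_NETLINK;
    nladdr.nl_groups = groups;
    /* nl_pid = 0 lets the kernel pick a unique id; netd uses
     * getpid(), which is fine with one netlink socket per process. */

    int sock = socket(PF_NETLINK, SOCK_DGRAM, family);
    if (sock < 0)
        return -1;
    if (bind(sock, (struct sockaddr *)&nladdr, sizeof(nladdr)) < 0) {
        close(sock);
        return -1;
    }
    return sock;
}
```

Once bound, every matching kernel event arrives as a datagram on this fd, which is exactly what the handler's read loop waits on.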
Next, a NetlinkHandler is created.
NetlinkHandler inherits from NetlinkListener,
and NetlinkListener inherits from SocketListener.
Ah... SocketListener again.
Clearly it is listening for the kernel's voice: the kernel behaves like a client sending it data, except that the sending side is implemented in kernel space.
Now look at handler->start(),
which eventually calls into SocketListener's startListener method:
int SocketListener::startListener() {

    if (mListen && listen(mSock, 4) < 0) {
        SLOGE("Unable to listen on socket (%s)", strerror(errno));
        return -1;
    } else if (!mListen)
        mClients->push_back(new SocketClient(mSock, false, mUseCmdNum));

    if (pipe(mCtrlPipe)) {
        SLOGE("pipe failed (%s)", strerror(errno));
        return -1;
    }

    if (pthread_create(&mThread, NULL, SocketListener::threadStart, this)) {
        SLOGE("pthread_create (%s)", strerror(errno));
        return -1;
    }

    return 0;
}
Stepping into SocketListener's threadStart method, we eventually land in:
void SocketListener::runListener() {

    SocketClientCollection *pendingList = new SocketClientCollection();

    while(1) {
        SocketClientCollection::iterator it;
        fd_set read_fds;
        int rc = 0;
        int max = -1;

        FD_ZERO(&read_fds);

        if (mListen) {
            max = mSock;
            FD_SET(mSock, &read_fds);
        }

        FD_SET(mCtrlPipe[0], &read_fds);
        if (mCtrlPipe[0] > max)
            max = mCtrlPipe[0];

        pthread_mutex_lock(&mClientsLock);
        for (it = mClients->begin(); it != mClients->end(); ++it) {
            int fd = (*it)->getSocket();
            FD_SET(fd, &read_fds);
            if (fd > max)
                max = fd;
        }
        pthread_mutex_unlock(&mClientsLock);
        SLOGV("mListen=%d, max=%d, mSocketName=%s", mListen, max, mSocketName);
        if ((rc = select(max + 1, &read_fds, NULL, NULL, NULL)) < 0) {
            if (errno == EINTR)
                continue;
            SLOGE("select failed (%s) mListen=%d, max=%d",
                    strerror(errno), mListen, max);
            sleep(1);
            continue;
        } else if (!rc)
            continue;

        if (FD_ISSET(mCtrlPipe[0], &read_fds))
            break;
        if (mListen && FD_ISSET(mSock, &read_fds)) {
            struct sockaddr addr;
            socklen_t alen;
            int c;

            do {
                alen = sizeof(addr);
                c = accept(mSock, &addr, &alen);
                SLOGV("%s got %d from accept", mSocketName, c);
            } while (c < 0 && errno == EINTR);
            if (c < 0) {
                SLOGE("accept failed (%s)", strerror(errno));
                sleep(1);
                continue;
            }
            pthread_mutex_lock(&mClientsLock);
            mClients->push_back(new SocketClient(c, true, mUseCmdNum));
            pthread_mutex_unlock(&mClientsLock);
        }

        /* Add all active clients to the pending list first */
        pendingList->clear();
        pthread_mutex_lock(&mClientsLock);
        for (it = mClients->begin(); it != mClients->end(); ++it) {
            int fd = (*it)->getSocket();
            if (FD_ISSET(fd, &read_fds)) {
                pendingList->push_back(*it);
            }
        }
        pthread_mutex_unlock(&mClientsLock);

        /* Process the pending list, since it is owned by the thread,
         * there is no need to lock it */
        while (!pendingList->empty()) {
            /* Pop the first item from the list */
            it = pendingList->begin();
            SocketClient* c = *it;
            pendingList->erase(it);
            /* Process it, if false is returned and our sockets are
             * connection-based, remove and destroy it */
            if (!onDataAvailable(c) && mListen) {
                /* Remove the client from our array */
                SLOGV("going to zap %d for %s", c->getSocket(), mSocketName);
                pthread_mutex_lock(&mClientsLock);
                for (it = mClients->begin(); it != mClients->end(); ++it) {
                    if (*it == c) {
                        mClients->erase(it);
                        break;
                    }
                }
                pthread_mutex_unlock(&mClientsLock);
                /* Remove our reference to the client */
                c->decRef();
            }
        }
    }
    delete pendingList;
}
This function is seriously complicated... more than my heart can take,
so I just picked out the key lines:
rc = select(max + 1, &read_fds, NULL, NULL, NULL);
c = accept(mSock, &addr, &alen);
mClients->push_back(new SocketClient(c, true, mUseCmdNum));
onDataAvailable(c);
The fd set handed to select() is where the kernel events (and client sockets) arrive;
after accept() comes onDataAvailable().
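Stripped of the client bookkeeping, runListener reduces to: select() on the interesting fds, then read whichever one is ready. A one-iteration sketch, with a pipe standing in for the socket:

```c
#include <assert.h>
#include <string.h>
#include <sys/select.h>
#include <unistd.h>

/* Minimal shape of SocketListener::runListener(): block in select()
 * on a set of fds, then read the one that became readable. A single
 * iteration instead of a while(1) loop, and one fd instead of the
 * full client collection. Illustrative sketch only. */
int wait_and_read(int fd, char *buf, size_t bufsz)
{
    fd_set read_fds;
    FD_ZERO(&read_fds);
    FD_SET(fd, &read_fds);

    /* NULL timeout: block until something is readable, as the real
     * listener thread does. */
    if (select(fd + 1, &read_fds, NULL, NULL, NULL) <= 0)
        return -1;
    if (!FD_ISSET(fd, &read_fds))
        return -1;
    return (int)read(fd, buf, bufsz);
}
```

In the real loop the control pipe, the listening socket, and every connected client all sit in the same fd set, which is the part that makes the function look so intimidating.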
Step into onDataAvailable!
bool NetlinkListener::onDataAvailable(SocketClient *cli)
{
    int socket = cli->getSocket();
    ssize_t count;
    uid_t uid = -1;

    count = TEMP_FAILURE_RETRY(uevent_kernel_multicast_uid_recv(
            socket, mBuffer, sizeof(mBuffer), &uid));
    if (count < 0) {
        if (uid > 0)
            LOG_EVENT_INT(65537, uid);
        SLOGE("recvmsg failed (%s)", strerror(errno));
        return false;
    }

    NetlinkEvent *evt = new NetlinkEvent();
    if (!evt->decode(mBuffer, count, mFormat)) {
        SLOGE("Error decoding NetlinkEvent");
    } else {
        onEvent(evt);
    }

    delete evt;
    return true;
}
Since the kernel has signaled an event, uevent_kernel_multicast_uid_recv first reads it out.
Then it is decoded; never mind exactly how, it is just a fixed format.
Finally, onEvent() is called.
That function is long too, so here is just the beginning:
void NetlinkHandler::onEvent(NetlinkEvent *evt) {
    const char *subsys = evt->getSubsystem();
    if (!subsys) {
        ALOGW("No subsystem found in netlink event");
        return;
    }

    if (!strcmp(subsys, "net")) {
        int action = evt->getAction();
        const char *iface = evt->findParam("INTERFACE");

        if (action == evt->NlActionAdd) {
            notifyInterfaceAdded(iface);
            ......

void NetlinkHandler::notifyInterfaceAdded(const char *name) {
    char msg[255];
    snprintf(msg, sizeof(msg), "Iface added %s", name);

    mNm->getBroadcaster()->sendBroadcast(ResponseCode::InterfaceChange,
            msg, false);
}
Now it goes looking for the Broadcaster, which is the CommandListener from before.
This is how events reach the framework; the entire chain is driven by kernel events.
So how are commands arriving from the framework handled?
Keep reading main.cpp:
cl->startListener()
This works the same way as before,
except that onDataAvailable becomes this:
bool FrameworkListener::onDataAvailable(SocketClient *c) {
    char buffer[255];
    int len;

    len = TEMP_FAILURE_RETRY(read(c->getSocket(), buffer, sizeof(buffer)));
    if (len < 0) {
        SLOGE("read() failed (%s)", strerror(errno));
        return false;
    } else if (!len)
        return false;

    int offset = 0;
    int i;

    for (i = 0; i < len; i++) {
        if (buffer[i] == '\0') {
            /* IMPORTANT: dispatchCommand() expects a zero-terminated string */
            dispatchCommand(c, buffer + offset);
            offset = i + 1;
        }
    }
    return true;
}
Same idea: read the socket, then process.
dispatchCommand is another hugely complex function; a few selected lines:
void FrameworkListener::dispatchCommand(SocketClient *cli, char *data) {
    ......
    for (i = mCommands->begin(); i != mCommands->end(); ++i) {
        FrameworkCommand *c = *i;

        if (!strcmp(argv[0], c->getCommand())) {
            if (c->runCommand(cli, argc, argv)) {
                SLOGW("Handler '%s' error (%s)", c->getCommand(), strerror(errno));
            }
            goto out;
        }
    }
A set of commands was registered at startup, each with its own name, for example "interface".
So the loop here finds that interface command.
Its runCommand is yet another very long function, so here is just the branch for the "list" argument,
which matches the "interface" "list" pair from the very beginning:
int CommandListener::InterfaceCmd::runCommand(SocketClient *cli,
        int argc, char **argv) {
    if (argc < 2) {
        cli->sendMsg(ResponseCode::CommandSyntaxError, "Missing argument", false);
        return 0;
    }

    if (!strcmp(argv[1], "list")) {
        DIR *d;
        struct dirent *de;

        if (!(d = opendir("/sys/class/net"))) {
            cli->sendMsg(ResponseCode::OperationFailed,
                    "Failed to open sysfs dir", true);
            return 0;
        }

        while((de = readdir(d))) {
            if (de->d_name[0] == '.')
                continue;
            cli->sendMsg(ResponseCode::InterfaceListResult, de->d_name, false);
        }
        closedir(d);
        cli->sendMsg(ResponseCode::CommandOkay, "Interface list completed", false);
        return 0;
The handling is simple: open /sys/class/net, that is, consult Linux's sysfs.
If your Ethernet driver is working, there should be an eth0 (or a similarly named entry) under /sys/class/net.
All the interface names under /sys/class/net are collected and sent back to the client.
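The "list" branch is really just readdir() minus dotfiles. A testable re-creation that takes the directory as a parameter instead of hard-coding /sys/class/net (function name and buffer layout are mine):

```c
#include <assert.h>
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* The core of InterfaceCmd's "list" branch: readdir() a directory
 * and report every entry that is not a dotfile. Parameterized on the
 * path so the sketch can be exercised on a scratch directory rather
 * than the real /sys/class/net. */
int list_entries(const char *path, char names[][64], int max)
{
    DIR *d = opendir(path);
    if (!d)
        return -1;

    int n = 0;
    struct dirent *de;
    while ((de = readdir(d)) && n < max) {
        if (de->d_name[0] == '.')
            continue; /* skips ".", "..", and hidden entries */
        snprintf(names[n++], 64, "%s", de->d_name);
    }
    closedir(d);
    return n; /* number of entries collected */
}
```

On a working system, pointing this at /sys/class/net would return names like eth0 and lo, one sendMsg() per entry in the real code.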
Now return to EthernetDataTracker.startMonitoring() (quite a jump back):
InterfaceConfiguration config = mNMService.getInterfaceConfig(iface);
mLinkUp = config.isActive();
if (config != null && mHwAddr == null) {
    mHwAddr = config.getHardwareAddress();
    if (mHwAddr != null) {
        mNetworkInfo.setExtraInfo(mHwAddr);
    }
}
reconnect();
This fetches some information about the interface (eth0 in my case) and saves it.
How is it fetched? Most likely NativeDaemonConnector sends yet another command to netd.
Finally, reconnect():
public boolean reconnect() {
    if (mLinkUp) {
        mTeardownRequested.set(false);
        runDhcp();
    }
    return mLinkUp;
}
Time for DHCP!
private void runDhcp() {
    Thread dhcpThread = new Thread(new Runnable() {
        public void run() {
            DhcpInfoInternal dhcpInfoInternal = new DhcpInfoInternal();
            if (!NetworkUtils.runDhcp(mIface, dhcpInfoInternal)) {
                Log.e(TAG, "DHCP request error:" + NetworkUtils.getDhcpError());
                return;
            }
            mLinkProperties = dhcpInfoInternal.makeLinkProperties();
            mLinkProperties.setInterfaceName(mIface);

            mNetworkInfo.setDetailedState(DetailedState.CONNECTED, null, mHwAddr);
            Message msg = mCsHandler.obtainMessage(EVENT_STATE_CHANGED, mNetworkInfo);
            msg.sendToTarget();
        }
    });
    dhcpThread.start();
}
If you keep following this, you end up in the native layer:
int dhcp_do_request(const char *interface,
                    char *ipaddr,
                    char *gateway,
                    uint32_t *prefixLength,
                    char *dns1,
                    char *dns2,
                    char *server,
                    uint32_t *lease,
                    char *vendorInfo)
{
    char result_prop_name[PROPERTY_KEY_MAX];
    char daemon_prop_name[PROPERTY_KEY_MAX];
    char prop_value[PROPERTY_VALUE_MAX] = {'\0'};
    char daemon_cmd[PROPERTY_VALUE_MAX * 2];
    const char *ctrl_prop = "ctl.start";
    const char *desired_status = "running";

    /* Interface name after converting p2p0-p2p0-X to p2p to reuse system properties */
    char p2p_interface[MAX_INTERFACE_LENGTH];

    get_p2p_interface_replacement(interface, p2p_interface);

    snprintf(result_prop_name, sizeof(result_prop_name), "%s.%s.result",
            DHCP_PROP_NAME_PREFIX,
            p2p_interface);

    snprintf(daemon_prop_name, sizeof(daemon_prop_name), "%s_%s",
            DAEMON_PROP_NAME,
            p2p_interface);

    /* Erase any previous setting of the dhcp result property */
    property_set(result_prop_name, "");

    /* Start the daemon and wait until it's ready */
    if (property_get(HOSTNAME_PROP_NAME, prop_value, NULL)
            && (prop_value[0] != '\0'))
        snprintf(daemon_cmd, sizeof(daemon_cmd), "%s_%s:-h %s %s", DAEMON_NAME,
                 p2p_interface, prop_value, interface);
    else
        snprintf(daemon_cmd, sizeof(daemon_cmd), "%s_%s:%s", DAEMON_NAME,
                 p2p_interface, interface);
    memset(prop_value, '\0', PROPERTY_VALUE_MAX);
    property_set(ctrl_prop, daemon_cmd);
    if (wait_for_property(daemon_prop_name, desired_status, 10) < 0) {
        snprintf(errmsg, sizeof(errmsg), "%s",
                "Timed out waiting for dhcpcd to start");
        return -1;
    }

    /* Wait for the daemon to return a result */
    if (wait_for_property(result_prop_name, NULL, 30) < 0) {
        snprintf(errmsg, sizeof(errmsg), "%s",
                "Timed out waiting for DHCP to finish");
        return -1;
    }

    if (!property_get(result_prop_name, prop_value, NULL)) {
        /* shouldn't ever happen, given the success of wait_for_property() */
        snprintf(errmsg, sizeof(errmsg), "%s", "DHCP result property was not set");
        return -1;
    }
    if (strcmp(prop_value, "ok") == 0) {
        char dns_prop_name[PROPERTY_KEY_MAX];
        if (fill_ip_info(interface, ipaddr, gateway, prefixLength,
                dns1, dns2, server, lease, vendorInfo) == -1) {
            return -1;
        }

        /* copy dns data to system properties - TODO - remove this after we have async
         * notification of renewal's */
        snprintf(dns_prop_name, sizeof(dns_prop_name), "net.%s.dns1", interface);
        property_set(dns_prop_name, *dns1 ? ipaddr_to_string(*dns1) : "");
        snprintf(dns_prop_name, sizeof(dns_prop_name), "net.%s.dns2", interface);
        property_set(dns_prop_name, *dns2 ? ipaddr_to_string(*dns2) : "");
        return 0;
    } else {
        snprintf(errmsg, sizeof(errmsg), "DHCP result was %s", prop_value);
        return -1;
    }
}