Android 8.0 System Source Code Analysis -- How the Looper and MessageQueue Are Created


     An Android application process is message-driven: its main thread sits in an endless loop, pulling messages off a queue and handling them until the process exits. When a process starts, the framework sets up this Looper loop for it; the Looper keeps taking messages from the main thread's MessageQueue and dispatching them, and goes to sleep when there are none. Compared with the version analyzed in Lao Luo's book, the Looper code in the Android 8.0 sources has barely changed, because this model, like the Binder IPC mechanism, is already very mature. In this article we look at how a process's Looper and MessageQueue are created when the process starts.
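
     Before diving into the framework sources, it helps to see the same pattern in miniature. The sketch below is my own illustration (essentially the classic LooperThread example), not framework code: it prepares a Looper on an ordinary worker thread, creates a Handler bound to it, and then enters the loop. The main thread differs only in that ActivityThread.main() calls prepareMainLooper() and loop() for us, and its queue is not allowed to quit.

import android.os.Handler;
import android.os.Looper;
import android.os.Message;
import android.util.Log;

class LooperThread extends Thread {
    public Handler mHandler;

    @Override
    public void run() {
        // 1. Create the Looper and MessageQueue for this thread (the main
        //    thread gets this via Looper.prepareMainLooper()).
        Looper.prepare();

        // 2. A Handler created here is bound to this thread's Looper; messages
        //    sent through it are dispatched on this thread.
        mHandler = new Handler() {
            @Override
            public void handleMessage(Message msg) {
                Log.d("LooperThread", "got message what=" + msg.what);
            }
        };

        // 3. Enter the message loop; this call blocks until quit() is called.
        Looper.loop();
    }
}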

     The entry point of an application process is the main function of the ActivityThread class; we will see this when we analyze process startup later. ActivityThread lives at frameworks\base\core\java\android\app\ActivityThread.java, and its main function is as follows:

    public static void main(String[] args) {
        Trace.traceBegin(Trace.TRACE_TAG_ACTIVITY_MANAGER, "ActivityThreadMain");
        SamplingProfilerIntegration.start();

        // CloseGuard defaults to true and can be quite spammy.  We
        // disable it here, but selectively enable it later (via
        // StrictMode) on debug builds, but using DropBox, not logs.
        CloseGuard.setEnabled(false);

        Environment.initForCurrentUser();

        // Set the reporter for event logging in libcore
        EventLogger.setReporter(new EventLoggingReporter());

        // Make sure TrustedCertificateStore looks in the right place for CA certificates
        final File configDir = Environment.getUserConfigDirectory(UserHandle.myUserId());
        TrustedCertificateStore.setDefaultUserDirectory(configDir);

        Process.setArgV0("<pre-initialized>");

        Looper.prepareMainLooper();

        ActivityThread thread = new ActivityThread();
        thread.attach(false);

        if (sMainThreadHandler == null) {
            sMainThreadHandler = thread.getHandler();
        }

        if (false) {
            Looper.myLooper().setMessageLogging(new
                    LogPrinter(Log.DEBUG, "ActivityThread"));
        }

        // End of event ActivityThreadMain.
        Trace.traceEnd(Trace.TRACE_TAG_ACTIVITY_MANAGER);
        Looper.loop();

        throw new RuntimeException("Main thread loop unexpectedly exited");
    }
     
     The two statements that matter most here are Looper.prepareMainLooper() and Looper.loop(). The first prepares the Looper for the current main thread by calling a static method of the Looper class; the second starts the loop, which keeps pulling messages from the MessageQueue that was just created and dispatches them. Let's look at the first statement now and come back to the second afterwards. The Looper class lives at frameworks\base\core\java\android\os\Looper.java, and its static method prepareMainLooper is as follows:

    /**
     * Initialize the current thread as a looper, marking it as an
     * application's main looper. The main looper for your application
     * is created by the Android environment, so you should never need
     * to call this function yourself.  See also: {@link #prepare()}
     */
    public static void prepareMainLooper() {
        prepare(false);
        synchronized (Looper.class) {
            if (sMainLooper != null) {
                throw new IllegalStateException("The main Looper has already been prepared.");
            }
            sMainLooper = myLooper();
        }
    }

     prepareMainLooper first calls prepare to initialize the main fields. The boolean argument false is eventually passed down to the MessageQueue constructor and indicates whether the queue is allowed to quit; since this is the main thread's queue, quitting is not allowed. After initialization, myLooper() is called and its return value is stored in the static field sMainLooper, which is of type Looper. As the check inside the synchronized block shows, if sMainLooper is already non-null an IllegalStateException is thrown, so prepareMainLooper may only be called once. Next, let's look at the implementation of prepare:

    private static void prepare(boolean quitAllowed) {
        if (sThreadLocal.get() != null) {
            throw new RuntimeException("Only one Looper may be created per thread");
        }
        sThreadLocal.set(new Looper(quitAllowed));
    }

     sThreadLocal is a static field of Looper of type ThreadLocal<Looper>, so each thread has its own slot in it. The if check shows that prepare can likewise be called only once per thread: when the process has just started, sThreadLocal.get() returns null, so a new Looper is constructed and stored into sThreadLocal for the current thread. The Looper constructor looks like this:

    private Looper(boolean quitAllowed) {
        mQueue = new MessageQueue(quitAllowed);
        mThread = Thread.currentThread();
    }

     The constructor creates a MessageQueue and assigns it to the field mQueue, then records the current thread in mThread. A Looper and its MessageQueue always belong to a specific thread, and mThread is how we know which thread that is. Let's continue with the MessageQueue constructor. MessageQueue lives at frameworks\base\core\java\android\os\MessageQueue.java, and its constructor is as follows:

    MessageQueue(boolean quitAllowed) {
        mQuitAllowed = quitAllowed;
        mPtr = nativeInit();
    }

     The main work here is the call to nativeInit. nativeInit is implemented in android_os_MessageQueue.cpp (frameworks\base\core\jni\android_os_MessageQueue.cpp); the JNI implementation is android_os_MessageQueue_nativeInit, whose source is as follows:

static jlong android_os_MessageQueue_nativeInit(JNIEnv* env, jclass clazz) {
    NativeMessageQueue* nativeMessageQueue = new NativeMessageQueue();
    if (!nativeMessageQueue) {
        jniThrowRuntimeException(env, "Unable to allocate native queue");
        return 0;
    }

    nativeMessageQueue->incStrong(env);
    return reinterpret_cast<jlong>(nativeMessageQueue);
}

     This creates a native-layer NativeMessageQueue; if allocation fails, an exception is thrown, otherwise the pointer to the native object is returned to the Java layer (and stored in mPtr). Next, let's look at the NativeMessageQueue constructor, which is also implemented in android_os_MessageQueue.cpp:

NativeMessageQueue::NativeMessageQueue() :
        mPollEnv(NULL), mPollObj(NULL), mExceptionObj(NULL) {
    mLooper = Looper::getForThread();
    if (mLooper == NULL) {
        mLooper = new Looper(false);
        Looper::setForThread(mLooper);
    }
}

     The constructor calls Looper::getForThread() to fetch the native Looper for the current thread; if there is none yet, it constructs a native Looper and registers it for the thread with Looper::setForThread(). Let's look at the native Looper constructor, implemented in Looper.cpp (system\core\libutils\Looper.cpp):

Looper::Looper(bool allowNonCallbacks) :
        mAllowNonCallbacks(allowNonCallbacks), mSendingMessage(false),
        mPolling(false), mEpollFd(-1), mEpollRebuildRequired(false),
        mNextRequestSeq(0), mResponseIndex(0), mNextMessageUptime(LLONG_MAX) {
    mWakeEventFd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
    LOG_ALWAYS_FATAL_IF(mWakeEventFd < 0, "Could not make wake event fd: %s",
                        strerror(errno));

    AutoMutex _l(mLock);
    rebuildEpollLocked();
}

     The constructor first calls the eventfd system call, which returns a file descriptor that can be read from and written to like any other file; this is the descriptor the Looper later uses to wake up the polling thread. It then calls rebuildEpollLocked to finish the initialization; that function's source is as follows:

void Looper::rebuildEpollLocked() {
    // Close old epoll instance if we have one.
    if (mEpollFd >= 0) {
#if DEBUG_CALLBACKS
        ALOGD("%p ~ rebuildEpollLocked - rebuilding epoll set", this);
#endif
        close(mEpollFd);
    }

    // Allocate the new epoll instance and register the wake pipe.
    mEpollFd = epoll_create(EPOLL_SIZE_HINT);
    LOG_ALWAYS_FATAL_IF(mEpollFd < 0, "Could not create epoll instance: %s", strerror(errno));

    struct epoll_event eventItem;
    memset(& eventItem, 0, sizeof(epoll_event)); // zero out unused members of data field union
    eventItem.events = EPOLLIN;
    eventItem.data.fd = mWakeEventFd;
    int result = epoll_ctl(mEpollFd, EPOLL_CTL_ADD, mWakeEventFd, & eventItem);
    LOG_ALWAYS_FATAL_IF(result != 0, "Could not add wake event fd to epoll instance: %s",
                        strerror(errno));

    for (size_t i = 0; i < mRequests.size(); i++) {
        const Request& request = mRequests.valueAt(i);
        struct epoll_event eventItem;
        request.initEventItem(&eventItem);

        int epollResult = epoll_ctl(mEpollFd, EPOLL_CTL_ADD, request.fd, & eventItem);
        if (epollResult < 0) {
            ALOGE("Error adding epoll events for fd %d while rebuilding epoll set: %s",
                  request.fd, strerror(errno));
        }
    }
}

     As the name suggests, this method (re)builds the epoll instance for the thread. If mEpollFd is greater than or equal to 0, an epoll instance was created before, so it is closed first. Linux epoll is used in three steps: epoll_create creates an epoll instance and returns a file descriptor for it; epoll_ctl with EPOLL_CTL_ADD registers the file descriptors we want to monitor (here the wake event fd, with EPOLLIN so the thread is notified when it becomes readable); and epoll_wait, which we will meet later in pollInner, blocks until one of the registered descriptors is ready or the timeout expires. The final for loop re-registers every descriptor already recorded in mRequests. Once this method returns, initialization is complete: the Java-layer Looper and MessageQueue, the native Looper and NativeMessageQueue, and the epoll registration are all ready.

     Now let's look at the Looper.loop() call in ActivityThread.main(). The source of Looper.loop() is as follows:

    /**
     * Run the message queue in this thread. Be sure to call
     * {@link #quit()} to end the loop.
     */
    public static void loop() {
        final Looper me = myLooper();
        if (me == null) {
            throw new RuntimeException("No Looper; Looper.prepare() wasn't called on this thread.");
        }
        final MessageQueue queue = me.mQueue;

        // Make sure the identity of this thread is that of the local process,
        // and keep track of what that identity token actually is.
        Binder.clearCallingIdentity();
        final long ident = Binder.clearCallingIdentity();

        for (;;) {
            Message msg = queue.next(); // might block
            if (msg == null) {
                // No message indicates that the message queue is quitting.
                return;
            }

            // This must be in a local variable, in case a UI event sets the logger
            final Printer logging = me.mLogging;
            if (logging != null) {
                logging.println(">>>>> Dispatching to " + msg.target + " " +
                        msg.callback + ": " + msg.what);
            }

            final long slowDispatchThresholdMs = me.mSlowDispatchThresholdMs;

            final long traceTag = me.mTraceTag;
            if (traceTag != 0 && Trace.isTagEnabled(traceTag)) {
                Trace.traceBegin(traceTag, msg.target.getTraceName(msg));
            }
            final long start = (slowDispatchThresholdMs == 0) ? 0 : SystemClock.uptimeMillis();
            final long end;
            try {
                msg.target.dispatchMessage(msg);
                end = (slowDispatchThresholdMs == 0) ? 0 : SystemClock.uptimeMillis();
            } finally {
                if (traceTag != 0) {
                    Trace.traceEnd(traceTag);
                }
            }
            if (slowDispatchThresholdMs > 0) {
                final long time = end - start;
                if (time > slowDispatchThresholdMs) {
                    Slog.w(TAG, "Dispatch took " + time + "ms on "
                            + Thread.currentThread().getName() + ", h=" +
                            msg.target + " cb=" + msg.callback + " msg=" + msg.what);
                }
            }

            if (logging != null) {
                logging.println("<<<<< Finished to " + msg.target + " " + msg.callback);
            }

            // Make sure that during the course of dispatching the
            // identity of the thread wasn't corrupted.
            final long newIdent = Binder.clearCallingIdentity();
            if (ident != newIdent) {
                Log.wtf(TAG, "Thread identity changed from 0x"
                        + Long.toHexString(ident) + " to 0x"
                        + Long.toHexString(newIdent) + " while dispatching to "
                        + msg.target.getClass().getName() + " "
                        + msg.callback + " what=" + msg.what);
            }

            msg.recycleUnchecked();
        }
    }

     loop() first calls myLooper() to verify that the preparation above actually happened; if not, a RuntimeException is thrown right away. Then a for (;;) loop starts pulling messages. queue.next() fetches the next message and, as the comment says, may block; if the returned msg is null, the message loop is quitting and loop() simply returns. Once a message is obtained, msg.target.dispatchMessage(msg) hands it to its target for processing. target is of type Handler and is set when the message is posted to this MessageQueue, which we will see when we analyze message sending. After dispatching, msg.recycleUnchecked() recycles the message. Message objects come from an object pool, just like the Parcel objects we saw in the Binder IPC analysis; since the message loop runs very frequently, pooling is essential to avoid needless allocations. Next, let's focus on how queue.next() retrieves the next message. The method is implemented in MessageQueue (frameworks\base\core\java\android\os\MessageQueue.java); its source is as follows:

    Message next() {
        // Return here if the message loop has already quit and been disposed.
        // This can happen if the application tries to restart a looper after quit
        // which is not supported.
        final long ptr = mPtr;
        if (ptr == 0) {
            return null;
        }

        int pendingIdleHandlerCount = -1; // -1 only during first iteration
        int nextPollTimeoutMillis = 0;
        for (;;) {
            if (nextPollTimeoutMillis != 0) {
                Binder.flushPendingCommands();
            }

            nativePollOnce(ptr, nextPollTimeoutMillis);

            synchronized (this) {
                // Try to retrieve the next message.  Return if found.
                final long now = SystemClock.uptimeMillis();
                Message prevMsg = null;
                Message msg = mMessages;
                if (msg != null && msg.target == null) {
                    // Stalled by a barrier.  Find the next asynchronous message in the queue.
                    do {
                        prevMsg = msg;
                        msg = msg.next;
                    } while (msg != null && !msg.isAsynchronous());
                }
                if (msg != null) {
                    if (now < msg.when) {
                        // Next message is not ready.  Set a timeout to wake up when it is ready.
                        nextPollTimeoutMillis = (int) Math.min(msg.when - now, Integer.MAX_VALUE);
                    } else {
                        // Got a message.
                        mBlocked = false;
                        if (prevMsg != null) {
                            prevMsg.next = msg.next;
                        } else {
                            mMessages = msg.next;
                        }
                        msg.next = null;
                        if (DEBUG) Log.v(TAG, "Returning message: " + msg);
                        msg.markInUse();
                        return msg;
                    }
                } else {
                    // No more messages.
                    nextPollTimeoutMillis = -1;
                }

                // Process the quit message now that all pending messages have been handled.
                if (mQuitting) {
                    dispose();
                    return null;
                }

                // If first time idle, then get the number of idlers to run.
                // Idle handles only run if the queue is empty or if the first message
                // in the queue (possibly a barrier) is due to be handled in the future.
                if (pendingIdleHandlerCount < 0
                        && (mMessages == null || now < mMessages.when)) {
                    pendingIdleHandlerCount = mIdleHandlers.size();
                }
                if (pendingIdleHandlerCount <= 0) {
                    // No idle handlers to run.  Loop and wait some more.
                    mBlocked = true;
                    continue;
                }

                if (mPendingIdleHandlers == null) {
                    mPendingIdleHandlers = new IdleHandler[Math.max(pendingIdleHandlerCount, 4)];
                }
                mPendingIdleHandlers = mIdleHandlers.toArray(mPendingIdleHandlers);
            }

            // Run the idle handlers.
            // We only ever reach this code block during the first iteration.
            for (int i = 0; i < pendingIdleHandlerCount; i++) {
                final IdleHandler idler = mPendingIdleHandlers[i];
                mPendingIdleHandlers[i] = null; // release the reference to the handler

                boolean keep = false;
                try {
                    keep = idler.queueIdle();
                } catch (Throwable t) {
                    Log.wtf(TAG, "IdleHandler threw exception", t);
                }

                if (!keep) {
                    synchronized (this) {
                        mIdleHandlers.remove(idler);
                    }
                }
            }

            // Reset the idle handler count to 0 so we do not run them again.
            pendingIdleHandlerCount = 0;

            // While calling an idle handler, a new message could have been delivered
            // so go back and look again for a pending message without waiting.
            nextPollTimeoutMillis = 0;
        }
    }

     First comes a sanity check: if mPtr is 0, the native message queue has already been disposed because the looper has quit, so null is returned. Then comes another for (;;) loop. Every message posted to this thread ends up chained off the field mMessages: Message has a member next, itself of type Message, so the queue is effectively a singly linked list ordered by delivery time, onto which posted messages keep being linked. (If the message at the head has a null target, it is a sync barrier and the loop skips ahead to the next asynchronous message.) Looking at the head message msg: if it is not null and the current time is earlier than its delivery time (now < msg.when), it is not due yet, so the thread must sleep, and the sleep duration is nextPollTimeoutMillis; otherwise the message is due, it is unlinked and returned to loop() for dispatch. If there is no message at all, nextPollTimeoutMillis is set to -1 and the queue then checks whether any IdleHandlers need to run; all IdleHandlers are kept in mIdleHandlers, and running them here makes efficient use of the thread's otherwise idle time. One detail worth pointing out is keep = idler.queueIdle(): the return value says whether to keep the handler. If it returns false, the IdleHandler is removed from mIdleHandlers, and its queueIdle will not be called again on later idle passes.
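
     As a concrete illustration of that last point (my own example with a hypothetical IdleLogger class, not framework code), this is how application code typically registers an IdleHandler; returning false makes it one-shot, exactly as the removal logic above implies:

import android.os.Looper;
import android.os.MessageQueue;
import android.util.Log;

public class IdleLogger {
    /** Registers a one-shot IdleHandler on the main thread's message queue. */
    public static void logWhenMainThreadIdle() {
        MessageQueue mainQueue = Looper.getMainLooper().getQueue();
        mainQueue.addIdleHandler(new MessageQueue.IdleHandler() {
            @Override
            public boolean queueIdle() {
                Log.d("IdleLogger", "main thread is idle");
                // Returning false tells next() to remove this handler from
                // mIdleHandlers, so it will not run on later idle passes.
                return false;
            }
        });
    }
}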

     Now let's see how the thread actually sleeps when the head message is not yet due. The sleep duration has already been computed as nextPollTimeoutMillis (greater than 0 in this scenario), and it is passed to nativePollOnce at the top of the next loop iteration. The JNI implementation of nativePollOnce is as follows:

static void android_os_MessageQueue_nativePollOnce(JNIEnv* env, jobject obj,
        jlong ptr, jint timeoutMillis) {
    NativeMessageQueue* nativeMessageQueue = reinterpret_cast<NativeMessageQueue*>(ptr);
    nativeMessageQueue->pollOnce(env, obj, timeoutMillis);
}

     It retrieves the NativeMessageQueue we created earlier in the native layer and calls its pollOnce, whose source is as follows:

void NativeMessageQueue::pollOnce(JNIEnv* env, jobject pollObj, int timeoutMillis) {
    mPollEnv = env;
    mPollObj = pollObj;
    mLooper->pollOnce(timeoutMillis);
    mPollObj = NULL;
    mPollEnv = NULL;

    if (mExceptionObj) {
        env->Throw(mExceptionObj);
        env->DeleteLocalRef(mExceptionObj);
        mExceptionObj = NULL;
    }
}

     This mainly delegates to the native Looper's pollOnce, whose source is as follows:

int Looper::pollOnce(int timeoutMillis, int* outFd, int* outEvents, void** outData) {
    int result = 0;
    for (;;) {
        while (mResponseIndex < mResponses.size()) {
            const Response& response = mResponses.itemAt(mResponseIndex++);
            int ident = response.request.ident;
            if (ident >= 0) {
                int fd = response.request.fd;
                int events = response.events;
                void* data = response.request.data;
#if DEBUG_POLL_AND_WAKE
                ALOGD("%p ~ pollOnce - returning signalled identifier %d: "
                        "fd=%d, events=0x%x, data=%p",
                        this, ident, fd, events, data);
#endif
                if (outFd != NULL) *outFd = fd;
                if (outEvents != NULL) *outEvents = events;
                if (outData != NULL) *outData = data;
                return ident;
            }
        }

        if (result != 0) {
#if DEBUG_POLL_AND_WAKE
            ALOGD("%p ~ pollOnce - returning result %d", this, result);
#endif
            if (outFd != NULL) *outFd = 0;
            if (outEvents != NULL) *outEvents = 0;
            if (outData != NULL) *outData = NULL;
            return result;
        }

        result = pollInner(timeoutMillis);
    }
}

     The real work is done in pollInner, whose source is as follows:

int Looper::pollInner(int timeoutMillis) {
#if DEBUG_POLL_AND_WAKE
    ALOGD("%p ~ pollOnce - waiting: timeoutMillis=%d", this, timeoutMillis);
#endif

    // Adjust the timeout based on when the next message is due.
    if (timeoutMillis != 0 && mNextMessageUptime != LLONG_MAX) {
        nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
        int messageTimeoutMillis = toMillisecondTimeoutDelay(now, mNextMessageUptime);
        if (messageTimeoutMillis >= 0
                && (timeoutMillis < 0 || messageTimeoutMillis < timeoutMillis)) {
            timeoutMillis = messageTimeoutMillis;
        }
#if DEBUG_POLL_AND_WAKE
        ALOGD("%p ~ pollOnce - next message in %" PRId64 "ns, adjusted timeout: timeoutMillis=%d",
                this, mNextMessageUptime - now, timeoutMillis);
#endif
    }

    // Poll.
    int result = POLL_WAKE;
    mResponses.clear();
    mResponseIndex = 0;

    // We are about to idle.
    mPolling = true;

    struct epoll_event eventItems[EPOLL_MAX_EVENTS];
    int eventCount = epoll_wait(mEpollFd, eventItems, EPOLL_MAX_EVENTS, timeoutMillis);

    // No longer idling.
    mPolling = false;

    // Acquire lock.
    mLock.lock();

    // Rebuild epoll set if needed.
    if (mEpollRebuildRequired) {
        mEpollRebuildRequired = false;
        rebuildEpollLocked();
        goto Done;
    }

    // Check for poll error.
    if (eventCount < 0) {
        if (errno == EINTR) {
            goto Done;
        }
        ALOGW("Poll failed with an unexpected error: %s", strerror(errno));
        result = POLL_ERROR;
        goto Done;
    }

    // Check for poll timeout.
    if (eventCount == 0) {
#if DEBUG_POLL_AND_WAKE
        ALOGD("%p ~ pollOnce - timeout", this);
#endif
        result = POLL_TIMEOUT;
        goto Done;
    }

    // Handle all events.
#if DEBUG_POLL_AND_WAKE
    ALOGD("%p ~ pollOnce - handling events from %d fds", this, eventCount);
#endif

    for (int i = 0; i < eventCount; i++) {
        int fd = eventItems[i].data.fd;
        uint32_t epollEvents = eventItems[i].events;
        if (fd == mWakeEventFd) {
            if (epollEvents & EPOLLIN) {
                awoken();
            } else {
                ALOGW("Ignoring unexpected epoll events 0x%x on wake event fd.", epollEvents);
            }
        } else {
            ssize_t requestIndex = mRequests.indexOfKey(fd);
            if (requestIndex >= 0) {
                int events = 0;
                if (epollEvents & EPOLLIN) events |= EVENT_INPUT;
                if (epollEvents & EPOLLOUT) events |= EVENT_OUTPUT;
                if (epollEvents & EPOLLERR) events |= EVENT_ERROR;
                if (epollEvents & EPOLLHUP) events |= EVENT_HANGUP;
                pushResponse(events, mRequests.valueAt(requestIndex));
            } else {
                ALOGW("Ignoring unexpected epoll events 0x%x on fd %d that is "
                        "no longer registered.", epollEvents, fd);
            }
        }
    }
Done: ;

    // Invoke pending message callbacks.
    mNextMessageUptime = LLONG_MAX;
    while (mMessageEnvelopes.size() != 0) {
        nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
        const MessageEnvelope& messageEnvelope = mMessageEnvelopes.itemAt(0);
        if (messageEnvelope.uptime <= now) {
            // Remove the envelope from the list.
            // We keep a strong reference to the handler until the call to handleMessage
            // finishes.  Then we drop it so that the handler can be deleted *before*
            // we reacquire our lock.
            { // obtain handler
                sp<MessageHandler> handler = messageEnvelope.handler;
                Message message = messageEnvelope.message;
                mMessageEnvelopes.removeAt(0);
                mSendingMessage = true;
                mLock.unlock();

#if DEBUG_POLL_AND_WAKE || DEBUG_CALLBACKS
                ALOGD("%p ~ pollOnce - sending message: handler=%p, what=%d",
                        this, handler.get(), message.what);
#endif
                handler->handleMessage(message);
            } // release handler

            mLock.lock();
            mSendingMessage = false;
            result = POLL_CALLBACK;
        } else {
            // The last message left at the head of the queue determines the next wakeup time.
            mNextMessageUptime = messageEnvelope.uptime;
            break;
        }
    }

    // Release lock.
    mLock.unlock();

    // Invoke all response callbacks.
    for (size_t i = 0; i < mResponses.size(); i++) {
        Response& response = mResponses.editItemAt(i);
        if (response.request.ident == POLL_CALLBACK) {
            int fd = response.request.fd;
            int events = response.events;
            void* data = response.request.data;
#if DEBUG_POLL_AND_WAKE || DEBUG_CALLBACKS
            ALOGD("%p ~ pollOnce - invoking fd event callback %p: fd=%d, events=0x%x, data=%p",
                    this, response.request.callback.get(), fd, events, data);
#endif
            // Invoke the callback.  Note that the file descriptor may be closed by
            // the callback (and potentially even reused) before the function returns so
            // we need to be a little careful when removing the file descriptor afterwards.
            int callbackResult = response.request.callback->handleEvent(fd, events, data);
            if (callbackResult == 0) {
                removeFd(fd, response.request.seq);
            }

            // Clear the callback reference in the response structure promptly because we
            // will not clear the response vector itself until the next poll.
            response.request.callback.clear();
            result = POLL_CALLBACK;
        }
    }
    return result;
}

     The parameter timeoutMillis is the time until the next message is due; passing it to the epoll_wait system call puts the current thread to sleep. epoll_wait returns either when that timeout expires or when one of the registered descriptors becomes readable, in particular when the wake event fd is written because a new message was posted; the Java layer then loops around and checks the message queue again, where a message that is now due will be found.
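
     From the application's point of view, this timeout path is what makes delayed messages work. A small sketch of my own (a hypothetical DelayedPostExample class, not framework code):

import android.os.Handler;
import android.os.Looper;
import android.util.Log;

public class DelayedPostExample {
    public static void post() {
        Handler handler = new Handler(Looper.getMainLooper());
        // If the queue is otherwise empty, next() computes nextPollTimeoutMillis
        // of roughly 2000, and nativePollOnce()/epoll_wait() put the main thread
        // to sleep until the message becomes due (or until a new message wakes it).
        handler.postDelayed(new Runnable() {
            @Override
            public void run() {
                Log.d("DelayedPostExample", "ran about 2 seconds later");
            }
        }, 2000);
    }
}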

     That concludes our analysis of how the Looper and its MessageQueue are created; in the next article we will continue with how Message objects are sent and processed.
