A Complete Analysis of Android Volley (Part 4): Understanding Volley from Its Source Code

After the first three articles in this series we have pretty much mastered how to use Volley, but many readers are probably still unclear about how Volley actually works under the hood. In this article we will read through Volley's source code together and walk through its overall workflow from start to finish. This is also the last article in the Volley series.

In fact, Volley's official documentation already includes a diagram of its workflow, shown below.

Most people seeing a diagram like this for the first time will probably feel as lost as I did. That is only natural: we do not yet have a conceptual picture of how Volley works behind the scenes, so jumping straight into the diagram is bound to be hard going. Don't worry, though; once we have analyzed Volley's source code below, the diagram will be much easier to understand when we come back to it.

So where should we start reading the source? That calls for a quick review of how Volley is used. Remember: the first step is always to call Volley.newRequestQueue(context) to obtain a RequestQueue object, so that method is the natural place to begin. Its code is shown below:

public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, null);
}

This method is only one line long: it simply calls the overloaded newRequestQueue() method, passing null as the second parameter. Let's look at the two-parameter newRequestQueue() method:

public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
    String userAgent = "volley/0";
    try {
        String packageName = context.getPackageName();
        PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
        userAgent = packageName + "/" + info.versionCode;
    } catch (NameNotFoundException e) {
    }
    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            stack = new HurlStack();
        } else {
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }
    Network network = new BasicNetwork(stack);
    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();
    return queue;
}

As you can see, line 10 checks whether stack is null and, if so, creates an HttpStack object: on devices with a system version of 9 (Android 2.3) or higher it creates a HurlStack instance, otherwise an HttpClientStack instance. Internally, HurlStack uses HttpURLConnection for network communication, while HttpClientStack uses HttpClient. Once the HttpStack is ready, a Network object is created, which uses the given HttpStack to execute network requests. Finally a RequestQueue is instantiated, its start() method is called to get it running, and the queue is returned, at which point newRequestQueue() is done.
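Incidentally, that stack parameter is the hook for supplying your own HTTP implementation instead of letting Volley choose one by system version. A minimal sketch, assuming a Context named context is in scope (here we just pass an explicit HurlStack; any HttpStack implementation would work):

// Explicitly supplying the HttpStack instead of letting newRequestQueue() pick one.
HttpStack stack = new HurlStack();
RequestQueue queue = Volley.newRequestQueue(context, stack);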

So what exactly does RequestQueue's start() method do? Let's step inside and take a look:

public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();
    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}

Here a CacheDispatcher instance is created first and its start() method is called, then a for loop creates NetworkDispatcher instances and starts each of them in turn. Both CacheDispatcher and NetworkDispatcher extend Thread, and by default the for loop runs four times. In other words, once Volley.newRequestQueue(context) has been called, five threads keep running in the background, constantly waiting for network requests to arrive: CacheDispatcher is the cache thread, and the NetworkDispatchers are the network request threads.
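If the default of four network threads does not suit your app, the RequestQueue constructor also has an overload that takes a thread pool size, so you can assemble the queue yourself instead of going through newRequestQueue(). A small sketch, assuming a Context named context; the "volley" cache directory name is illustrative:

// Roughly what newRequestQueue() does internally, but with only two network
// dispatcher threads instead of the default four.
File cacheDir = new File(context.getCacheDir(), "volley");
Network network = new BasicNetwork(new HurlStack());
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 2);
queue.start();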

Once we have a RequestQueue, we only need to build the appropriate Request and pass it to the RequestQueue's add() method to complete a network request. Needless to say, add() must hide some fairly involved logic, so let's take a look at it together:

public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }
    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");
    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }
    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request<?>>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in
            // flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}

As you can see, line 11 checks whether the current request can be cached. If it cannot, line 12 adds it straight to the network request queue; if it can, line 33 adds it to the cache queue instead. By default every request is cacheable, although we can change that default behavior by calling the request's setShouldCache(false) method.
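For example, a request that should always go out over the network could be sent like this (a sketch: the URL is a placeholder and mQueue stands for the RequestQueue obtained earlier):

// Force this request to skip the cache queue; add() will put it on the network queue directly.
StringRequest stringRequest = new StringRequest("http://www.example.com",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                Log.d("Volley", response);
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                Log.e("Volley", error.toString(), error);
            }
        });
stringRequest.setShouldCache(false);
mQueue.add(stringRequest);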

OK, since every request is cacheable by default, it ends up in the cache queue, which means the cache thread that has been waiting in the background now springs into action. Let's look at the run() method of CacheDispatcher, shown below:

public class CacheDispatcher extends Thread {

    ……

    @Override
    public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        // Make a blocking call to initialize the cache.
        mCache.initialize();
        while (true) {
            try {
                // Get a request from the cache triage queue, blocking until
                // at least one is available.
                final Request<?> request = mCacheQueue.take();
                request.addMarker("cache-queue-take");
                // If the request has been canceled, don't bother dispatching it.
                if (request.isCanceled()) {
                    request.finish("cache-discard-canceled");
                    continue;
                }
                // Attempt to retrieve this item from cache.
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }
                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }
                // We have a cache hit; parse its data for delivery back to the request.
                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");
                if (!entry.refreshNeeded()) {
                    // Completely unexpired cache hit. Just deliver the response.
                    mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);
                    // Mark the response as intermediate.
                    response.intermediate = true;
                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(request);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
        }
    }
}

The code is fairly long, so let's focus on the key parts. First, line 11 contains a while(true) loop, which means the cache thread runs continuously. Line 23 then tries to fetch a result from the cache: if it is null, the request is added to the network request queue; if it is not, the code checks whether the cached entry has expired, and an expired entry likewise sends the request to the network request queue. Otherwise there is no need to resend the network request, and the cached data is used directly. Line 39 then calls the request's parseNetworkResponse() method to parse the data, and after that the parsed result is delivered via a callback. We will skip that delivery code for now, because it is essentially the same as the second half of NetworkDispatcher's logic, so we can look at the two together in a moment. Let's first see how NetworkDispatcher processes the network request queue.
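NetworkDispatcher's run() method follows the same pattern as the cache thread. What follows is a lightly abridged sketch of its core loop, based on the Volley source of this era (some markers and error-parsing details are trimmed, and details may vary slightly between versions):

public class NetworkDispatcher extends Thread {

    ……

    @Override
    public void run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        while (true) {
            Request<?> request;
            try {
                // Take a request from the network queue, blocking until one is available.
                request = mQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
            try {
                // If the request was cancelled already, do not perform it.
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    continue;
                }
                // Perform the network request via the Network (a BasicNetwork).
                NetworkResponse networkResponse = mNetwork.performRequest(request);
                // Parse the response here on the worker thread.
                Response<?> response = request.parseNetworkResponse(networkResponse);
                // Write to cache if applicable.
                if (request.shouldCache() && response.cacheEntry != null) {
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                }
                // Post the response back to the main thread.
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                mDelivery.postError(request, volleyError);
            } catch (Exception e) {
                mDelivery.postError(request, new VolleyError(e));
            }
        }
    }
}

Two calls matter most here. mNetwork.performRequest() hands the request to the Network object, i.e. the BasicNetwork wrapping the HttpStack we saw being created in newRequestQueue(), which actually sends the HTTP request and returns the raw response. The parsed Response is then written to the cache if allowed and passed to mDelivery.postResponse(). The default ResponseDelivery is an ExecutorDelivery constructed with a Handler attached to the main thread's Looper, and its postResponse() wraps the request and response in a ResponseDeliveryRunnable and posts it to that Handler, so the following code ends up running on the main thread: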

private class ResponseDeliveryRunnable implements Runnable {
    private final Request mRequest;
    private final Response mResponse;
    private final Runnable mRunnable;

    public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
        mRequest = request;
        mResponse = response;
        mRunnable = runnable;
    }

    @SuppressWarnings("unchecked")
    @Override
    public void run() {
        // If this request has canceled, finish it and don't deliver.
        if (mRequest.isCanceled()) {
            mRequest.finish("canceled-at-delivery");
            return;
        }
        // Deliver a normal response or error, depending.
        if (mResponse.isSuccess()) {
            mRequest.deliverResponse(mResponse.result);
        } else {
            mRequest.deliverError(mResponse.error);
        }
        // If this is an intermediate response, add a marker, otherwise we're done
        // and the request can be finished.
        if (mResponse.intermediate) {
            mRequest.addMarker("intermediate-response");
        } else {
            mRequest.finish("done");
        }
        // If we have been provided a post-delivery runnable, run it.
        if (mRunnable != null) {
            mRunnable.run();
        }
    }
}

The code here is short, and we don't need to read every line; just focus on the key part. Line 22 calls the request's deliverResponse() method. Does that look familiar? It is the other method we have to override when defining a custom Request: every network response is ultimately delivered through this callback, and in our implementation we simply pass the response data on to the onResponse() method of the Response.Listener.
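To make that link concrete, here is a minimal custom Request subclass (PlainTextRequest is an illustrative name, mirroring the structure of the custom requests built earlier in this series). parseNetworkResponse() runs on the cache or network dispatcher thread, while deliverResponse() is invoked from ResponseDeliveryRunnable.run() on the main thread and simply forwards the result to the listener:

import com.android.volley.NetworkResponse;
import com.android.volley.Request;
import com.android.volley.Response;
import com.android.volley.toolbox.HttpHeaderParser;

public class PlainTextRequest extends Request<String> {

    private final Response.Listener<String> mListener;

    public PlainTextRequest(String url, Response.Listener<String> listener,
            Response.ErrorListener errorListener) {
        super(Method.GET, url, errorListener);
        mListener = listener;
    }

    @Override
    protected Response<String> parseNetworkResponse(NetworkResponse response) {
        // Runs on a dispatcher thread: turn the raw bytes into the result type.
        String parsed = new String(response.data);
        return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
    }

    @Override
    protected void deliverResponse(String response) {
        // Runs on the main thread, called from ResponseDeliveryRunnable.run().
        mListener.onResponse(response);
    }
}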

And with that we have walked through Volley's complete execution flow. Feeling much clearer now? Remember the flow diagram at the beginning of this article that was hard to make sense of? Let's look at it again:

The blue section represents the main thread, the green section the cache thread, and the orange section the network threads. We call RequestQueue's add() method on the main thread to submit a network request, and the request first goes into the cache queue. If a matching cached result is found, it is read and parsed straight from the cache and delivered back to the main thread. If no result is found in the cache, the request is placed on the network request queue; the HTTP request is then sent, the response is parsed and written to the cache, and the result is delivered back to the main thread.



