Volley (Part 3)


Now that we have more or less covered what Volley can do, let's look at how it is implemented internally.

Most blog posts about Volley include a diagram (in Chinese or English) of the request flow. The whole process involves three kinds of threads: the main thread, the cache dispatcher thread (CacheDispatcher) and the network dispatcher threads (NetworkDispatcher). The main thread is the app's UI thread; the cache dispatcher looks for a response in the cache before any network access and serves it from the cache if found; the network dispatchers come into play when nothing usable is in the cache, and fetch the data from the server. The diagram shows these three layers, with each layer requesting data from the one next to it.

Everything in Volley starts with creating a request queue, a RequestQueue:

public class Volley {
    /** Default on-disk cache directory. */
    private static final String DEFAULT_CACHE_DIR = "volley";

    /**
     * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
     *
     * @param context A {@link Context} to use for creating the cache dir.
     * @param stack An {@link HttpStack} to use for the network, or null for default.
     * @return A started {@link RequestQueue} instance.
     */
    public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

        String userAgent = "volley/0";
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }

        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }

        Network network = new BasicNetwork(stack);

        RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        queue.start();

        return queue;
    }

    /**
     * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
     *
     * @param context A {@link Context} to use for creating the cache dir.
     * @return A started {@link RequestQueue} instance.
     */
    public static RequestQueue newRequestQueue(Context context) {
        return newRequestQueue(context, null);
    }
}
There are two overloaded factory methods for creating the request queue. Notice that a different HttpStack is chosen depending on whether the Android version is above 2.3 (API 9): that version is the dividing line between HttpClient and HttpURLConnection, and internally it is one of these two HTTP clients that actually performs the network communication.
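To make the split concrete, here is a minimal sketch of passing an HttpStack of your own instead of letting newRequestQueue pick one by SDK version (the choice of HurlStack is just for illustration; context is whatever Context you have at hand):

// Force the HttpURLConnection-based stack regardless of SDK version
// (this is the stack newRequestQueue would pick on API 9+ anyway).
RequestQueue queue = Volley.newRequestQueue(context, new HurlStack());

// Or let Volley decide, which is the common case:
RequestQueue defaultQueue = Volley.newRequestQueue(context);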


Near the end of newRequestQueue, queue.start() is called. Let's see what it does:

public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}

Two kinds of threads appear here: CacheDispatcher and NetworkDispatcher. As mentioned at the start, one is the cache dispatcher thread and the other the network dispatcher thread, and this is where they are created and started. Note that there is exactly one CacheDispatcher, while there can be more than one NetworkDispatcher: a few jumps through the code show that the default pool size is 4, so 4 + 1 = 5 threads are started in start().
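If you follow the constructors you reach a three-argument form, RequestQueue(Cache, Network, int threadPoolSize), which lets you choose the network thread count yourself. A rough sketch, assuming the same DiskBasedCache/BasicNetwork setup that newRequestQueue uses (the cache directory name here is made up):

File cacheDir = new File(context.getCacheDir(), "my-volley-cache");  // hypothetical directory name
Network network = new BasicNetwork(new HurlStack());
// Use 2 network dispatcher threads instead of the default 4.
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 2);
queue.start();  // starts 1 CacheDispatcher + 2 NetworkDispatchers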


After that you create the Request object you need, add it to the queue, and wait for the response. Let's look at what adding a request to the queue involves:

public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }

    // Check whether an identical request is already in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request<?>>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in
            // flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}
After the request has been added to the mCurrentRequests set and given a sequence number, request.shouldCache() decides whether this request may be cached at all; if not, it goes straight onto the network queue. mWaitingRequests is a Map that serves as a ledger of in-flight requests, keyed by cache key. If the key is already present, a request with the same cache key is already in flight, so the new request is parked in the staging queue for that key rather than issued again; if the key is not present, the request is recorded in the map and added to the cache request queue.
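By default shouldCache() returns true, so most requests go through the cache queue first; calling setShouldCache(false) makes add() take the short path straight onto mNetworkQueue. A small sketch, assuming queue is the RequestQueue created earlier and with a placeholder URL:

StringRequest request = new StringRequest(Request.Method.GET, "http://example.com/data",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) { /* handle the response */ }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) { /* handle the error */ }
        });
request.setShouldCache(false);  // skip the cache queue; go straight to the network dispatchers
queue.add(request);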

As you can see, there is no network request here at all; add() just does a round of cache-queue lookup and bookkeeping and returns. Since the real work happens in the cache dispatcher thread, let's go and look at it:

public class CacheDispatcher extends Thread {

    ........

    @Override
    public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

        // Make a blocking call to initialize the cache.
        mCache.initialize();

        while (true) {
            try {
                // Get a request from the cache triage queue, blocking until
                // at least one is available.
                final Request<?> request = mCacheQueue.take();
                request.addMarker("cache-queue-take");

                // If the request has been canceled, don't bother dispatching it.
                if (request.isCanceled()) {
                    request.finish("cache-discard-canceled");
                    continue;
                }

                // Attempt to retrieve this item from cache.
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }

                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }

                // We have a cache hit; parse its data for delivery back to the request.
                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");

                if (!entry.refreshNeeded()) {
                    // Completely unexpired cache hit. Just deliver the response.
                    mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);

                    // Mark the response as intermediate.
                    response.intermediate = true;

                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(request);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }

            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
        }
    }
}

That is the run() method of CacheDispatcher. It is an ordinary thread that loops forever in while(true), reading requests off the cache queue. For each request it first checks whether it has been canceled (i.e. whether it still needs handling), then tries to find a cached response for it (on a miss the request is handed to the network queue), sends the request to the network as well if the cached entry is fully expired, and on a hit parses the cached data and delivers it back to us. So at this caching stage Volley already makes several checks to decide whether a network request is actually necessary.
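The two expiry checks come down to two timestamps on Cache.Entry: isExpired() compares ttl with the current time and refreshNeeded() compares softTtl. A rough sketch of an entry that is still servable but soft-expired, so the dispatcher would deliver it as an intermediate response and still go to the network (cachedBytes and the numbers are arbitrary placeholders):

Cache.Entry entry = new Cache.Entry();
entry.data = cachedBytes;                // hypothetical cached response body
long now = System.currentTimeMillis();
entry.ttl = now + 60 * 60 * 1000;        // hard expiry one hour away  -> isExpired() == false
entry.softTtl = now - 1;                 // soft expiry already passed -> refreshNeeded() == true
// For this entry CacheDispatcher would take the "cache-hit-refresh-needed" branch above.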


Next, let's see how the network request itself is performed:

public class NetworkDispatcher extends Thread {
    /** The queue of requests to service. */
    private final BlockingQueue<Request<?>> mQueue;

    ........................

    @Override
    public void run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        while (true) {
            long startTimeMs = SystemClock.elapsedRealtime();
            Request<?> request;
            try {
                // Take a request from the queue.
                request = mQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }

            try {
                request.addMarker("network-queue-take");

                // If the request was cancelled already, do not perform the
                // network request.
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    continue;
                }

                addTrafficStatsTag(request);

                // Perform the network request.
                NetworkResponse networkResponse = mNetwork.performRequest(request);
                request.addMarker("network-http-complete");

                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                    request.finish("not-modified");
                    continue;
                }

                // Parse the response here on the worker thread.
                Response<?> response = request.parseNetworkResponse(networkResponse);
                request.addMarker("network-parse-complete");

                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.
                if (request.shouldCache() && response.cacheEntry != null) {
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }

                // Post the response back.
                request.markDelivered();
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                parseAndDeliverNetworkError(request, volleyError);
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                VolleyError volleyError = new VolleyError(e);
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                mDelivery.postError(request, volleyError);
            }
        }
    }
}
Like CacheDispatcher, this is another infinite while(true) loop.

It also starts by taking the next request from the queue and handling its state: skip it if it has been canceled, tag it for traffic statistics (addTrafficStatsTag), then perform the network request.
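The cancellation check matters in practice because a request can be cancelled between being enqueued and being taken off the queue, typically via tags. A sketch (the tag value is arbitrary; queue and request are the objects from the earlier examples):

Object tag = "activity-main";   // any object can serve as a tag
request.setTag(tag);
queue.add(request);

// Later, e.g. in onStop(): mark every request carrying this tag as cancelled.
// The dispatchers will then finish them instead of executing or delivering them.
queue.cancelAll(tag);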

parseNetworkResponse is what parses the data the server returned, while the HTTP exchange itself goes through the mNetwork interface. In the Volley class we already saw that this interface is implemented by BasicNetwork(stack); here is the performRequest method it implements:

@Override
public NetworkResponse performRequest(Request<?> request) throws VolleyError {
    long requestStart = SystemClock.elapsedRealtime();
    while (true) {
        HttpResponse httpResponse = null;
        byte[] responseContents = null;
        Map<String, String> responseHeaders = Collections.emptyMap();
        try {
            // Gather headers.
            Map<String, String> headers = new HashMap<String, String>();
            addCacheHeaders(headers, request.getCacheEntry());
            httpResponse = mHttpStack.performRequest(request, headers);
            StatusLine statusLine = httpResponse.getStatusLine();
            int statusCode = statusLine.getStatusCode();

            responseHeaders = convertHeaders(httpResponse.getAllHeaders());
            // Handle cache validation.
            if (statusCode == HttpStatus.SC_NOT_MODIFIED) {
                Entry entry = request.getCacheEntry();
                if (entry == null) {
                    return new NetworkResponse(HttpStatus.SC_NOT_MODIFIED, null,
                            responseHeaders, true,
                            SystemClock.elapsedRealtime() - requestStart);
                }

                // A HTTP 304 response does not have all header fields. We
                // have to use the header fields from the cache entry plus
                // the new ones from the response.
                // http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.5
                entry.responseHeaders.putAll(responseHeaders);
                return new NetworkResponse(HttpStatus.SC_NOT_MODIFIED, entry.data,
                        entry.responseHeaders, true,
                        SystemClock.elapsedRealtime() - requestStart);
            }

            // Some responses such as 204s do not have content.  We must check.
            if (httpResponse.getEntity() != null) {
                responseContents = entityToBytes(httpResponse.getEntity());
            } else {
                // Add 0 byte response as a way of honestly representing a
                // no-content request.
                responseContents = new byte[0];
            }

            // if the request is slow, log it.
            long requestLifetime = SystemClock.elapsedRealtime() - requestStart;
            logSlowRequests(requestLifetime, request, responseContents, statusLine);

            if (statusCode < 200 || statusCode > 299) {
                throw new IOException();
            }
            return new NetworkResponse(statusCode, responseContents, responseHeaders, false,
                    SystemClock.elapsedRealtime() - requestStart);
        } catch (SocketTimeoutException e) {
            attemptRetryOnException("socket", request, new TimeoutError());
        } catch (ConnectTimeoutException e) {
            attemptRetryOnException("connection", request, new TimeoutError());
        } catch (MalformedURLException e) {
            throw new RuntimeException("Bad URL " + request.getUrl(), e);
        } catch (IOException e) {
            int statusCode = 0;
            NetworkResponse networkResponse = null;
            if (httpResponse != null) {
                statusCode = httpResponse.getStatusLine().getStatusCode();
            } else {
                throw new NoConnectionError(e);
            }
            VolleyLog.e("Unexpected response code %d for %s", statusCode, request.getUrl());
            if (responseContents != null) {
                networkResponse = new NetworkResponse(statusCode, responseContents,
                        responseHeaders, false, SystemClock.elapsedRealtime() - requestStart);
                if (statusCode == HttpStatus.SC_UNAUTHORIZED ||
                        statusCode == HttpStatus.SC_FORBIDDEN) {
                    attemptRetryOnException("auth",
                            request, new AuthFailureError(networkResponse));
                } else {
                    // TODO: Only throw ServerError for 5xx status codes.
                    throw new ServerError(networkResponse);
                }
            } else {
                throw new NetworkError(networkResponse);
            }
        }
    }
}
All the networking details live here. The HttpResponse is obtained from the mHttpStack object that was passed in, through its performRequest method; mHttpStack is the stack chosen earlier by Android version, i.e. the custom request class based on either HttpClient or HttpURLConnection (both implement performRequest, so either can be plugged in). Once the response comes back, parseNetworkResponse turns it into the data type we actually want, which is also the method we have to implement in a custom request.
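Putting parseNetworkResponse and deliverResponse together, a custom request looks roughly like the sketch below: a bare-bones string request whose charset handling mirrors what StringRequest does. The class name is made up, and imports from com.android.volley / com.android.volley.toolbox are omitted to match the style of the excerpts above.

public class PlainTextRequest extends Request<String> {   // hypothetical class name
    private final Response.Listener<String> mListener;

    public PlainTextRequest(String url, Response.Listener<String> listener,
                            Response.ErrorListener errorListener) {
        super(Method.GET, url, errorListener);
        mListener = listener;
    }

    @Override
    protected Response<String> parseNetworkResponse(NetworkResponse response) {
        // Runs on a NetworkDispatcher thread: turn the raw bytes into our result type.
        try {
            String parsed = new String(response.data,
                    HttpHeaderParser.parseCharset(response.headers));
            return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
        } catch (UnsupportedEncodingException e) {
            return Response.error(new ParseError(e));
        }
    }

    @Override
    protected void deliverResponse(String response) {
        // Invoked by the delivery on the main thread: hand the result to the caller.
        mListener.onResponse(response);
    }
}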

Once parsing is done, the response is written to the cache if appropriate, and the result is handed back through mDelivery.postResponse(request, response) or mDelivery.postError(request, volleyError):

postResponse:

@Override
public void postResponse(Request<?> request, Response<?> response) {
    postResponse(request, response, null);
}

@Override
public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
    request.markDelivered();
    request.addMarker("post-response");
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}
ResponseDeliveryRunnable:
@SuppressWarnings("rawtypes")
private class ResponseDeliveryRunnable implements Runnable {
    private final Request mRequest;
    private final Response mResponse;
    private final Runnable mRunnable;

    public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
        mRequest = request;
        mResponse = response;
        mRunnable = runnable;
    }

    @SuppressWarnings("unchecked")
    @Override
    public void run() {
        // If this request has canceled, finish it and don't deliver.
        if (mRequest.isCanceled()) {
            mRequest.finish("canceled-at-delivery");
            return;
        }

        // Deliver a normal response or error, depending.
        if (mResponse.isSuccess()) {
            mRequest.deliverResponse(mResponse.result);
        } else {
            mRequest.deliverError(mResponse.error);
        }

        // If this is an intermediate response, add a marker, otherwise we're done
        // and the request can be finished.
        if (mResponse.intermediate) {
            mRequest.addMarker("intermediate-response");
        } else {
            mRequest.finish("done");
        }

        // If we have been provided a post-delivery runnable, run it.
        if (mRunnable != null) {
            mRunnable.run();
        }
    }
}
Following the chain down a few more levels you reach the bottom: the mRequest.deliverResponse callback is what hands the data back to the main thread.
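The reason the callback lands on the main thread is the delivery object RequestQueue creates by default. From memory of the RequestQueue source (paraphrased; verify against your Volley version), the relevant constructor looks roughly like this:

public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}

ExecutorDelivery's mResponsePoster wraps that Handler, so ResponseDeliveryRunnable, and therefore deliverResponse/deliverError, executes on the UI thread.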
That is roughly how Volley works: queued network requests backed by a caching mechanism.
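As a recap, a typical caller only touches the first and last steps of that pipeline; everything in between (cache lookup, HTTP, parsing, delivery) happens on the dispatcher threads. A minimal end-to-end sketch, with a placeholder URL:

RequestQueue queue = Volley.newRequestQueue(context);   // starts the 1 + 4 dispatcher threads

StringRequest request = new StringRequest(Request.Method.GET, "http://example.com/api",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // Delivered on the main thread by the delivery object.
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                // Also delivered on the main thread.
            }
        });

queue.add(request);   // cache dispatcher first, network dispatcher only if needed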



