Volley Study Notes ---- Source Code Walkthrough


1. Start with the public entry point of the Volley class, newRequestQueue

    /**
     * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
     *
     * @param context A {@link Context} to use for creating the cache dir.
     * @param stack An {@link HttpStack} to use for the network, or null for default.
     * @return A started {@link RequestQueue} instance.
     */
    public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

        String userAgent = "volley/0";
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }

        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }

        Network network = new BasicNetwork(stack);

        RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        queue.start();

        Log.i(DEFAULT_CACHE_DIR, "vivo volley version: " + Constants.VERSION);
        return queue;
    }
Three things happen here: a disk cache directory is created for caching network responses; an HttpStack is created as the network transport, either HttpClientStack or HurlStack (the reason for the split is discussed below [0]), and is wrapped in a BasicNetwork, the interface that actually performs requests; and a RequestQueue is created from the disk cache and that Network and then started.
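For orientation, here is a minimal usage sketch of this entry point. The QueueHolder class and the application-level singleton pattern are illustrative choices, not part of Volley:

import android.content.Context;

import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.toolbox.Volley;

// Minimal usage sketch of newRequestQueue; the class name and singleton
// pattern are illustrative only.
public class QueueHolder {
    private static RequestQueue sQueue;

    public static RequestQueue get(Context context) {
        if (sQueue == null) {
            // Passing null lets newRequestQueue pick HurlStack or HttpClientStack
            // based on the SDK level, exactly as in the source above.
            sQueue = Volley.newRequestQueue(context.getApplicationContext(), null);
        }
        return sQueue;
    }

    public static void enqueue(Context context, Request<?> request) {
        get(context).add(request);
    }
}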

2. Next, the two HttpStack implementations, HttpClientStack and HurlStack. Which one is used depends on the Android version, for the reasons laid out in http://blog.csdn.net/guolin_blog/article/details/12452307: in short, on Android 2.2 and earlier HttpClient had the richer API and was stable with few bugs, while from 2.3 onward the Android team concentrated on optimizing HttpURLConnection, which has since become the more convenient and better performing client. For that reason only HurlStack is analyzed here; this is its performRequest implementation:

    public HttpResponse performRequest(Request<?> request, Map<String, String> additionalHeaders)
            throws IOException, AuthFailureError {
        String url = request.getUrl();
        HashMap<String, String> map = new HashMap<String, String>();
        map.putAll(request.getHeaders());
        map.putAll(additionalHeaders);
        if (mUrlRewriter != null) {
            String rewritten = mUrlRewriter.rewriteUrl(url);
            if (rewritten == null) {
                throw new IOException("URL blocked by rewriter: " + url);
            }
            url = rewritten;
        }
        URL parsedUrl = new URL(url);
        HttpURLConnection connection = openConnection(parsedUrl, request);
        for (String headerName : map.keySet()) {
            connection.addRequestProperty(headerName, map.get(headerName));
        }
        setConnectionParametersForRequest(connection, request);
        // Initialize HttpResponse with data from the HttpURLConnection.
        ProtocolVersion protocolVersion = new ProtocolVersion("HTTP", 1, 1);
        int responseCode = connection.getResponseCode();
        if (responseCode == -1) {
            // -1 is returned by getResponseCode() if the response code could not be retrieved.
            // Signal to the caller that something was wrong with the connection.
            throw new IOException("Could not retrieve response code from HttpUrlConnection.");
        }
        StatusLine responseStatus = new BasicStatusLine(protocolVersion,
                connection.getResponseCode(), connection.getResponseMessage());
        BasicHttpResponse response = new BasicHttpResponse(responseStatus);
        if (hasResponseBody(request.getMethod(), responseStatus.getStatusCode())) {
            response.setEntity(entityFromConnection(connection));
        }
        for (Entry<String, List<String>> header : connection.getHeaderFields().entrySet()) {
            if (header.getKey() != null) {
                Header h = new BasicHeader(header.getKey(), header.getValue().get(0));
                response.addHeader(h);
            }
        }
        return response;
    }
The code above is HttpURLConnection in actual use: it sends the request, reads the server's reply and builds an HttpResponse object to return. HttpClientStack can be assumed to do the same job, just through a different API. On top of this HttpStack a BasicNetwork object was created as the network interface; here is its performRequest method:

    public NetworkResponse performRequest(Request<?> request) throws VolleyError {
        long requestStart = SystemClock.elapsedRealtime();
        while (true) {
            HttpResponse httpResponse = null;
            byte[] responseContents = null;
            Map<String, String> responseHeaders = Collections.emptyMap();
            try {
                // Gather headers.
                Map<String, String> headers = new HashMap<String, String>();
                addCacheHeaders(headers, request.getCacheEntry());
                httpResponse = mHttpStack.performRequest(request, headers);
                StatusLine statusLine = httpResponse.getStatusLine();
                int statusCode = statusLine.getStatusCode();

                responseHeaders = convertHeaders(httpResponse.getAllHeaders());
                // Handle cache validation.
                if (statusCode == HttpStatus.SC_NOT_MODIFIED) {
                    Entry entry = request.getCacheEntry();
                    if (entry == null) {
                        return new NetworkResponse(HttpStatus.SC_NOT_MODIFIED, null,
                                responseHeaders, true,
                                SystemClock.elapsedRealtime() - requestStart);
                    }

                    // A HTTP 304 response does not have all header fields. We
                    // have to use the header fields from the cache entry plus
                    // the new ones from the response.
                    // http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.5
                    entry.responseHeaders.putAll(responseHeaders);
                    return new NetworkResponse(HttpStatus.SC_NOT_MODIFIED, entry.data,
                            entry.responseHeaders, true,
                            SystemClock.elapsedRealtime() - requestStart);
                }

                // Some responses such as 204s do not have content.  We must check.
                if (httpResponse.getEntity() != null) {
                  responseContents = entityToBytes(httpResponse.getEntity());
                } else {
                  // Add 0 byte response as a way of honestly representing a
                  // no-content request.
                  responseContents = new byte[0];
                }

                // if the request is slow, log it.
                long requestLifetime = SystemClock.elapsedRealtime() - requestStart;
                logSlowRequests(requestLifetime, request, responseContents, statusLine);

                if (statusCode < 200 || statusCode > 299) {
                    throw new IOException();
                }
                return new NetworkResponse(statusCode, responseContents, responseHeaders, false,
                        SystemClock.elapsedRealtime() - requestStart);
            } catch (SocketTimeoutException e) {
                attemptRetryOnException("socket", request, new TimeoutError());
            } catch (ConnectTimeoutException e) {
                attemptRetryOnException("connection", request, new TimeoutError());
            } catch (MalformedURLException e) {
                throw new RuntimeException("Bad URL " + request.getUrl(), e);
            } catch (IOException e) {
                int statusCode;
                if (httpResponse != null) {
                    statusCode = httpResponse.getStatusLine().getStatusCode();
                } else {
                    throw new NoConnectionError(e);
                }
                VolleyLog.e("Unexpected response code %d for %s", statusCode, request.getUrl());
                NetworkResponse networkResponse;
                if (responseContents != null) {
                    networkResponse = new NetworkResponse(statusCode, responseContents,
                            responseHeaders, false, SystemClock.elapsedRealtime() - requestStart);
                    if (statusCode == HttpStatus.SC_UNAUTHORIZED ||
                            statusCode == HttpStatus.SC_FORBIDDEN) {
                        attemptRetryOnException("auth",
                                request, new AuthFailureError(networkResponse));
                    } else if (statusCode >= 400 && statusCode <= 499) {
                        // Don't retry other client errors.
                        throw new ClientError(networkResponse);
                    } else if (statusCode >= 500 && statusCode <= 599) {
                        if (request.shouldRetryServerErrors()) {
                            attemptRetryOnException("server",
                                    request, new ServerError(networkResponse));
                        } else {
                            throw new ServerError(networkResponse);
                        }
                    } else {
                        // 3xx? No reason to retry.
                        throw new ServerError(networkResponse);
                    }
                } else {
                    attemptRetryOnException("network", request, new NetworkError());
                }
            }
        }
    }
This method calls performRequest on the HttpStack that was passed in, obtains an HttpResponse, and then processes it. Three things deserve attention: error handling, the retry mechanism, and cache updating. Error handling is straightforward and needs no further comment. The retry mechanism starts by throwing an exception:
if (statusCode < 200 || statusCode > 299) {
    throw new IOException();
}
which is then handled in the catch block by calling attemptRetryOnException (analyzed later [1]). Cache updating: a 304 status code means the locally cached resource is still valid and does not need to be requested again, so the cached data is returned together with the merged response headers.
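attemptRetryOnException consults the request's RetryPolicy, so callers can tune this behavior per request. A hedged sketch using Volley's DefaultRetryPolicy follows; the concrete timeout, retry count and backoff values are placeholders, not recommendations:

import com.android.volley.DefaultRetryPolicy;
import com.android.volley.Request;

// Sketch: tune how often BasicNetwork's attemptRetryOnException is allowed
// to retry this request. The numbers are illustrative only.
public final class RetryTuning {
    public static void applyTo(Request<?> request) {
        request.setRetryPolicy(new DefaultRetryPolicy(
                5000,   // initial timeout in ms before the first retry
                2,      // maximum number of retries
                1.5f)); // each retry multiplies the timeout by this factor
    }
}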


3. Finally, the RequestQueue class. First, its constructors:

    /**
     * Creates the worker pool. Processing will not begin until {@link #start()} is called.
     *
     * @param cache A Cache to use for persisting responses to disk
     * @param network A Network interface for performing HTTP requests
     * @param threadPoolSize Number of network dispatcher threads to create
     * @param delivery A ResponseDelivery interface for posting responses and errors
     */
    public RequestQueue(Cache cache, Network network, int threadPoolSize,
            ResponseDelivery delivery) {
        mCache = cache;
        mNetwork = network;
        mDispatchers = new NetworkDispatcher[threadPoolSize];
        mDelivery = delivery;
    }

    /**
     * Creates the worker pool. Processing will not begin until {@link #start()} is called.
     *
     * @param cache A Cache to use for persisting responses to disk
     * @param network A Network interface for performing HTTP requests
     * @param threadPoolSize Number of network dispatcher threads to create
     */
    public RequestQueue(Cache cache, Network network, int threadPoolSize) {
        this(cache, network, threadPoolSize,
                new ExecutorDelivery(new Handler(Looper.getMainLooper())));
    }

    /**
     * Creates the worker pool. Processing will not begin until {@link #start()} is called.
     *
     * @param cache A Cache to use for persisting responses to disk
     * @param network A Network interface for performing HTTP requests
     */
    public RequestQueue(Cache cache, Network network) {
        this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
    }
Four fields are involved. The first two, cache and network, are the arguments just discussed. mDispatchers is an array of NetworkDispatcher workers created in the constructor, with a default size of 4, meaning at most four network threads run concurrently; NetworkDispatcher is explained later [2]. mDelivery is the response delivery interface [introduced later, 3], also created during construction and bound by default to a Handler on the main Looper.
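When the defaults do not fit, the constructors above can be called directly instead of going through Volley.newRequestQueue. A sketch, where the cache directory name and the pool size of 8 are arbitrary illustration values:

import java.io.File;

import android.content.Context;

import com.android.volley.Network;
import com.android.volley.RequestQueue;
import com.android.volley.toolbox.BasicNetwork;
import com.android.volley.toolbox.DiskBasedCache;
import com.android.volley.toolbox.HurlStack;

// Sketch: build a queue with 8 network dispatcher threads instead of the default 4.
public final class CustomQueueFactory {
    public static RequestQueue create(Context context) {
        File cacheDir = new File(context.getCacheDir(), "volley");   // directory name is arbitrary
        Network network = new BasicNetwork(new HurlStack());
        RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 8);
        queue.start();   // nothing runs until start() spins up the dispatcher threads
        return queue;
    }
}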

Then comes the RequestQueue.start() method:

    /**
     * Starts the dispatchers in this queue.
     */
    public void start() {
        stop();  // Make sure any currently running dispatchers are stopped.
        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();

        // Create network dispatchers (and corresponding threads) up to the pool size.
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }
This first stops any dispatchers that are already running, then creates and starts a CacheDispatcher (in fact a subclass of Thread: a worker thread that triages requests against the cache), and finally starts every NetworkDispatcher in the array. At last, into CacheDispatcher to see what it actually does:

    public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

        // Make a blocking call to initialize the cache.
        mCache.initialize();

        while (true) {
            try {
                // Get a request from the cache triage queue, blocking until
                // at least one is available.
                final Request<?> request = mCacheQueue.take();
                request.addMarker("cache-queue-take");

                // If the request has been canceled, don't bother dispatching it.
                if (request.isCanceled()) {
                    request.finish("cache-discard-canceled");
                    continue;
                }

                // Attempt to retrieve this item from cache.
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }

                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }

                // We have a cache hit; parse its data for delivery back to the request.
                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");

                if (!entry.refreshNeeded()) {
                    // Completely unexpired cache hit. Just deliver the response.
                    mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);

                    // Mark the response as intermediate.
                    response.intermediate = true;

                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(request);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }

            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
        }
    }
The job here is clear. The class holds references to two queues, mCacheQueue and mNetworkQueue, for cache requests and network requests respectively. The loop keeps taking a request from mCacheQueue and looks it up in mCache. On a cache miss, the request is put straight onto mNetworkQueue. If an entry exists but has fully expired, the request also goes to mNetworkQueue. If the entry is only soft-expired, the cached response is delivered as an intermediate result and the request is additionally put onto mNetworkQueue to refresh the resource; otherwise the cached response is simply delivered. From here there are two strands to follow: what happens to requests placed into mNetworkQueue, and where the delivered requests and responses end up. For the first strand, go back to RequestQueue.start(), which started the network dispatchers; this is NetworkDispatcher.run():
    public void run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        while (true) {
            long startTimeMs = SystemClock.elapsedRealtime();
            Request<?> request;
            try {
                // Take a request from the queue.
                request = mQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }

            try {
                request.addMarker("network-queue-take");

                // If the request was cancelled already, do not perform the
                // network request.
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    continue;
                }

                addTrafficStatsTag(request);

                // Perform the network request.
                NetworkResponse networkResponse = mNetwork.performRequest(request);
                request.addMarker("network-http-complete");

                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                    request.finish("not-modified");
                    continue;
                }

                // Parse the response here on the worker thread.
                Response<?> response = request.parseNetworkResponse(networkResponse);
                request.addMarker("network-parse-complete");

                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.
                if (request.shouldCache() && response.cacheEntry != null) {
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }

                // Post the response back.
                request.markDelivered();
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                parseAndDeliverNetworkError(request, volleyError);
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                VolleyError volleyError = new VolleyError(e);
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                mDelivery.postError(request, volleyError);
            }
        }
    }
This is the actual network request/response flow: the loop keeps taking a request from the network queue, calls performRequest on the BasicNetwork interface to obtain a NetworkResponse, checks whether the response should be cached and, if so, writes it into the mCache disk cache, and finally hands the parsed response to the mDelivery dispatcher. If an exception is caught, the resulting error is delivered through the same dispatcher.
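The response.cacheEntry written above is produced by the request's own parseNetworkResponse. As a hedged sketch of where it typically comes from, the custom request below (the class name and plain-string payload are illustrative) uses HttpHeaderParser.parseCacheHeaders to build the Cache.Entry whose ttl and softTtl drive the isExpired()/refreshNeeded() checks seen earlier in CacheDispatcher; calling setShouldCache(false) on a request skips the cache write here entirely:

import java.io.UnsupportedEncodingException;

import com.android.volley.NetworkResponse;
import com.android.volley.Request;
import com.android.volley.Response;
import com.android.volley.toolbox.HttpHeaderParser;

// Sketch of a custom Request showing where response.cacheEntry comes from.
// The class name and the plain-string payload are illustrative only.
public class PlainTextRequest extends Request<String> {
    private final Response.Listener<String> mListener;

    public PlainTextRequest(String url, Response.Listener<String> listener,
            Response.ErrorListener errorListener) {
        super(Method.GET, url, errorListener);
        mListener = listener;
    }

    @Override
    protected Response<String> parseNetworkResponse(NetworkResponse response) {
        String parsed;
        try {
            parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
        } catch (UnsupportedEncodingException e) {
            parsed = new String(response.data);
        }
        // parseCacheHeaders builds the Cache.Entry (data, etag, ttl, softTtl, ...)
        // that NetworkDispatcher writes to mCache and CacheDispatcher later inspects.
        return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
    }

    @Override
    protected void deliverResponse(String response) {
        mListener.onResponse(response);
    }
}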

The remaining strand is where delivery goes. Into the delivery implementation class, ExecutorDelivery:

/**
 * Delivers responses and errors.
 */
public class ExecutorDelivery implements ResponseDelivery {
    /** Used for posting responses, typically to the main thread. */
    private final Executor mResponsePoster;

    /**
     * Creates a new response delivery interface.
     * @param handler {@link Handler} to post responses on
     */
    public ExecutorDelivery(final Handler handler) {
        // Make an Executor that just wraps the handler.
        mResponsePoster = new Executor() {
            @Override
            public void execute(Runnable command) {
                handler.post(command);
            }
        };
    }

    /**
     * Creates a new response delivery interface, mockable version
     * for testing.
     * @param executor For running delivery tasks
     */
    public ExecutorDelivery(Executor executor) {
        mResponsePoster = executor;
    }

    @Override
    public void postResponse(Request<?> request, Response<?> response) {
        postResponse(request, response, null);
    }

    @Override
    public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
        request.markDelivered();
        request.addMarker("post-response");
        mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
    }

    @Override
    public void postError(Request<?> request, VolleyError error) {
        request.addMarker("post-error");
        Response<?> response = Response.error(error);
        mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, null));
    }

    /**
     * A Runnable used for delivering network responses to a listener on the
     * main thread.
     */
    @SuppressWarnings("rawtypes")
    private class ResponseDeliveryRunnable implements Runnable {
        private final Request mRequest;
        private final Response mResponse;
        private final Runnable mRunnable;

        public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
            mRequest = request;
            mResponse = response;
            mRunnable = runnable;
        }

        @SuppressWarnings("unchecked")
        @Override
        public void run() {
            // If this request has canceled, finish it and don't deliver.
            if (mRequest.isCanceled()) {
                mRequest.finish("canceled-at-delivery");
                return;
            }

            // Deliver a normal response or error, depending.
            if (mResponse.isSuccess()) {
                mRequest.deliverResponse(mResponse.result);
            } else {
                mRequest.deliverError(mResponse.error);
            }

            // If this is an intermediate response, add a marker, otherwise we're done
            // and the request can be finished.
            if (mResponse.intermediate) {
                mRequest.addMarker("intermediate-response");
            } else {
                mRequest.finish("done");
            }

            // If we have been provided a post-delivery runnable, run it.
            if (mRunnable != null) {
                mRunnable.run();
            }
        }
    }
}
In one sentence: the delivery posts the response to the main thread, where the concrete Request implementation (StringRequest and the like) hands the result to the Listener supplied when the request was created via deliverResponse, or to the ErrorListener via deliverError.
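To close the loop, a short sketch of where the callbacks posted by ExecutorDelivery finally land, on the main thread; the URL and log tag are placeholders:

import android.util.Log;

import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.Response;
import com.android.volley.VolleyError;
import com.android.volley.toolbox.StringRequest;

// Sketch: the two listeners below are what deliverResponse/deliverError
// eventually invoke, posted to the main thread by ExecutorDelivery.
public final class VolleyRoundTrip {
    private static final String TAG = "VolleyRoundTrip";   // placeholder tag

    public static void fetch(RequestQueue queue) {
        StringRequest request = new StringRequest(Request.Method.GET,
                "http://example.com/data",                  // placeholder URL
                new Response.Listener<String>() {
                    @Override
                    public void onResponse(String response) {
                        Log.i(TAG, "delivered on main thread: " + response.length() + " chars");
                    }
                },
                new Response.ErrorListener() {
                    @Override
                    public void onErrorResponse(VolleyError error) {
                        Log.w(TAG, "delivered error", error);
                    }
                });
        queue.add(request);
    }
}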