Volley Framework Core Source Code Analysis

Volley is a networking framework recommended at the 2013 Google I/O developer conference, typically used in projects to handle GET, POST, and other HTTP requests.
Its strengths are speed and ease of use: it suits frequent transfers of small payloads (fetching JSON strings and the like), which covers the needs of most Android apps.
Its weakness is that it is not very efficient for large transfers; downloading and displaying images is fine, but downloading large files is another story...
There are plenty of similar frameworks around, such as okhttp and android-async-http, but since Volley is the officially recommended one, it deserves our support.

Youku video introducing Volley: http://v.youku.com/v_show/id_XNzEwMjQzMTI0.html

As for documentation, there is none online, so I put together one myself from the source code, in web-page form.
Link: http://pan.baidu.com/s/1kTuX1SN  Password: m1lu

The PDF slides shown when Volley was introduced at the conference:
Link: http://pan.baidu.com/s/171ege  Password: bi59

Usage will not be covered in detail here; it is easy to find online.
Recommended reading: http://blog.csdn.net/t12x3456/article/details/9221611


------------------------------------------------------------

Now for the source code walkthrough.
The basic usage is to create and start a queue during initialization, then add requests to it.
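For orientation, here is a minimal usage sketch (the URL, listeners, and variable names are placeholders for illustration, not taken from the original post):

    RequestQueue queue = Volley.newRequestQueue(context);

    StringRequest request = new StringRequest(Request.Method.GET,
            "http://example.com/api",                   // placeholder URL
            new Response.Listener<String>() {
                @Override
                public void onResponse(String response) {
                    // runs on the main thread with the response body
                }
            },
            new Response.ErrorListener() {
                @Override
                public void onErrorResponse(VolleyError error) {
                    // runs on the main thread when the request fails
                }
            });

    queue.add(request);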


1. During initialization, a request queue (RequestQueue) is created and started

    /**
     * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
     *
     * @param context A {@link Context} to use for creating the cache dir.
     * @param stack An {@link HttpStack} to use for the network, or null for default.
     * @return A started {@link RequestQueue} instance.
     */
    public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

        String userAgent = "volley/0";
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }

        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }

        Network network = new BasicNetwork(stack);

        RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        queue.start();

        return queue;
    }



2. When RequestQueue.start() is called, the queue creates and starts several NetworkDispatcher threads, which effectively form a thread pool.
At the same time, when the RequestQueue is constructed, it creates a default delivery object, ExecutorDelivery, used to deliver errors and responses.
At its core, ExecutorDelivery is built around an Executor that is normally bound to the main thread; a rough sketch follows.
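Sketch of that wiring, based on the corresponding Volley code as I recall it (treat it as illustrative rather than an exact copy): the RequestQueue constructor passes in a Handler bound to the main looper, and ExecutorDelivery wraps it in an Executor so every response or error is posted back onto the main thread.

    // RequestQueue's default constructor roughly does:
    //     new ExecutorDelivery(new Handler(Looper.getMainLooper()))
    public ExecutorDelivery(final Handler handler) {
        // An Executor that simply posts each Runnable to the given handler,
        // i.e. back onto the main (UI) thread.
        mResponsePoster = new Executor() {
            @Override
            public void execute(Runnable command) {
                handler.post(command);
            }
        };
    }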

    /**
     * Starts the dispatchers in this queue.
     */
    public void start() {
        stop();  // Make sure any currently running dispatchers are stopped.
        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();

        // Create network dispatchers (and corresponding threads) up to the pool size.
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }



3. NetworkDispatcher extends Thread and is effectively one worker thread in the pool:
     a. While running, it keeps taking one Request at a time from the request queue.
     b. It then uses the BasicNetwork object's performRequest to execute the request over the network, returning a NetworkResponse.
     c. The NetworkResponse is parsed by the Request's parseNetworkResponse into the desired data type, Response<?>.
     d. The delivery object, ResponseDelivery (in practice its implementation ExecutorDelivery), delivers the response or error.
Notes
     For step b, the internal implementation uses Apache's HttpClient when Build.VERSION.SDK_INT < 9 and the platform's HttpURLConnection when >= 9
     (HttpURLConnection is the recommended choice, but it had defects on older versions).

     For step c, parseNetworkResponse must be overridden yourself, depending on whether you want an image, a string, an entity object parsed from JSON, and so on.

     The mQueue object is actually a PriorityBlockingQueue. As the name says, it is blocking:
     taking from an empty queue blocks until an element becomes available (the queue itself is unbounded, so adding never blocks),
     which is exactly what we want: when the queue is empty the thread just waits there, and the while(true) loop does not spin.
     The name also tells us there is a priority concept: the queued objects must implement Comparable and define the comparison rule.
     Since the queue feeds what is effectively a thread pool, priorities give good control over which kinds of requests run first.
     Volley defines four priority levels; ordinary requests get normal priority, while image requests get a somewhat lower one (see the sketch after this note).
     We won't go into more detail here.
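As a sketch of that ordering, modeled on Request's Comparable implementation (treat the exact field names as approximate): priority is compared first, and the insertion sequence number breaks ties so that requests of equal priority stay in FIFO order.

    // Inside Request<T>, roughly:
    public enum Priority {
        LOW, NORMAL, HIGH, IMMEDIATE   // the four priority levels
    }

    @Override
    public int compareTo(Request<T> other) {
        Priority left = this.getPriority();
        Priority right = other.getPriority();

        // Higher-priority requests sort to the front; equal priorities fall back
        // to the sequence number, i.e. the order in which they were added.
        return left == right
                ? this.mSequence - other.mSequence
                : right.ordinal() - left.ordinal();
    }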


    @Override
    public void run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        Request<?> request;
        while (true) {
            try {
                // Take a request from the queue.
                request = mQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }

            try {
                request.addMarker( "network-queue-take");

                // If the request was cancelled already, do not perform the
                // network request.
                if (request.isCanceled()) {
                    request.finish( "network-discard-cancelled");
                    continue;
                }

                addTrafficStatsTag(request);

                // Perform the network request.
                NetworkResponse networkResponse = mNetwork.performRequest(request);
                request.addMarker( "network-http-complete");

                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                    request.finish("not-modified");
                    continue;
                }

                // Parse the response here on the worker thread.
                Response<?> response = request.parseNetworkResponse(networkResponse);
                request.addMarker( "network-parse-complete");

                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.
                if (request.shouldCache() && response. cacheEntry != null) {
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker( "network-cache-written");
                }

                // Post the response back.
                request.markDelivered();
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                parseAndDeliverNetworkError(request, volleyError);
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                mDelivery.postError(request, new VolleyError(e));
            }
        }
    }

Everything above is the part that is initialized and starts running as soon as the app starts.
The NetworkDispatcher threads are like assembly-line machines in a workshop (four by default), running nonstop: whether or not raw material is there, they keep mechanically trying to fetch it.
When nothing is available they just sit blocked; once something arrives they carry out the whole series of steps,
finally turning the raw material (the request) into a product (the response), handing it to the delivery object to send out, and then moving on.


-----------------------------------------------------------------------------------



Adding requests, by contrast, is something we control ourselves, whenever we need one.

The add method also lives in the queue class RequestQueue.
Its key call is mNetworkQueue.add, i.e. adding a request to the queue,
which lets the blocked mQueue.take call in NetworkDispatcher continue.

    /**
     * Adds a Request to the dispatch queue.
     * @param request The request to service
     * @return The passed-in request
     */
    public <T> Request<T> add(Request<T> request) {
        // Tag the request as belonging to this queue and add it to the set of current requests.
        request.setRequestQueue(this);
        synchronized (mCurrentRequests) {
            mCurrentRequests.add(request);
        }

        // Process requests in the order they are added.
        request.setSequence(getSequenceNumber());
        request.addMarker( "add-to-queue");

        // If the request is uncacheable, skip the cache queue and go straight to the network.
        if (!request.shouldCache()) {
            mNetworkQueue.add(request);
            return request;
        }

        // Insert request into stage if there's already a request with the same cache key in flight.
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            if (mWaitingRequests.containsKey(cacheKey)) {
                // There is already a request in flight. Queue up.
                Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request<?>>();
                }
                stagedRequests.add(request);
                mWaitingRequests.put(cacheKey, stagedRequests);
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
                }
            } else {
                // Insert 'null' queue for this cacheKey, indicating there is now a request in
                // flight.
                mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request);
            }
            return request;
        }
    }



-----------------------------------------------------------------------------------

The core class Request, i.e. the request class, is at the heart of the Volley framework, and most of our custom work revolves around it.
From personal experience, in Java class design, protected methods are the ones exposed to subclasses.
For a framework that means methods the author expects users to override when subclassing, which is why most protected methods in frameworks have empty bodies.
Methods that must be overridden are generally declared abstract instead.

The Request methods you may need to override are listed below (the abstract ones are marked):
1. Map<String, String> getParams()
Returns the parameter map submitted with POST/PUT requests.
2. String getParamsEncoding()
Returns the encoding of the POST/PUT parameters; UTF-8 by default.
3. Response<T> parseNetworkResponse(NetworkResponse)
Abstract. Parses the response data returned by the network layer; it is generic, and the type matches the Request<T> type parameter.
4. parseNetworkError(VolleyError)
Parses errors returned by the network layer. Most errors are already wrapped and handled by the framework;
handle any custom special-case exceptions here.
5. deliverResponse(T)
Abstract. Delivers the response data; you typically define your own listener interface and invoke it here.

For reference, look at Volley's own Request subclasses such as JsonRequest and ImageRequest, or the sketch below.
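As a minimal sketch of such a subclass, here is a hypothetical GsonObjectRequest that parses a JSON body into an object with Gson; the class name, the Gson dependency, and the listener wiring are my own illustration, not part of Volley:

    import java.io.UnsupportedEncodingException;

    import com.android.volley.NetworkResponse;
    import com.android.volley.ParseError;
    import com.android.volley.Request;
    import com.android.volley.Response;
    import com.android.volley.toolbox.HttpHeaderParser;
    import com.google.gson.Gson;

    public class GsonObjectRequest<T> extends Request<T> {
        private final Class<T> clazz;
        private final Response.Listener<T> listener;
        private final Gson gson = new Gson();

        public GsonObjectRequest(String url, Class<T> clazz,
                                 Response.Listener<T> listener,
                                 Response.ErrorListener errorListener) {
            super(Method.GET, url, errorListener);
            this.clazz = clazz;
            this.listener = listener;
        }

        @Override
        protected Response<T> parseNetworkResponse(NetworkResponse response) {
            try {
                // Decode the raw bytes with the charset from the response headers,
                // then let Gson map the JSON string onto the target class.
                String json = new String(response.data,
                        HttpHeaderParser.parseCharset(response.headers));
                return Response.success(gson.fromJson(json, clazz),
                        HttpHeaderParser.parseCacheHeaders(response));
            } catch (UnsupportedEncodingException e) {
                return Response.error(new ParseError(e));
            }
        }

        @Override
        protected void deliverResponse(T response) {
            // Forward the parsed object to the caller's listener.
            listener.onResponse(response);
        }
    }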


-----------------------------------------------------------------------------------

That covers the code for Volley's basic flow.
Next, Volley's most distinctive feature: the cache system.
With web pages we all know that going back, or typing the URL again, often shows cached data directly,
but on Android, network responses are normally only cached for images.

If we want to cache everything else as well, JSON data for instance, other frameworks usually offer no direct support,
whereas Volley adopts a cache concept that provides something like web-page caching.
Put simply: once an endpoint has been requested, a later request can skip the network and read the data returned last time straight from the local cache.

HTTP has its own cache handling, namely the "Cache-Control" and "Expires" headers on the response.
Briefly, Cache-Control falls into two main cases:
1. No caching: no-cache, must-revalidate, and so on, i.e. fresh data is fetched on every request.
2. A caching duration: max-age=5, where the value is in seconds.
Expires caches until an absolute date, e.g. until December 21, 2012...

When both headers are present, Cache-Control overrides Expires.
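For example, assuming a response arrives with "Cache-Control: max-age=300", the expiry point is turned into an absolute timestamp; this is the same arithmetic parseCacheHeaders performs below:

    long now = System.currentTimeMillis();
    long maxAge = 300;                        // parsed from "max-age=300", in seconds
    long softExpire = now + maxAge * 1000;    // the entry counts as fresh until this millisecond value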

Volley's handling of these headers lives mainly in HttpHeaderParser's parseCacheHeaders method:
    /**
     * Extracts a {@link Cache.Entry} from a {@link NetworkResponse}.
     *
     * @param response The network response to parse headers from
     * @return a cache entry for the given response, or null if the response is not cacheable.
     */
    public static Cache.Entry parseCacheHeaders(NetworkResponse response) {
        long now = System.currentTimeMillis();

        Map<String, String> headers = response. headers;

        long serverDate = 0;
        long serverExpires = 0;
        long softExpire = 0;
        long maxAge = 0;
        boolean hasCacheControl = false;

        String serverEtag = null;
        String headerValue;

        headerValue = headers.get( "Date");
        if (headerValue != null) {
            serverDate = parseDateAsEpoch(headerValue);
        }

        headerValue = headers.get( "Cache-Control");
        if (headerValue != null) {
            hasCacheControl = true;
            String[] tokens = headerValue.split( ",");
            for ( int i = 0; i < tokens. length; i++) {
                String token = tokens[i].trim();
                if (token.equals( "no-cache") || token.equals("no-store")) {
                    return null;
                } else if (token.startsWith( "max-age=")) {
                    try {
                        maxAge = Long. parseLong(token.substring(8));
                    } catch (Exception e) {
                    }
                } else if (token.equals( "must-revalidate") || token.equals("proxy-revalidate" )) {
                    maxAge = 0;
                }
            }
        }

        headerValue = headers.get( "Expires");
        if (headerValue != null) {
            serverExpires = parseDateAsEpoch(headerValue);
        }

        serverEtag = headers.get( "ETag");

        // Cache-Control takes precedence over an Expires header, even if both exist and Expires
        // is more restrictive.
        if (hasCacheControl) {
            softExpire = now + maxAge * 1000;
        } else if (serverDate > 0 && serverExpires >= serverDate) {
            // Default semantic for Expire header in HTTP specification is softExpire.
            softExpire = now + (serverExpires - serverDate);
        }

        Cache.Entry entry = new Cache.Entry();
        entry.data = response.data;
        entry.etag = serverEtag;
        entry.softTtl = softExpire;
        entry.ttl = entry.softTtl;
        entry.serverDate = serverDate;
        entry.responseHeaders = headers;

        return entry;
    }

The method's main job is to compute, according to whichever caching scheme the headers use, the absolute millisecond timestamp at which the cached data expires,
and store it in the cache metadata object, Cache.Entry.

As the earlier code showed, when the RequestQueue starts, besides the NetworkDispatcher thread pool
it also creates and starts a cache dispatcher, CacheDispatcher:
    @Override
    public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

        // Make a blocking call to initialize the cache.
        mCache.initialize();

        while (true) {
            try {
                // Get a request from the cache triage queue, blocking until
                // at least one is available.
                final Request<?> request = mCacheQueue.take();
                request.addMarker( "cache-queue-take");

                // If the request has been canceled, don't bother dispatching it.
                if (request.isCanceled()) {
                    request.finish( "cache-discard-canceled");
                    continue;
                }

                // Attempt to retrieve this item from cache.
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    request.addMarker( "cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }

                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {
                    request.addMarker( "cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }

                // We have a cache hit; parse its data for delivery back to the request.
                request.addMarker( "cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry. data, entry.responseHeaders));
                request.addMarker( "cache-hit-parsed");

                if (!entry.refreshNeeded()) {
                    // Completely unexpired cache hit. Just deliver the response.
                    mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker( "cache-hit-refresh-needed");
                    request.setCacheEntry(entry);

                    // Mark the response as intermediate.
                    response.intermediate = true;

                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(request);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }

            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
        }
    }
Like NetworkDispatcher, the cache dispatcher is a thread that runs forever, and it too contains a blocking queue of the same type.
The difference is that it uses the request's cacheKey to look up the cache entry in the cache pool, and then checks whether that entry has expired or needs refreshing:
        /** True if the entry is expired. */
        public boolean isExpired() {
            return this.ttl < System.currentTimeMillis();
        }

        /** True if a refresh is needed from the original data source. */
        public boolean refreshNeeded() {
            return this.softTtl < System.currentTimeMillis();
        }
ttl and softTtl are essentially identical, and so are the two methods: each compares its timestamp against the current time.

In the cache dispatcher's loop, if the cached entry has fully expired, the request is simply put onto the network queue.
If it has not expired, the cached data is first parsed into the Response the request needs, and only then is refreshNeeded checked.
If no refresh is needed, the parsed data is delivered straight away.

If at this point the check says a refresh is needed,
then the response is delivered and, at the same time, the request is also added to the network queue.

Although ttl and softTtl are essentially the same, the loop runs parseNetworkResponse between the two checks.
For larger data such as images, parsing takes some time, so it is unlikely but possible that
the HTTP cache lifetime expires during that parsing; in that case the cached data is used directly while the request is also re-queued onto the network queue.


When Volley is set up, it also creates a DiskBasedCache to store the cached entries; the default maximum cache size is 5 MB, and it can be configured yourself.
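A hedged sketch of raising that limit, assuming you assemble the queue yourself instead of calling Volley.newRequestQueue (the 10 MB figure and the directory name are arbitrary choices for illustration):

    // "context" is whatever Context you have at hand.
    File cacheDir = new File(context.getCacheDir(), "volley");
    // The DiskBasedCache(File, int) constructor overrides the default 5 MB limit.
    Cache cache = new DiskBasedCache(cacheDir, 10 * 1024 * 1024);
    Network network = new BasicNetwork(new HurlStack());
    RequestQueue queue = new RequestQueue(cache, network);
    queue.start();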


-----------------------------------------------------------------------------------


Another big Volley feature is the ability to cancel requests.
Request contains a cancel flag,
and RequestQueue provides a cancelAll method for batch cancellation; internally it simply sets the cancel flag on every matching Request.
cancelAll also accepts a simple filter for cancelling only the requests that match some condition (a short sketch follows below).

In the dispatcher run() methods above you can see that every time a request is taken from the queue, isCanceled is checked.
If the request has already been canceled, Request's finish method is called, which in effect calls RequestQueue's finish method
to remove that Request: no network communication is performed for it, and the loop just continues with the next Request.
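A brief usage sketch, assuming requests were tagged when they were added (the tag object and the filter condition here are just examples):

    // Tag requests when adding them, e.g. with the hosting Activity.
    request.setTag(this);
    queue.add(request);

    // Later, cancel everything carrying that tag (typically in onStop):
    queue.cancelAll(this);

    // Or cancel by an arbitrary condition via a RequestFilter:
    queue.cancelAll(new RequestQueue.RequestFilter() {
        @Override
        public boolean apply(Request<?> request) {
            return true;   // inspect the request and return true to cancel it
        }
    });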

The relevant code is RequestQueue's finish method:
    /**
     * Called from {@link Request#finish(String)}, indicating that processing of the given request
     * has finished.
     *
     * <p>Releases waiting requests for <code>request.getCacheKey()</code> if
     *      <code>request.shouldCache()</code> .</p>
     */
    void finish (Request<?> request) {
        // Remove from the set of requests currently being processed.
        synchronized (mCurrentRequests) {
            mCurrentRequests.remove(request);
        }

        if (request.shouldCache()) {
            synchronized (mWaitingRequests) {
                String cacheKey = request.getCacheKey();
                Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
                if (waitingRequests != null) {
                    if (VolleyLog.DEBUG) {
                        VolleyLog.v("Releasing %d waiting requests for cacheKey=%s.",
                                waitingRequests.size(), cacheKey);
                    }
                    // Process all queued up requests. They won't be considered as in flight, but
                    // that's not a problem as the cache has been primed by 'request'.
                    mCacheQueue.addAll(waitingRequests);
                }
            }
        }
    }

-----------------------------------------------------------------------------------

The retry feature
Most frameworks have one; it mainly controls automatically re-issuing a request after it times out.

As Volley's core class, Request is also where retry is controlled: a RetryPolicy object is passed in via setRetryPolicy.
If you set nothing there is a default: a DefaultRetryPolicy is created and passed to setRetryPolicy.
Of course you can also call setRetryPolicy yourself and configure the parameters as needed.

The defaults are a 2.5-second timeout, 1 retry, and a backoff multiplier of 1f.

ImageRequest, for example, sets its own retry rules in its constructor; you can do the same when you need to:
setRetryPolicy(new DefaultRetryPolicy(
     IMAGE_TIMEOUT_MS,     // timeout in ms
     IMAGE_MAX_RETRIES,    // number of retries after a timeout
     IMAGE_BACKOFF_MULT)); // backoff multiplier

The backoff multiplier may be confusing; it controls how quickly the timeout grows on each retry.
Because of it, the timeout increases geometrically with the retry count, scaled by the multiplier.
In the default case the multiplier is 1f and the timeout is 2.5 seconds:
the first attempt times out after 2.5 seconds, and a retry is issued;
the second attempt's timeout becomes 2.5 += (2.5 * 1), i.e. 5 seconds;
the third: 5 += (5 * 1), i.e. 10 seconds;
...

If the multiplier is changed to 2, the timeouts become:
first attempt: 2.5 seconds
second attempt: 2.5 += (2.5 * 2), i.e. 7.5 seconds
third attempt: 7.5 += (7.5 * 2), i.e. 22.5 seconds
...

The algorithm is in the retry method below; the key line is the one that bumps mCurrentTimeoutMs:
    /**
     * Prepares for the next retry by applying a backoff to the timeout.
     * @param error The error code of the last attempt.
     */
    @Override
    public void retry(VolleyError error) throws VolleyError {
        mCurrentRetryCount++;
        mCurrentTimeoutMs += (mCurrentTimeoutMs * mBackoffMultiplier);
        if (!hasAttemptRemaining()) {
            throw error;
        }
    }


-----------------------------------------------------------------------------------

Finally, Volley also differs from traditional networking frameworks in how it handles asynchronous image loading; for details see this post:
http://www.eoeandroid.com/thread-334315-1-1.html


