Android Source Code Series, Part 8: A Deep Dive into the LruCache Caching Strategy from the Source Code


        Please credit the source when reposting: http://blog.csdn.net/llew2011/article/details/51668397

        Caching is used very widely in Android development. The most common case is caching images, since bitmaps are memory-hungry, and well-known image-loading libraries such as Android-Universal-Image-Loader all rely on caching. A cache is usually organized in three levels, a memory cache, a disk cache and the network, loaded in the order memory cache → disk cache → network. To make memory caching easy, Google provides the LruCache class in its v4 support package. The class is important in practice and comes up frequently in interviews. Today we'll dig into LruCache from the source code. If you already know LruCache inside out, congratulations, you can skip this article (*^__^*) ……

        Before we get to LruCache, let's first look at what the least recently used algorithm is. LRU (Least Recently Used) is, put simply, a policy that evicts the objects that have gone unused for the longest stretch of time. Java already ships an implementation of this ordering: LinkedHashMap. LinkedHashMap is a subclass of HashMap that, unlike HashMap, maintains a predictable iteration order and can optionally order its entries by access. Let's start with an example:

public class Test {

    public static void main(String[] args) {
        LinkedHashMap<String, String> params1 = new LinkedHashMap<>(16, 0.75f, false);
        LinkedHashMap<String, String> params2 = new LinkedHashMap<>(16, 0.75f, true);
        for (int i = 10; i < 21; i++) {
            params1.put("key:" + i, "value:" + i);
            params2.put("key:" + i, "value:" + i);
        }
        params1.get("key:12");
        params1.get("key:11");
        params1.put("key:21", "value:21");
        params1.put("key:18", "value:18");
        params2.get("key:12");
        params2.get("key:11");
        params2.put("key:21", "value:21");
        params2.put("key:18", "value:18");
        print(params1);
        print(params2);
    }

    static void print(Map<String, String> params) {
        Iterator<Entry<String, String>> iterator = params.entrySet().iterator();
        while (iterator.hasNext()) {
            Entry<String, String> entry = iterator.next();
            System.out.println(entry.getKey() + "   " + entry.getValue());
        }
    }
}
        The only difference between params1 and params2 is the third constructor argument: false for one, true for the other. What does this boolean control? Let's run the code and look at the output:

        The two maps print differently: params2 lists the accessed entries at the tail, in access order, while params1 keeps its original order. Why? The only explanation is the true/false passed at construction time. So what exactly does the third parameter of LinkedHashMap's constructor mean? Let's look at what the constructor parameters do:
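The two orderings can also be checked programmatically. Here is a minimal sketch (the class name AccessOrderDemo and the helper keyOrder are my own) that replays the same sequence of accesses on both kinds of map:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;

public class AccessOrderDemo {

    // Replays the accesses from the example above on a map built with the given accessOrder.
    static List<String> keyOrder(boolean accessOrder) {
        LinkedHashMap<String, String> map = new LinkedHashMap<>(16, 0.75f, accessOrder);
        for (int i = 10; i < 21; i++) {
            map.put("key:" + i, "value:" + i);
        }
        map.get("key:12");
        map.get("key:11");
        map.put("key:21", "value:21");
        map.put("key:18", "value:18"); // put on an existing key also counts as an access
        return new ArrayList<>(map.keySet());
    }

    public static void main(String[] args) {
        System.out.println("insertion order: " + keyOrder(false));
        System.out.println("access order:    " + keyOrder(true));
    }
}
```

With accessOrder=false the keys keep their insertion order, key:10 through key:21; with accessOrder=true the four accessed keys end up at the tail, in the order key:12, key:11, key:21, key:18.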

        The three parameters of LinkedHashMap's constructor:

  • initialCapacity is the requested capacity of the map (note: this is not the actual table size, which is the smallest power of two 2ⁿ >= initialCapacity; if initialCapacity is 10, the real capacity is 16, and so on).
  • loadFactor is the load factor (default 0.75). When the number of stored entries exceeds capacity × loadFactor, the map resizes, doubling its previous capacity.
  • accessOrder is a boolean: true orders the entries by access, from least recently accessed to most recently accessed (note: both get() and put() count as accesses); false orders them by insertion.

        With the parameters explained, the earlier output makes sense: when accessOrder is true, every access to an element moves it to the tail of the internal queue. If you want a more detailed picture of how this works, read the LinkedHashMap source yourself; we won't go further into it here. The reason for this detour is that LinkedHashMap is tightly bound to today's topic: it is the very core of LruCache. Let's turn to LruCache now.
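Because this LinkedHashMap machinery is the whole trick, it is worth seeing how little code a bare-bones LRU cache needs before reading LruCache itself. The following is a sketch under my own naming (SimpleLruCache, with a capacity of 3 in the demo); note that LruCache does not extend LinkedHashMap like this, it wraps one instead:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// An entry-count-bounded LRU cache built directly on LinkedHashMap.
public class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public SimpleLruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true: iteration runs LRU -> MRU
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called by LinkedHashMap after every insertion; returning true evicts the LRU entry.
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        SimpleLruCache<String, Integer> cache = new SimpleLruCache<>(3);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.put("c", 3);
        cache.get("a");    // "a" becomes the most recently used entry
        cache.put("d", 4); // evicts "b", the least recently used entry
        System.out.println(cache.keySet());
    }
}
```

LruCache does not use removeEldestEntry; it drives eviction itself in trimToSize() so that it can measure size in arbitrary units and report evictions through entryRemoved(), but the underlying ordering machinery is exactly the same.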

        As mentioned in previous posts, comments are invaluable when reading source code, so let's start with LruCache's class comment:

/**
 * A cache that holds strong references to a limited number of values. Each time a value is accessed,
 * it is moved to the head of a queue. When a value is added to a full cache, the value at the end of
 * that queue is evicted and may become eligible for garbage collection.
 *
 * <p>
 * If your cached values hold resources that need to be explicitly released, override {@link #entryRemoved}.
 *
 * <p>
 * If a cache miss should be computed on demand for the corresponding keys, override {@link #create}.
 * This simplifies the calling code, allowing it to assume a value will always be returned, even when
 * there's a cache miss.
 *
 * <p>
 * By default, the cache size is measured in the number of entries. Override {@link #sizeOf} to size
 * the cache in different units. For example, this cache is limited to 4MiB of bitmaps:
 *
 * <pre>
 * {@code
 *   int cacheSize = 4 * 1024 * 1024; // 4MiB
 *   LruCache<String, Bitmap> bitmapCache = new LruCache<String, Bitmap>(cacheSize) {
 *       protected int sizeOf(String key, Bitmap value) {
 *           return value.getByteCount();
 *       }
 *   }}
 * </pre>
 *
 * <p>
 * This class is thread-safe. Perform multiple cache operations atomically by synchronizing on the cache:
 *
 * <pre>
 * {@code
 *   synchronized (cache) {
 *     if (cache.get(key) == null) {
 *         cache.put(key, value);
 *     }
 *   }}
 * </pre>
 *
 * <p>
 * This class does not allow null to be used as a key or value. A return value of null from
 * {@link #get}, {@link #put} or {@link #remove} is unambiguous: the key was not in the cache.
 *
 * <p>
 * This class appeared in Android 3.1 (Honeycomb MR1); it's available as part of Android's Support
 * Package for earlier releases.
 */

        A rough translation of the comment:

        The cache holds strong references to a limited number of values. Each time a value is accessed it is moved to the head of the queue. When a value is added to a full cache, the value at the tail of the queue is evicted and may become eligible for garbage collection.

        If your cached values hold resources that need to be explicitly released, override entryRemoved() to release them.

        If a cache miss should be computed on demand for the corresponding key, override create(). This simplifies the calling code, which can then assume a value is always returned, even on a miss.

        By default the cache size is measured as the number of entries. Override sizeOf() to measure entries in other units, for example to limit a bitmap cache to 4 MiB:

        The class is thread-safe. To perform several cache operations atomically, synchronize on the cache:

        The class does not allow null as a key or value. A null return from get(K), put(K, V) or remove(K) unambiguously means the key was not in the cache.

        The class appeared in Android 3.1 (Honeycomb MR1); for earlier releases it is available as part of Android's Support Package.

        With the class comment understood, here is the complete LruCache source:

public class LruCache<K, V> {
    private final LinkedHashMap<K, V> map; // the core of LruCache
    private int size;          // current cache size
    private int maxSize;       // maximum cache size
    private int putCount;      // number of entries put into the cache
    private int createCount;   // number of values created
    private int evictionCount; // number of entries evicted
    private int hitCount;      // number of cache hits
    private int missCount;     // number of cache misses

    /**
     * @param maxSize
     *            for caches that do not override {@link #sizeOf}, this is the maximum number of
     *            entries in the cache. For all other caches, this is the maximum sum of the sizes
     *            of the entries in this cache.
     */
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }

    /**
     * Sets the size of the cache.
     *
     * @param maxSize
     *            The new maximum size.
     *
     * @hide
     */
    public void resize(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        synchronized (this) {
            this.maxSize = maxSize;
        }
        trimToSize(maxSize);
    }

    /**
     * Returns the value for {@code key} if it exists in the cache or can be created by
     * {@code #create}. If a value was returned, it is moved to the head of the queue. This returns
     * null if a value is not cached and cannot be created.
     */
    public final V get(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V mapValue;
        synchronized (this) {
            mapValue = map.get(key);
            if (mapValue != null) {
                hitCount++;
                return mapValue;
            }
            missCount++;
        }

        /*
         * Attempt to create a value. This may take a long time, and the map may be different when
         * create() returns. If a conflicting value was added to the map while create() was working,
         * we leave that value in the map and release the created value.
         */
        V createdValue = create(key);
        if (createdValue == null) {
            return null;
        }

        synchronized (this) {
            createCount++;
            mapValue = map.put(key, createdValue);
            if (mapValue != null) {
                // There was a conflict so undo that last put
                map.put(key, mapValue);
            } else {
                size += safeSizeOf(key, createdValue);
            }
        }

        if (mapValue != null) {
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            trimToSize(maxSize);
            return createdValue;
        }
    }

    /**
     * Caches {@code value} for {@code key}. The value is moved to the head of the queue.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V put(K key, V value) {
        if (key == null || value == null) {
            return null;
        }

        V previous;
        synchronized (this) {
            putCount++;
            size += safeSizeOf(key, value);
            previous = map.put(key, value);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, value);
        }

        trimToSize(maxSize);
        return previous;
    }

    /**
     * @param maxSize
     *            the maximum size of the cache before returning. May be -1 to evict even 0-sized
     *            elements.
     */
    private void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }
                if (size <= maxSize) {
                    break;
                }
                Map.Entry<K, V> toEvict = null;
                for (Map.Entry<K, V> entry : map.entrySet()) {
                    toEvict = entry;
                }
                if (toEvict == null) {
                    break;
                }
                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                evictionCount++;
            }
            entryRemoved(true, key, value, null);
        }
    }

    /**
     * Removes the entry for {@code key} if it exists.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V remove(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V previous;
        synchronized (this) {
            previous = map.remove(key);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, null);
        }

        return previous;
    }

    /**
     * Called for entries that have been evicted or removed. This method is invoked when a value is
     * evicted to make space, removed by a call to {@link #remove}, or replaced by a call to
     * {@link #put}. The default implementation does nothing.
     *
     * <p>
     * The method is called without synchronization: other threads may access the cache while this
     * method is executing.
     *
     * @param evicted
     *            true if the entry is being removed to make space, false if the removal was caused
     *            by a {@link #put} or {@link #remove}.
     * @param newValue
     *            the new value for {@code key}, if it exists. If non-null, this removal was caused
     *            by a {@link #put}. Otherwise it was caused by an eviction or a {@link #remove}.
     */
    protected void entryRemoved(boolean evicted, K key, V oldValue, V newValue) {}

    /**
     * Called after a cache miss to compute a value for the corresponding key. Returns the computed
     * value or null if no value can be computed. The default implementation returns null.
     *
     * <p>
     * The method is called without synchronization: other threads may access the cache while this
     * method is executing.
     *
     * <p>
     * If a value for {@code key} exists in the cache when this method returns, the created value
     * will be released with {@link #entryRemoved} and discarded. This can occur when multiple
     * threads request the same key at the same time (causing multiple values to be created), or
     * when one thread calls {@link #put} while another is creating a value for the same key.
     */
    protected V create(K key) {
        return null;
    }

    private int safeSizeOf(K key, V value) {
        int result = sizeOf(key, value);
        if (result < 0) {
            throw new IllegalStateException("Negative size: " + key + "=" + value);
        }
        return result;
    }

    /**
     * Returns the size of the entry for {@code key} and {@code value} in user-defined units. The
     * default implementation returns 1 so that size is the number of entries and max size is the
     * maximum number of entries.
     *
     * <p>
     * An entry's size must not change while it is in the cache.
     */
    protected int sizeOf(K key, V value) {
        return 1;
    }

    /**
     * Clear the cache, calling {@link #entryRemoved} on each removed entry.
     */
    public final void evictAll() {
        trimToSize(-1); // -1 will evict 0-sized elements
    }

    /**
     * For caches that do not override {@link #sizeOf}, this returns the number of entries in the
     * cache. For all other caches, this returns the sum of the sizes of the entries in this cache.
     */
    public synchronized final int size() {
        return size;
    }

    /**
     * For caches that do not override {@link #sizeOf}, this returns the maximum number of entries
     * in the cache. For all other caches, this returns the maximum sum of the sizes of the entries
     * in this cache.
     */
    public synchronized final int maxSize() {
        return maxSize;
    }

    /**
     * Returns the number of times {@link #get} returned a value that was already present in the
     * cache.
     */
    public synchronized final int hitCount() {
        return hitCount;
    }

    /**
     * Returns the number of times {@link #get} returned null or required a new value to be created.
     */
    public synchronized final int missCount() {
        return missCount;
    }

    /**
     * Returns the number of times {@link #create(Object)} returned a value.
     */
    public synchronized final int createCount() {
        return createCount;
    }

    /**
     * Returns the number of times {@link #put} was called.
     */
    public synchronized final int putCount() {
        return putCount;
    }

    /**
     * Returns the number of values that have been evicted.
     */
    public synchronized final int evictionCount() {
        return evictionCount;
    }

    /**
     * Returns a copy of the current contents of the cache, ordered from least recently accessed to
     * most recently accessed.
     */
    public synchronized final Map<K, V> snapshot() {
        return new LinkedHashMap<K, V>(map);
    }

    @Override
    public synchronized final String toString() {
        int accesses = hitCount + missCount;
        int hitPercent = accesses != 0 ? (100 * hitCount / accesses) : 0;
        return String.format("LruCache[maxSize=%d,hits=%d,misses=%d,hitRate=%d%%]",
                maxSize, hitCount, missCount, hitPercent);
    }
}
        LruCache defines eight fields. The core one is map, a LinkedHashMap; the int fields are bookkeeping counters whose roles the inline comments make clear, so we won't go through them one by one.

        With the fields out of the way, let's look at the constructor:

public LruCache(int maxSize) {
    if (maxSize <= 0) {
        throw new IllegalArgumentException("maxSize <= 0");
    }
    this.maxSize = maxSize;
    this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
}
        The constructor takes a single int parameter, maxSize, the maximum capacity of the cache. Depending on how sizeOf() is defined, this capacity can count cached entries, bytes of memory, or any other unit.
        A cache essentially comes down to put(K, V) and get(K), and LruCache follows that pattern. Let's start with put(K, V):
public final V put(K key, V value) {
    if (key == null || value == null) {
        return null;
    }

    V previous;
    synchronized (this) {
        putCount++;
        size += safeSizeOf(key, value);
        previous = map.put(key, value);
        if (previous != null) {
            size -= safeSizeOf(key, previous);
        }
    }

    if (previous != null) {
        entryRemoved(false, key, previous, value);
    }

    trimToSize(maxSize);
    return previous;
}
        put(K, V) first checks K and V for null; if either is null it simply returns null. Otherwise it locks on the cache object for thread safety and increments putCount. It then grows size by the return value of safeSizeOf(K, V); safeSizeOf() delegates to sizeOf(), which returns 1 by default, so by default each entry adds 1 to size. The actual storage is delegated to LinkedHashMap's put(K, V). If the map already held a value for this key, size is corrected back down and entryRemoved() notifies the outside world that a value was replaced. Finally, trimToSize() checks whether the cache has exceeded its capacity.

        Let's look at how trimToSize() enforces the capacity bound:

private void trimToSize(int maxSize) {
    while (true) {
        K key;
        V value;
        synchronized (this) {
            if (size < 0 || (map.isEmpty() && size != 0)) {
                throw new IllegalStateException(getClass().getName()
                        + ".sizeOf() is reporting inconsistent results!");
            }
            if (size <= maxSize) {
                break;
            }
            Map.Entry<K, V> toEvict = null;
            for (Map.Entry<K, V> entry : map.entrySet()) {
                toEvict = entry;
            }
            if (toEvict == null) {
                break;
            }
            key = toEvict.getKey();
            value = toEvict.getValue();
            map.remove(key);
            size -= safeSizeOf(key, value);
            evictionCount++;
        }
        entryRemoved(true, key, value, null);
    }
}
        trimToSize() is an infinite while loop; under normal conditions it exits once size <= maxSize, i.e. the cache is back within bounds. While the cache is over budget, it walks to the last entry of the map's iteration, removes it if non-null, updates size and evictionCount, and calls entryRemoved() to report the eviction. (Note: in an access-ordered LinkedHashMap, iteration runs from least to most recently used, so the last entry of the iteration is actually the most recently used one; current AOSP versions instead evict the first entry, e.g. via map.eldest() or entrySet().iterator().next(), which is the behavior LRU requires.)
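As a sanity check of the eviction walk, here is a plain-Java sketch (class and method names are my own) that trims an access-ordered LinkedHashMap down to an entry budget by repeatedly evicting the first, i.e. least recently used, entry, the way current AOSP versions do:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of LruCache.trimToSize(): evict least-recently-used entries until the budget holds.
public class TrimDemo {

    // Evicts eldest entries from an access-ordered map until at most maxEntries remain;
    // returns the number of evictions performed.
    static <K, V> int trimToSize(LinkedHashMap<K, V> map, int maxEntries) {
        int evictionCount = 0;
        while (map.size() > maxEntries) {
            // The first entry of an access-ordered LinkedHashMap is the least recently used.
            Map.Entry<K, V> toEvict = map.entrySet().iterator().next();
            map.remove(toEvict.getKey());
            evictionCount++;
        }
        return evictionCount;
    }

    // Builds a 5-entry map, touches key:0, trims to 3 entries, and returns the surviving keys.
    static String demo() {
        LinkedHashMap<String, Integer> map = new LinkedHashMap<>(16, 0.75f, true);
        for (int i = 0; i < 5; i++) {
            map.put("key:" + i, i);
        }
        map.get("key:0"); // touching key:0 protects it from eviction
        trimToSize(map, 3);
        return map.keySet().toString();
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Touching key:0 moves it to the tail, so the untouched key:1 and key:2 are the ones evicted.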

        With the put() flow clear, let's look at LruCache's get(K):

public final V get(K key) {
    if (key == null) {
        throw new NullPointerException("key == null");
    }

    V mapValue;
    synchronized (this) {
        mapValue = map.get(key);
        if (mapValue != null) {
            hitCount++;
            return mapValue;
        }
        missCount++;
    }

    /*
     * Attempt to create a value. This may take a long time, and the map may be different when
     * create() returns. If a conflicting value was added to the map while create() was working,
     * we leave that value in the map and release the created value.
     */
    V createdValue = create(key);
    if (createdValue == null) {
        return null;
    }

    synchronized (this) {
        createCount++;
        mapValue = map.put(key, createdValue);
        if (mapValue != null) {
            // There was a conflict so undo that last put
            map.put(key, mapValue);
        } else {
            size += safeSizeOf(key, createdValue);
        }
    }

    if (mapValue != null) {
        entryRemoved(false, key, createdValue, mapValue);
        return mapValue;
    } else {
        trimToSize(maxSize);
        return createdValue;
    }
}
        get() first checks the key; a null key throws a NullPointerException. Otherwise it looks the key up in the map and assigns the result to mapValue. If mapValue is non-null this is a hit: hitCount is incremented and the value is returned. On a miss, missCount is incremented and create(K) is called, with its return value assigned to createdValue. If createdValue is null, get() simply returns null. Otherwise createCount is incremented and createdValue is put into the map. If that put returns a previous value (mapValue), there was a conflict: another thread inserted a value for the same key while create() was running, so the conflicting value is put back and entryRemoved() releases the created one. Otherwise size is increased. Finally, when mapValue is null, trimToSize() checks the capacity bound before the created value is returned.
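The conflict-handling branch is the subtle part: create() runs outside the lock, so another thread may insert a value for the same key in the meantime. Here is a plain-Java sketch of just that step (the names CreateConflictDemo and putCreated are my own, and an ordinary Map stands in for LruCache's internal state):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of LruCache.get()'s miss path: a value is created outside the lock,
// then any value that raced in while create() was running is kept instead.
public class CreateConflictDemo {

    // Inserts createdValue unless the map already gained a value for key;
    // returns the value the caller should actually use.
    static <K, V> V putCreated(Map<K, V> map, K key, V createdValue) {
        synchronized (map) {
            V mapValue = map.put(key, createdValue);
            if (mapValue != null) {
                // There was a conflict, so undo that last put and keep the existing value.
                map.put(key, mapValue);
                return mapValue;
            }
            return createdValue;
        }
    }

    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("k", "raced-in"); // simulate a competing thread winning the race
        System.out.println(putCreated(map, "k", "created")); // the created value is discarded
        System.out.println(map.get("k"));
    }
}
```

In the real class the discarded value is additionally handed to entryRemoved() so that any resources it holds can be released.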

        After get(K), the last operation to cover is remove(K):

public final V remove(K key) {
    if (key == null) {
        throw new NullPointerException("key == null");
    }

    V previous;
    synchronized (this) {
        previous = map.remove(key);
        if (previous != null) {
            size -= safeSizeOf(key, previous);
        }
    }

    if (previous != null) {
        entryRemoved(false, key, previous, null);
    }

    return previous;
}
        remove() follows the same pattern as put() and get(): it first rejects a null key, since LruCache forbids null keys and values. With a valid key it delegates to the map's remove() and assigns the result to previous. If previous is non-null, size is reduced and entryRemoved() notifies the outside world that an entry was removed. Finally previous is returned.

        Having read through put(), get() and remove(), the workflow of LruCache is clear: size and maxSize bound the cache, and whatever exceeds the bound is evicted. When using LruCache you can override entryRemoved(), sizeOf() and create() for finer-grained control. To cache bitmaps, for example, we might do the following:

int maxSize = (int) (Runtime.getRuntime().maxMemory() / 8);
LruCache<String, Bitmap> bitmapCaches = new LruCache<String, Bitmap>(maxSize) {
    @Override
    protected int sizeOf(String key, Bitmap value) {
        return value.getRowBytes() * value.getHeight();
    }

    @Override
    protected void entryRemoved(boolean evicted, String key, Bitmap oldBitmap, Bitmap newBitmap) {
        // release oldBitmap here
    }

    @Override
    protected Bitmap create(String key) {
        Bitmap bitmap = null;
        // create the bitmap...
        return bitmap;
    }
};
        That covers LruCache's put(), get() and remove(). From the walkthrough we can summarize:
  • LruCache is a cache built on the LRU algorithm; its core is a LinkedHashMap
  • When its capacity limit is reached, LruCache evicts the least recently used objects from the cache
  • LruCache does not actually free memory; it only drops its references to the evicted objects so they become eligible for garbage collection
  • LruCache is thread-safe: get(), put() and remove() are all internally synchronized

        That's it for LruCache. Thanks for reading (*^__^*) ……




