java.util.HashMap


HashMap

2015.01.12 & 13    By 970655147

Note: everything between "[" and "]" is my own addition.


Hash table based implementation of the Map interface. This implementation provides all of the optional map operations, and permits null values and the null key. (Apart from being unsynchronized and permitting nulls, the HashMap class is roughly equivalent to Hashtable.) This class makes no guarantees as to the order of the map; in particular, it does not guarantee that the order will remain constant over time.

Assuming the hash function disperses the elements properly among the buckets, this implementation provides constant-time performance for the basic operations (get and put). Iteration over collection views requires time proportional to the "capacity" of the HashMap instance (the number of buckets) plus its size (the number of key-value mappings). Thus, if iteration performance is important, do not set the initial capacity too high (or the load factor too low).

An instance of HashMap has two parameters that affect its performance: the initial capacity and the load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created. The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the hash table is rehashed (that is, its internal data structures are rebuilt) so that it has approximately twice the number of buckets.

As a general rule, the default load factor (.75) offers a good tradeoff between time and space costs. Higher values decrease the space overhead but increase the lookup cost (reflected in most of the operations of the HashMap class, including get and put). The expected number of entries in the map and its load factor should be taken into account when setting the initial capacity, so as to minimize the number of rehash operations. If the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash operations will ever occur.

If many mappings are to be stored in a HashMap instance, creating it with a sufficiently large initial capacity will allow the mappings to be stored more efficiently than letting it perform automatic rehashing as needed to grow the table. [Configure the initial capacity and loadFactor to improve efficiency.]
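As a rough illustration of the sizing rule above (a sketch of the arithmetic, not the JDK's internal code; `CapacityPlanner` and its methods are hypothetical helper names), a table created for `expected / loadFactor + 1` entries, rounded up to a power of two, never needs to grow:

```java
// Sketch: choosing an initial capacity so that no rehash ever occurs.
// Per the Javadoc above, no rehash happens while
//   entries <= capacity * loadFactor,
// so request an initial capacity larger than expected / loadFactor.
public class CapacityPlanner {
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    // smallest power of two >= n (HashMap rounds requested capacities this way)
    static int roundUpToPowerOf2(int n) {
        int highest = Integer.highestOneBit(n);
        if (highest == 0) return 1;
        return (Integer.bitCount(n) > 1) ? highest << 1 : highest;
    }

    // capacity to request so that `expected` entries fit without a resize
    static int capacityFor(int expected) {
        return roundUpToPowerOf2((int) (expected / DEFAULT_LOAD_FACTOR) + 1);
    }

    public static void main(String[] args) {
        int expected = 1000;
        int cap = capacityFor(expected);                           // 2048
        System.out.println(cap);
        // the resulting threshold (capacity * loadFactor) covers all entries
        System.out.println(expected <= cap * DEFAULT_LOAD_FACTOR); // true
    }
}
```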

Note that this implementation is not synchronized. If multiple threads access a hash map concurrently, and at least one of the threads modifies the map structurally, it must be synchronized externally. (A structural modification is any operation that adds or deletes one or more mappings; merely changing the value associated with a key that an instance already contains is not a structural modification.) This is typically accomplished by synchronizing on some object that naturally encapsulates the map. If no such object exists, the map should be "wrapped" using the Collections.synchronizedMap method. This is best done at creation time, to prevent accidental unsynchronized access to the map:

Map m = Collections.synchronizedMap(new HashMap(…));
The iterators returned by all of this class's "collection view methods" are fail-fast: if the map is structurally modified at any time after the iterator is created, in any way except through the iterator's own remove method, the iterator will throw a ConcurrentModificationException. Thus, in the face of concurrent modification, the iterator fails quickly and cleanly, rather than risking arbitrary, non-deterministic behavior at an undetermined time in the future.

Note that the fail-fast behavior of an iterator cannot be guaranteed: generally speaking, it is impossible to make any hard guarantees in the presence of unsynchronized concurrent modification. Fail-fast iterators throw ConcurrentModificationException on a best-effort basis. Therefore, it would be wrong to write a program that depends on this exception for its correctness; the fail-fast behavior of iterators should be used only to detect bugs.
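The fail-fast behavior can be observed directly (a small demonstration, not part of the quoted source; the class name is mine). Putting a new key while iterating changes modCount, so the iterator's next step throws, on a best-effort basis:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

// Demonstrates the fail-fast behavior described above: putting a NEW key
// while iterating is a structural modification, so the iterator's next()
// throws ConcurrentModificationException (best effort, not a guarantee).
public class FailFastDemo {
    static boolean modifyWhileIterating() {
        Map<String, Integer> m = new HashMap<>();
        m.put("a", 1);
        m.put("b", 2);
        try {
            for (String k : m.keySet()) {
                m.put("c", 3); // structural modification: adds a new mapping
            }
            return false; // would mean the modification went unnoticed
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(modifyWhileIterating()); // true on typical JDKs
    }
}
```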

This class is a member of the Java Collections Framework.

start
->

Declaration

/**
 * Hash table based implementation of the <tt>Map</tt> interface.  This
 * implementation provides all of the optional map operations, and permits
 * <tt>null</tt> values and the <tt>null</tt> key.  (The <tt>HashMap</tt>
 * class is roughly equivalent to <tt>Hashtable</tt>, except that it is
 * unsynchronized and permits nulls.)  This class makes no guarantees as to
 * the order of the map; in particular, it does not guarantee that the order
 * will remain constant over time.
 *
 * <p>This implementation provides constant-time performance for the basic
 * operations (<tt>get</tt> and <tt>put</tt>), assuming the hash function
 * disperses the elements properly among the buckets.  Iteration over
 * collection views requires time proportional to the "capacity" of the
 * <tt>HashMap</tt> instance (the number of buckets) plus its size (the number
 * of key-value mappings).  Thus, it's very important not to set the initial
 * capacity too high (or the load factor too low) if iteration performance is
 * important.
 *
 * <p>An instance of <tt>HashMap</tt> has two parameters that affect its
 * performance: <i>initial capacity</i> and <i>load factor</i>.  The
 * <i>capacity</i> is the number of buckets in the hash table, and the initial
 * capacity is simply the capacity at the time the hash table is created.  The
 * <i>load factor</i> is a measure of how full the hash table is allowed to
 * get before its capacity is automatically increased.  When the number of
 * entries in the hash table exceeds the product of the load factor and the
 * current capacity, the hash table is <i>rehashed</i> (that is, internal data
 * structures are rebuilt) so that the hash table has approximately twice the
 * number of buckets.
 *
 * <p>As a general rule, the default load factor (.75) offers a good tradeoff
 * between time and space costs.  Higher values decrease the space overhead
 * but increase the lookup cost (reflected in most of the operations of the
 * <tt>HashMap</tt> class, including <tt>get</tt> and <tt>put</tt>).  The
 * expected number of entries in the map and its load factor should be taken
 * into account when setting its initial capacity, so as to minimize the
 * number of rehash operations.  If the initial capacity is greater
 * than the maximum number of entries divided by the load factor, no
 * rehash operations will ever occur.
 *
 * <p>If many mappings are to be stored in a <tt>HashMap</tt> instance,
 * creating it with a sufficiently large capacity will allow the mappings to
 * be stored more efficiently than letting it perform automatic rehashing as
 * needed to grow the table.
 *
 * <p><strong>Note that this implementation is not synchronized.</strong>
 * If multiple threads access a hash map concurrently, and at least one of
 * the threads modifies the map structurally, it <i>must</i> be
 * synchronized externally.  (A structural modification is any operation
 * that adds or deletes one or more mappings; merely changing the value
 * associated with a key that an instance already contains is not a
 * structural modification.)  This is typically accomplished by
 * synchronizing on some object that naturally encapsulates the map.
 *
 * If no such object exists, the map should be "wrapped" using the
 * {@link Collections#synchronizedMap Collections.synchronizedMap}
 * method.  This is best done at creation time, to prevent accidental
 * unsynchronized access to the map:<pre>
 *   Map m = Collections.synchronizedMap(new HashMap(...));</pre>
 *
 * <p>The iterators returned by all of this class's "collection view methods"
 * are <i>fail-fast</i>: if the map is structurally modified at any time after
 * the iterator is created, in any way except through the iterator's own
 * <tt>remove</tt> method, the iterator will throw a
 * {@link ConcurrentModificationException}.  Thus, in the face of concurrent
 * modification, the iterator fails quickly and cleanly, rather than risking
 * arbitrary, non-deterministic behavior at an undetermined time in the
 * future.
 *
 * <p>Note that the fail-fast behavior of an iterator cannot be guaranteed
 * as it is, generally speaking, impossible to make any hard guarantees in the
 * presence of unsynchronized concurrent modification.  Fail-fast iterators
 * throw <tt>ConcurrentModificationException</tt> on a best-effort basis.
 * Therefore, it would be wrong to write a program that depended on this
 * exception for its correctness: <i>the fail-fast behavior of iterators
 * should be used only to detect bugs.</i>
 *
 * <p>This class is a member of the
 * <a href="{@docRoot}/../technotes/guides/collections/index.html">
 * Java Collections Framework</a>.
 *
 * @param <K> the type of keys maintained by this map
 * @param <V> the type of mapped values
 *
 * @author  Doug Lea
 * @author  Josh Bloch
 * @author  Arthur van Hoff
 * @author  Neal Gafter
 * @see     Object#hashCode()
 * @see     Collection
 * @see     Map
 * @see     TreeMap
 * @see     Hashtable
 * @since   1.2
 */
public class HashMap<K,V>
    extends AbstractMap<K,V>
    implements Map<K,V>, Cloneable, Serializable

HashMap. Fields

    // default capacity
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
    // maximum capacity
    static final int MAXIMUM_CAPACITY = 1 << 30;
    // default load factor
    static final float DEFAULT_LOAD_FACTOR = 0.75f;
    // empty table
    static final Entry<?,?>[] EMPTY_TABLE = {};
    // the table that stores the data
    transient Entry<K,V>[] table = (Entry<K,V>[]) EMPTY_TABLE;
    // number of entries in the map
    transient int size;
    // resize threshold
    int threshold;
    // load factor
    final float loadFactor;
    // modification counter
    transient int modCount;
    // default alternative-hashing threshold
    static final int ALTERNATIVE_HASHING_THRESHOLD_DEFAULT = Integer.MAX_VALUE;
    // a random seed mixed into key hashCodes, used to reduce hash collisions
    transient int hashSeed = 0;

HashMap. HashMap()

public HashMap(int initialCapacity, float loadFactor) {
    // validate initialCapacity, initialize loadFactor and threshold, then init()
    if (initialCapacity < 0)
        throw new IllegalArgumentException("Illegal initial capacity: " +
                                           initialCapacity);
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    if (loadFactor <= 0 || Float.isNaN(loadFactor))
        throw new IllegalArgumentException("Illegal load factor: " +
                                           loadFactor);
    this.loadFactor = loadFactor;
    threshold = initialCapacity;
    init();
}
public HashMap(int initialCapacity) {
    this(initialCapacity, DEFAULT_LOAD_FACTOR);
}
public HashMap() {
    this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR);
}
public HashMap(Map<? extends K, ? extends V> m) {
    // initialize the table, then copy m into this map
    this(Math.max((int) (m.size() / DEFAULT_LOAD_FACTOR) + 1,
                  DEFAULT_INITIAL_CAPACITY), DEFAULT_LOAD_FACTOR);
    inflateTable(threshold);
    putAllForCreate(m);
}

HashMap.put(K key, V value)

public V put(K key, V value) {
    // if the table is still EMPTY_TABLE, initialize it and the related fields
    // if key == null, delegate to putForNullKey
    // compute the table index from the key's hash
    // check whether table[index] already holds a value for this key
    //   [keys are considered equal when the hashes match and (key.equals(e.key) or the references are identical)]
    //   if it exists, replace the old value and run the post-update callback
    //   if not, add a new entry
    if (table == EMPTY_TABLE) {
        inflateTable(threshold);
    }
    if (key == null)
        return putForNullKey(value);
    int hash = hash(key);
    int i = indexFor(hash, table.length);
    for (Entry<K,V> e = table[i]; e != null; e = e.next) {
        Object k;
        if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
            V oldValue = e.value;
            e.value = value;
            e.recordAccess(this);
            return oldValue;
        }
    }
    modCount++;
    addEntry(hash, key, value, i);
    return null;
}

HashMap. inflateTable(int toSize)

private void inflateTable(int toSize) {
    // Find a power of 2 >= toSize
    // compute the initial capacity
    // compute the threshold and allocate the table
    // initialize hashSeed if necessary
    int capacity = roundUpToPowerOf2(toSize);
    threshold = (int) Math.min(capacity * loadFactor, MAXIMUM_CAPACITY + 1);
    table = new Entry[capacity];
    initHashSeedAsNeeded(capacity);
}

HashMap. roundUpToPowerOf2(int number)

private static int roundUpToPowerOf2(int number) {
    // if number >= MAXIMUM_CAPACITY, the result is MAXIMUM_CAPACITY
    // otherwise, if rounded is 0, return 1
    //   otherwise, if number's binary representation has more than one 1 bit, return the smallest power of two greater than number
    //   otherwise number is already a power of two, so return rounded
    // assert number >= 0 : "number must be non-negative";
    int rounded = number >= MAXIMUM_CAPACITY
            ? MAXIMUM_CAPACITY
            : (rounded = Integer.highestOneBit(number)) != 0
                ? (Integer.bitCount(number) > 1) ? rounded << 1 : rounded
                : 1;
    // rounded is the smallest power of two >= number
    return rounded;
}

HashMap. putForNullKey(V value)

private V putForNullKey(V value) {
    // look for an existing null key in table[0]
    //   if found, replace the old value
    //   if not, add a new entry
    for (Entry<K,V> e = table[0]; e != null; e = e.next) {
        if (e.key == null) {
            V oldValue = e.value;
            e.value = value;
            e.recordAccess(this);
            return oldValue;
        }
    }
    modCount++;
    addEntry(0, null, value, 0);
    return null;
}

HashMap$Entry.recordAccess(HashMap<K,V> m)

/**
 * This method is invoked whenever the value in an entry is
 * overwritten by an invocation of put(k,v) for a key k that's already
 * in the HashMap.
 */
// does nothing by default; LinkedHashMap overrides it [when iterating in access order, the just-accessed entry is moved to the tail of the list]
void recordAccess(HashMap<K,V> m) {
}

HashMap. addEntry(int hash, K key, V value, int bucketIndex)

void addEntry(int hash, K key, V value, int bucketIndex) {
    // if size has reached the threshold and table[bucketIndex] != null, resize
    //   the default growth doubles the capacity
    // then add the key-value pair to the map
    if ((size >= threshold) && (null != table[bucketIndex])) {
        resize(2 * table.length);
        hash = (null != key) ? hash(key) : 0;
        bucketIndex = indexFor(hash, table.length);
    }
    createEntry(hash, key, value, bucketIndex);
}

HashMap. createEntry(int hash, K key, V value, int bucketIndex)

void createEntry(int hash, K key, V value, int bucketIndex) {
    // take the current head of table[bucketIndex] (a linked list)
    // create a new entry whose next points to it, and make that entry the new table[bucketIndex]
    // update size
    Entry<K,V> e = table[bucketIndex];
    table[bucketIndex] = new Entry<>(hash, key, value, e);
    size++;
}

HashMap. resize(int newCapacity)

void resize(int newCapacity) {
    // if the table has already reached MAXIMUM_CAPACITY, update threshold and return
    // allocate a new table
    // move the entries of the old table into the new one
    // update table and recompute the threshold
    Entry[] oldTable = table;
    int oldCapacity = oldTable.length;
    if (oldCapacity == MAXIMUM_CAPACITY) {
        threshold = Integer.MAX_VALUE;
        return;
    }
    Entry[] newTable = new Entry[newCapacity];
    transfer(newTable, initHashSeedAsNeeded(newCapacity));
    table = newTable;
    threshold = (int)Math.min(newCapacity * loadFactor, MAXIMUM_CAPACITY + 1);
}

HashMap. transfer(Entry[] newTable, boolean rehash)

// copies this map's entries into newTable, recomputing each entry's bucket from newTable's length
//   if rehash is true, the hash codes are recomputed as well
//   each entry is relinked onto the head of its new bucket newTable[i]
void transfer(Entry[] newTable, boolean rehash) {
    int newCapacity = newTable.length;
    for (Entry<K,V> e : table) {
        while(null != e) {
            Entry<K,V> next = e.next;
            if (rehash) {
                e.hash = null == e.key ? 0 : hash(e.key);
            }
            int i = indexFor(e.hash, newCapacity);
            e.next = newTable[i];
            newTable[i] = e;
            e = next;
        }
    }
}

HashMap. initHashSeedAsNeeded(int capacity)

final boolean initHashSeedAsNeeded(int capacity) {
    boolean currentAltHashing = hashSeed != 0;
    // XOR: true exactly when the two operands differ
    // if (hashSeed == 0 && useAltHashing) or (hashSeed != 0 && !useAltHashing), recompute hashSeed
    //   if useAltHashing, pick a random seed
    //   otherwise reset the seed to 0
    boolean useAltHashing = sun.misc.VM.isBooted() &&
            (capacity >= Holder.ALTERNATIVE_HASHING_THRESHOLD);
    boolean switching = currentAltHashing ^ useAltHashing;
    if (switching) {
        hashSeed = useAltHashing
            ? sun.misc.Hashing.randomHashSeed(this)
            : 0;
    }
    return switching;
}
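The put path above boils down to: hash the key, pick a bucket, scan the chain, then replace or prepend. A minimal standalone sketch of that mechanism (hypothetical illustration only, not the JDK code: no resizing, no null-key handling, names are mine):

```java
// Minimal sketch of the put/get mechanism above: a fixed-size bucket
// array of singly linked nodes, with new entries prepended to their bucket.
public class TinyChainedMap<K, V> {
    static final int CAPACITY = 16; // power of two, as in HashMap

    static class Node<K, V> {
        final K key; V value; Node<K, V> next;
        Node(K key, V value, Node<K, V> next) {
            this.key = key; this.value = value; this.next = next;
        }
    }

    @SuppressWarnings("unchecked")
    private final Node<K, V>[] table = (Node<K, V>[]) new Node[CAPACITY];

    private int indexFor(K key) {
        return key.hashCode() & (CAPACITY - 1); // hash mod capacity
    }

    // returns the previous value, or null if the key was absent
    public V put(K key, V value) {
        int i = indexFor(key);
        for (Node<K, V> n = table[i]; n != null; n = n.next) {
            if (n.key.equals(key)) {       // existing key: replace the value
                V old = n.value;
                n.value = value;
                return old;
            }
        }
        table[i] = new Node<>(key, value, table[i]); // prepend a new node
        return null;
    }

    public V get(K key) {
        for (Node<K, V> n = table[indexFor(key)]; n != null; n = n.next)
            if (n.key.equals(key))
                return n.value;
        return null;
    }

    public static void main(String[] args) {
        TinyChainedMap<String, Integer> m = new TinyChainedMap<>();
        System.out.println(m.put("a", 1)); // null (new key)
        System.out.println(m.put("a", 2)); // 1 (old value returned)
        System.out.println(m.get("a"));    // 2
        System.out.println(m.get("b"));    // null
    }
}
```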

1 Adding an element

2 Updating an element
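These two cases can be observed directly through put's return value (a small demonstration of my own, not part of the quoted source):

```java
import java.util.HashMap;
import java.util.Map;

// The two cases above: adding a new entry vs. updating an existing one,
// distinguished by what put returns and whether size changes.
public class PutCases {
    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();

        // 1. adding: key absent, a new entry is created, put returns null
        Integer prev = m.put("k", 1);
        System.out.println(prev);     // null
        System.out.println(m.size()); // 1

        // 2. updating: key present, only the value is replaced, old value returned
        prev = m.put("k", 2);
        System.out.println(prev);     // 1
        System.out.println(m.size()); // 1 (size unchanged)
    }
}
```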

HashMap. putAll(Map<? extends K, ? extends V> m)

    public void putAll(Map<? extends K, ? extends V> m) {
        int numKeysToBeAdded = m.size();
        // if m is empty, return immediately
        // if the table has not been initialized, inflate it and the related fields
        // then put every entry of m into this map
        if (numKeysToBeAdded == 0)
            return;
        if (table == EMPTY_TABLE) {
            inflateTable((int) Math.max(numKeysToBeAdded * loadFactor, threshold));
        }
        /*
         * Expand the map if the map if the number of mappings to be added
         * is greater than or equal to threshold.  This is conservative; the
         * obvious condition is (m.size() + size) >= threshold, but this
         * condition could result in a map with twice the appropriate capacity,
         * if the keys to be added overlap with the keys already in this map.
         * By using the conservative calculation, we subject ourself
         * to at most one extra resize.
         */
        // if m.size() exceeds threshold, expand the table as needed
        //   targetCapacity starts as the capacity needed to hold numKeysToBeAdded entries; if the current capacity is smaller, grow directly to the smallest power of two that is at least targetCapacity
        // this pre-sizing prevents an m with many entries from triggering several successive resizes, which would hurt efficiency
        // "m.size() + size" is deliberately not used as the target capacity: this map and m may share many keys, which would leave the table far larger than necessary, and an oversized table makes iteration needlessly slow
        if (numKeysToBeAdded > threshold) {
            int targetCapacity = (int)(numKeysToBeAdded / loadFactor + 1);
            if (targetCapacity > MAXIMUM_CAPACITY)
                targetCapacity = MAXIMUM_CAPACITY;
            int newCapacity = table.length;
            while (newCapacity < targetCapacity)
                newCapacity <<= 1;
            if (newCapacity > table.length)
                resize(newCapacity);
        }
        for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
            put(e.getKey(), e.getValue());
    }


HashMap. get(Object key)

public V get(Object key) {
    // if key == null, delegate to getForNullKey
    // otherwise return null if no entry is found for the key, else the mapped value
    if (key == null)
        return getForNullKey();
    Entry<K,V> entry = getEntry(key);
    return null == entry ? null : entry.getValue();
}

HashMap. getForNullKey()

private V getForNullKey() {
    // if the map is empty, return null
    // entries with a null key live in table[0]; scan that bucket for one
    if (size == 0) {
        return null;
    }
    for (Entry<K,V> e = table[0]; e != null; e = e.next) {
        if (e.key == null)
            return e.value;
    }
    return null;
}

HashMap. getEntry(Object key)

final Entry<K,V> getEntry(Object key) {
    // if the map is empty, return null
    // compute the key's hash
    // scan the bucket at table[hashCode % capacity]
    //   keys are considered equal when the hashes match and (key.equals(e.key) or the references are identical)
    if (size == 0) {
        return null;
    }
    int hash = (key == null) ? 0 : hash(key);
    for (Entry<K,V> e = table[indexFor(hash, table.length)];
         e != null;
         e = e.next) {
        Object k;
        if (e.hash == hash &&
            ((k = e.key) == key || (key != null && key.equals(k))))
            return e;
    }
    return null;
}
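Two get-related details worth seeing in action (a demonstration of my own, not part of the quoted source): a null key is legal and is served by getForNullKey from bucket 0, and a null return from get is ambiguous, since it can mean either "key absent" or "key mapped to null"; containsKey distinguishes the two:

```java
import java.util.HashMap;
import java.util.Map;

// get() behavior: null keys are supported, and get() == null does not
// by itself mean the key is absent.
public class GetDemo {
    public static void main(String[] args) {
        Map<String, String> m = new HashMap<>();
        m.put(null, "nullKeyValue"); // handled by putForNullKey, bucket 0
        m.put("x", null);            // a present key mapped to null

        System.out.println(m.get(null));              // nullKeyValue
        System.out.println(m.get("x"));               // null (mapped to null)
        System.out.println(m.get("missing"));         // null (absent)
        System.out.println(m.containsKey("x"));       // true
        System.out.println(m.containsKey("missing")); // false
    }
}
```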


HashMap. containsKey(Object key)

public boolean containsKey(Object key) {
    // the key is considered present iff an entry is found for it
    return getEntry(key) != null;
}

HashMap. containsValue(Object value)

public boolean containsValue(Object value) {
    // null values are handled by containsNullValue
    // otherwise walk every bucket and test each entry's value with equals
    if (value == null)
        return containsNullValue();
    Entry[] tab = table;
    for (int i = 0; i < tab.length ; i++)
        for (Entry e = tab[i] ; e != null ; e = e.next)
            if (value.equals(e.value))
                return true;
    return false;
}

HashMap. containsNullValue()

private boolean containsNullValue() {
    Entry[] tab = table;
    // walk every bucket looking for an entry whose value is null
    for (int i = 0; i < tab.length ; i++)
        for (Entry e = tab[i] ; e != null ; e = e.next)
            if (e.value == null)
                return true;
    return false;
}

HashMap. remove(Object key)

public V remove(Object key) {
    // remove the mapping for the given key and return its value
    Entry<K,V> e = removeEntryForKey(key);
    return (e == null ? null : e.value);
}

HashMap. removeEntryForKey(Object key)

final Entry<K,V> removeEntryForKey(Object key) {
    // if the map is empty, return null
    // compute the bucket index from the key's hash
    // scan the bucket table[i] for the key; if found, unlink the entry and run the post-removal callback
    //   keys are considered equal when the hashes match and (key.equals(e.key) or the references are identical)
    // if not found, return null [e ends up null at the end of the list]
    if (size == 0) {
        return null;
    }
    int hash = (key == null) ? 0 : hash(key);
    int i = indexFor(hash, table.length);
    Entry<K,V> prev = table[i];
    Entry<K,V> e = prev;
    while (e != null) {
        Entry<K,V> next = e.next;
        Object k;
        if (e.hash == hash &&
            ((k = e.key) == key || (key != null && key.equals(k)))) {
            modCount++;
            size--;
            // standard linked-list removal
            // the head element is handled differently from the rest [the list has no dummy head node]
            if (prev == e)
                table[i] = next;
            else
                prev.next = next;
            e.recordRemoval(this);
            return e;
        }
        prev = e;
        e = next;
    }
    return e;
}

HashMap$Entry. recordRemoval(HashMap<K,V> m)

void recordRemoval(HashMap<K,V> m) {
    // hook invoked after an entry is removed; does nothing by default
    // LinkedHashMap uses it to update the before/after references of the neighboring entries
}


HashMap. hash(Object k)

    final int hash(Object k) {
        int h = hashSeed;
        // Strings get a dedicated hash when alternative hashing is active
        // otherwise mix the key's hashCode with the seed and spread its bits
        if (0 != h && k instanceof String) {
            return sun.misc.Hashing.stringHash32((String) k);
        }
        h ^= k.hashCode();
        // This function ensures that hashCodes that differ only by
        // constant multiples at each bit position have a bounded
        // number of collisions (approximately 8 at default load factor).
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }
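The effect of the bit-spreading step can be seen by feeding in two hashCodes that differ only in their high bits (a sketch transcribing the shifts above, with hashSeed taken as 0; the class and method names are mine): without spreading, both land in bucket 0 of a 16-bucket table, because indexFor keeps only the low bits; after spreading, they separate:

```java
// Transcribes the spreading step above (with hashSeed assumed 0) to show
// why it exists: hashCodes differing only in high bits would otherwise
// all collide in small tables.
public class HashSpreadDemo {
    static int spread(int hashCode) {
        int h = 0;            // hashSeed assumed to be 0
        h ^= hashCode;
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        int a = 0x10000, b = 0x20000; // differ only above bit 15
        // raw hashCodes: both fall in the same bucket of a 16-slot table
        System.out.println(indexFor(a, 16));         // 0
        System.out.println(indexFor(b, 16));         // 0
        // spread hashes: the high bits now influence the low bits
        System.out.println(indexFor(spread(a), 16)); // 1
        System.out.println(indexFor(spread(b), 16)); // 2
    }
}
```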

HashMap. indexFor(int h, int length)

    static int indexFor(int h, int length) {
        // assert Integer.bitCount(length) == 1 : "length must be a non-zero power of 2";
        // map the hash code to its bucket index
        // equivalent to the remainder of h modulo length, since length is a power of two
        return h & (length-1);
    }
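Why the mask works (a small check of my own, not part of the quoted source): for a power-of-two length, `h & (length - 1)` equals the non-negative remainder of h mod length, and, unlike Java's `%` operator, it stays non-negative even for negative h:

```java
// h & (length - 1) == floorMod(h, length) whenever length is a power of
// two, which is why indexFor never needs a division.
public class IndexForDemo {
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        int length = 16;
        System.out.println(indexFor(37, length));      // 5 (37 % 16)
        System.out.println(indexFor(-1, length));      // 15 (low 4 bits all set)
        System.out.println(Math.floorMod(-1, length)); // 15, matches the mask
        System.out.println(-1 % length);               // -1, which % alone would give
    }
}
```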

HashMap.size()

    public int size() {
        return size;
    }

HashMap.isEmpty()

    public boolean isEmpty() {
        return size == 0;
    }

HashMap.clear()

public void clear() {
    // null out every slot of the table [each list head]
    // update modCount and size
    modCount++;
    Arrays.fill(table, null);
    size = 0;
}

Arrays.fill(Object[] a, Object val)

public static void fill(Object[] a, Object val) {
    // set every element of a to val
    for (int i = 0, len = a.length; i < len; i++)
        a[i] = val;
}

HashMap. capacity()

    int capacity() {
        return table.length;
    }

HashMap. loadFactor()

    float loadFactor() {
        return loadFactor;
    }

HashMap.clone()

public Object clone() {
    // super.clone()
    // allocate storage for the cloned map's data
    // copy all entries of this map into result
    HashMap<K,V> result = null;
    try {
        result = (HashMap<K,V>)super.clone();
    } catch (CloneNotSupportedException e) {
        // assert false;
    }
    if (result.table != EMPTY_TABLE) {
        result.inflateTable(Math.min(
            (int) Math.min(
                size * Math.min(1 / loadFactor, 4.0f),
                // we have limits...
                HashMap.MAXIMUM_CAPACITY),
           table.length));
    }
    result.entrySet = null;
    result.modCount = 0;
    result.size = 0;
    result.init();
    result.putAllForCreate(this);
    return result;
}

HashMap.init()

void init() {
    // does nothing by default
    // LinkedHashMap overrides it to create a dummy header entry that links the map's entries in order
}

HashMap. putAllForCreate(Map<? extends K, ? extends V> m)

private void putAllForCreate(Map<? extends K, ? extends V> m) {
    // put every entry of m into this map
    for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
        putForCreate(e.getKey(), e.getValue());
}

HashMap. putForCreate(K key, V value)

private void putForCreate(K key, V value) {
    // compute the bucket index for the key
    //   if the map already contains the key, replace the old value
    // capacity was ensured beforehand, so addEntry is not used here
    // otherwise add a new key-value pair
    int hash = null == key ? 0 : hash(key);
    int i = indexFor(hash, table.length);
    /**
     * Look for preexisting entry for key.  This will never happen for
     * clone or deserialize.  It will only happen for construction if the
     * input Map is a sorted map whose ordering is inconsistent w/ equals.
     */
    for (Entry<K,V> e = table[i]; e != null; e = e.next) {
        Object k;
        if (e.hash == hash &&
            ((k = e.key) == key || (key != null && key.equals(k)))) {
            e.value = value;
            return;
        }
    }
    createEntry(hash, key, value, i);
}

Entry[InnerClass]

static class Entry<K,V> implements Map.Entry<K,V> {
    // key, value, next, hash
    final K key;
    V value;
    Entry<K,V> next;
    int hash;

    /**
     * Creates new entry.
     */
    Entry(int h, K k, V v, Entry<K,V> n) {
        value = v;
        next = n;
        key = k;
        hash = h;
    }

    public final K getKey() {
        return key;
    }

    public final V getValue() {
        return value;
    }

    // setValue returns the old value
    public final V setValue(V newValue) {
        V oldValue = value;
        value = newValue;
        return oldValue;
    }

    // equals: used, it seems, when removing an entry given an equal entry
    public final boolean equals(Object o) {
        // true when key == o.key or key.equals(o.key)
        //   and value == o.value or value.equals(o.value)
        if (!(o instanceof Map.Entry))
            return false;
        Map.Entry e = (Map.Entry)o;
        Object k1 = getKey();
        Object k2 = e.getKey();
        if (k1 == k2 || (k1 != null && k1.equals(k2))) {
            Object v1 = getValue();
            Object v2 = e.getValue();
            if (v1 == v2 || (v1 != null && v1.equals(v2)))
                return true;
        }
        return false;
    }

    public final int hashCode() {
        // Objects.hashCode(obj)
        return Objects.hashCode(getKey()) ^ Objects.hashCode(getValue());
    }

    public final String toString() {
        return getKey() + "=" + getValue();
    }

    /**
     * This method is invoked whenever the value in an entry is
     * overwritten by an invocation of put(k,v) for a key k that's already
     * in the HashMap.
     */
    void recordAccess(HashMap<K,V> m) {
    }

    /**
     * This method is invoked whenever the entry is
     * removed from the table.
     */
    void recordRemoval(HashMap<K,V> m) {
    }
}

HashIterator[InnerClass]

private abstract class HashIterator<E> implements Iterator<E> {
    // next: the next entry to return; expectedModCount: the map's modCount when the iterator was created
    // index: the current bucket index; current: the entry returned by the last next()
    Entry<K,V> next;        // next entry to return
    int expectedModCount;   // For fast-fail
    int index;              // current slot
    Entry<K,V> current;     // current entry

    HashIterator() {
        expectedModCount = modCount;
        if (size > 0) { // advance to first entry
            Entry[] t = table;
            while (index < t.length && (next = t[index++]) == null)
                ;
        }
    }

    public final boolean hasNext() {
        return next != null;
    }

    final Entry<K,V> nextEntry() {
        // concurrent-modification check, then element check
        // if e.next == null, i.e. current is the last element of table[index], advance next to the first entry of a later bucket
        // return the current element
        if (modCount != expectedModCount)
            throw new ConcurrentModificationException();
        Entry<K,V> e = next;
        if (e == null)
            throw new NoSuchElementException();
        if ((next = e.next) == null) {
            Entry[] t = table;
            while (index < t.length && (next = t[index++]) == null)
                ;
        }
        current = e;
        return e;
    }

    public void remove() {
        // state check, then concurrent-modification check
        // delete the current entry
        if (current == null)
            throw new IllegalStateException();
        if (modCount != expectedModCount)
            throw new ConcurrentModificationException();
        Object k = current.key;
        current = null;
        HashMap.this.removeEntryForKey(k);
        expectedModCount = modCount;
    }
}

HashMap. removeEntryForKey(Object key)

final Entry<K,V> removeEntryForKey(Object key) {
    // if the map is empty, return null
    // compute the bucket index from the key's hash
    // scan the bucket table[i] for the key; if found, unlink the entry
    //   keys are considered equal when the hashes match and (key.equals(e.key) or the references are identical)
    // if not found, return null [e ends up null at the end of the list]
    if (size == 0) {
        return null;
    }
    int hash = (key == null) ? 0 : hash(key);
    int i = indexFor(hash, table.length);
    Entry<K,V> prev = table[i];
    Entry<K,V> e = prev;
    while (e != null) {
        Entry<K,V> next = e.next;
        Object k;
        if (e.hash == hash &&
            ((k = e.key) == key || (key != null && key.equals(k)))) {
            modCount++;
            size--;
            // standard linked-list removal
            // the head element is handled differently from the rest [the list has no dummy head node]
            if (prev == e)
                table[i] = next;
            else
                prev.next = next;
            e.recordRemoval(this);
            return e;
        }
        prev = e;
        e = next;
    }
    return e;
}
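Because HashIterator.remove() resets expectedModCount to modCount after delegating to removeEntryForKey, removing through the iterator is the one structural modification that does not trip the fail-fast check (a demonstration of my own, not part of the quoted source):

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Removing during iteration via the iterator's own remove() is safe:
// expectedModCount is re-synced with modCount after each removal.
public class IteratorRemoveDemo {
    // removes every entry with an odd value; returns the remaining size
    static int removeOddValues(Map<String, Integer> m) {
        for (Iterator<Map.Entry<String, Integer>> it = m.entrySet().iterator(); it.hasNext(); ) {
            if (it.next().getValue() % 2 == 1)
                it.remove(); // no ConcurrentModificationException
        }
        return m.size();
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        m.put("a", 1);
        m.put("b", 2);
        m.put("c", 3);
        System.out.println(removeOddValues(m)); // 1
        System.out.println(m.containsKey("b")); // true
    }
}
```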


HashMap. newEntryIterator()

    Iterator<Map.Entry<K,V>> newEntryIterator() {
        return new EntryIterator();
    }

EntryIterator[InnerClass]

    private final class EntryIterator extends HashIterator<Map.Entry<K,V>> {
        public Map.Entry<K,V> next() {
            return nextEntry();
        }
    }

HashMap. newKeyIterator()

    Iterator<K> newKeyIterator() {
        return new KeyIterator();
    }

KeyIterator[InnerClass]

    private final class KeyIterator extends HashIterator<K> {
        public K next() {
            // overrides next
            return nextEntry().getKey();
        }
    }

HashMap. newValueIterator()

    Iterator<V> newValueIterator() {
        return new ValueIterator();
    }

ValueIterator[InnerClass]

    private final class ValueIterator extends HashIterator<V> {
        // overrides next
        public V next() {
            return nextEntry().value;
        }
    }

HashMap. entrySet()

    public Set<Map.Entry<K,V>> entrySet() {
        return entrySet0();
    }

HashMap. entrySet0()

    private Set<Map.Entry<K,V>> entrySet0() {
        // return the entrySet, creating it if it does not exist yet
        Set<Map.Entry<K,V>> es = entrySet;
        return es != null ? es : (entrySet = new EntrySet());
    }

    private transient Set<Map.Entry<K,V>> entrySet = null;

EntrySet[InnerClass]

private final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
    // operations mostly delegate to the enclosing map
    public Iterator<Map.Entry<K,V>> iterator() {
        // newEntryIterator
        return newEntryIterator();
    }
    public boolean contains(Object o) {
        if (!(o instanceof Map.Entry))
            return false;
        Map.Entry<K,V> e = (Map.Entry<K,V>) o;
        Entry<K,V> candidate = getEntry(e.getKey());
        return candidate != null && candidate.equals(e);
    }
    public boolean remove(Object o) {
        return removeMapping(o) != null;
    }
    public int size() {
        return size;
    }
    public void clear() {
        HashMap.this.clear();
    }
}

HashMap.keySet()

    public Set<K> keySet() {
        // return the keySet, creating it if it does not exist yet
        Set<K> ks = keySet;
        return (ks != null ? ks : (keySet = new KeySet()));
    }

    transient volatile Set<K> keySet = null;

KeySet[InnerClass]

private final class KeySet extends AbstractSet<K> {
    // operations mostly delegate to the enclosing map
    public Iterator<K> iterator() {
        // newKeyIterator
        return newKeyIterator();
    }
    public int size() {
        return size;
    }
    public boolean contains(Object o) {
        return containsKey(o);
    }
    public boolean remove(Object o) {
        return HashMap.this.removeEntryForKey(o) != null;
    }
    public void clear() {
        HashMap.this.clear();
    }
}

HashMap. values ()

    public Collection<V> values() {
        // return values, creating it if it does not exist yet
        Collection<V> vs = values;
        return (vs != null ? vs : (values = new Values()));
    }

    transient volatile Collection<V> values = null;

Values[InnerClass]

private final class Values extends AbstractCollection<V> {
    // operations mostly delegate to the enclosing map
    public Iterator<V> iterator() {
        // newValueIterator
        return newValueIterator();
    }
    public int size() {
        return size;
    }
    public boolean contains(Object o) {
        return containsValue(o);
    }
    public void clear() {
        HashMap.this.clear();
    }
}

->
end

The main difficulty lies in understanding how this data structure stores and retrieves its entries. Once that is clear, the rest follows: why lookups are fast, and how entries are redistributed (rehashed) after a resize.

java.util.Hashtable vs HashMap: the former's public methods [and the delegated methods that access state shared across threads] all carry the synchronized keyword [thread-safe].
Another difference is that Hashtable **can store neither null keys nor null values** [put throws NullPointerException].
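The null-key difference can be checked directly (a demonstration of my own, not part of the quoted source):

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

// HashMap accepts a null key (stored in bucket 0 by putForNullKey),
// while Hashtable.put throws NullPointerException for it.
public class NullKeyDemo {
    static boolean hashtableRejectsNullKey() {
        Map<String, String> t = new Hashtable<>();
        try {
            t.put(null, "v");
            return false;
        } catch (NullPointerException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        Map<String, String> m = new HashMap<>();
        m.put(null, "v"); // fine: null keys are permitted
        System.out.println(m.get(null));               // v
        System.out.println(hashtableRejectsNullKey()); // true
    }
}
```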

java.util.HashSet vs HashMap: the former is built on the latter. HashSet maintains a HashMap instance, and in every Entry<key, value> of that HashMap the value is the same PRESENT object [new Object()] created when HashSet.class is initialized.
The set's elements are stored as the keys of that HashMap [the key uniqueness of HashMap guarantees the uniqueness of HashSet's elements].
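The idea can be sketched in a few lines (a hypothetical minimal version of my own, not the real java.util.HashSet):

```java
import java.util.HashMap;

// Sketch of HashSet-over-HashMap: every element becomes a key of an
// internal HashMap, all mapped to one shared dummy object; the map's
// key uniqueness provides the set's element uniqueness.
public class TinyHashSet<E> {
    private static final Object PRESENT = new Object(); // shared dummy value
    private final HashMap<E, Object> map = new HashMap<>();

    // returns true if the element was not already present
    public boolean add(E e) {
        return map.put(e, PRESENT) == null;
    }

    public boolean contains(E e) {
        return map.containsKey(e);
    }

    public int size() {
        return map.size();
    }

    public static void main(String[] args) {
        TinyHashSet<String> s = new TinyHashSet<>();
        System.out.println(s.add("x"));      // true (newly added)
        System.out.println(s.add("x"));      // false (duplicate)
        System.out.println(s.contains("x")); // true
        System.out.println(s.size());        // 1
    }
}
```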
