Non-blocking Synchronization and the CAS (Compare-and-Swap) Lock-Free Algorithm


The cost of locks

Locks are the simplest way to handle concurrency, and also the most expensive. Taking a kernel-level lock requires the operating system to perform a context switch: acquiring and releasing locks causes context switches and scheduling delays, and a thread waiting for a lock is suspended until the lock is released. During a context switch, the instructions and data the CPU had previously cached become useless, which costs a great deal of performance. The operating system arbitrating a contended lock is like two sisters quarreling over a toy while a parent decides who gets it: it is slow. User-mode locks avoid these problems, but they are only effective when there is no real contention.

Before JDK 1.5, Java relied on the synchronized keyword for synchronization. It coordinates access to shared state through a consistent locking protocol, ensuring that whichever thread holds the lock on a set of guarded variables accesses them exclusively. If multiple threads try to take the lock at once, the losers are suspended; when a suspended thread resumes, it must wait for other threads to finish their time slices before it can be scheduled again. Suspending and resuming threads carries significant overhead. Locks have other drawbacks as well: while a thread is waiting for a lock, it can do nothing else; if a thread holding a lock is delayed, no thread that needs the lock can make progress; and if a blocked thread has high priority while the lock holder has low priority, priority inversion occurs.

Optimistic vs. pessimistic locking

An exclusive lock is a pessimistic lock, and synchronized is one. It assumes the worst case and executes only after making sure no other thread can interfere, which suspends every other thread that needs the lock until the holder releases it. A more efficient alternative is the optimistic lock: perform the operation without locking, assuming there will be no conflict, and if the operation fails because of a conflict, retry until it succeeds.
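The optimistic pattern described above can be sketched in Java with AtomicInteger; the class and method names here are illustrative, not from the original article:

```java
import java.util.concurrent.atomic.AtomicInteger;

// A minimal sketch of optimistic locking: read the current value, compute a
// new value without holding any lock, and retry if another thread changed
// the value in the meantime.
public class OptimisticCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Doubles the counter optimistically: no lock is held while computing.
    public int doubleValue() {
        while (true) {
            int current = value.get();               // assume no conflict
            int next = current * 2;                  // build new value from old
            if (value.compareAndSet(current, next))  // commit only if unchanged
                return next;                         // success, nobody interfered
            // conflict detected: another thread modified the value; retry
        }
    }

    public void set(int v) { value.set(v); }
    public int get() { return value.get(); }

    public static void main(String[] args) {
        OptimisticCounter c = new OptimisticCounter();
        c.set(21);
        System.out.println(c.doubleValue()); // prints 42
    }
}
```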

The problem with volatile

Compared with locks, volatile variables are a lighter-weight synchronization mechanism: using them involves no context switches or thread scheduling. But volatile has a limitation: it cannot be used to build atomic compound operations, so it is unusable whenever a variable's new value depends on its old one. (See also the post 谈谈volatile.)

volatile only guarantees that a variable's value is visible to all threads; it does not guarantee atomicity. Why? See my other article, 《为什么volatile不能保证原子性而Atomic可以?》 ("Why can't volatile guarantee atomicity while Atomic can?").
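A small, hypothetical demonstration of the difference (this harness is invented for illustration): both counters receive the same number of increments from two threads, but volatile's read-add-write can interleave and lose updates, while AtomicInteger retries with CAS and never does:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class VolatileVsAtomic {
    static volatile int volatileCount = 0;              // visible, but ++ is not atomic
    static final AtomicInteger atomicCount = new AtomicInteger();

    // Two threads each perform the given number of increments on both counters.
    static void race(int perThreadIncrements) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < perThreadIncrements; i++) {
                volatileCount++;                        // read, add, write: can interleave
                atomicCount.incrementAndGet();          // CAS loop: never loses an update
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
    }

    public static void main(String[] args) throws InterruptedException {
        race(100_000);
        // atomicCount is exactly 200000; volatileCount is usually smaller
        System.out.println("volatile: " + volatileCount + ", atomic: " + atomicCount.get());
    }
}
```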

Atomic operations in Java

An atomic operation is one that completes in a single step and cannot be interrupted. Atomic operations are thread-safe in a multithreaded environment with no extra synchronization. In Java, the following operations are atomic:

  • all assignments of primitive types except for long and double
  • all assignments of references
  • all operations of the java.util.concurrent.atomic.Atomic* classes
  • all assignments to volatile longs and doubles

This raises a question: why is assignment to a long not atomic? For example:

long foo = 65465498L;

In fact, Java is allowed to write this long in two steps: first one 32-bit half, then the other, which is not thread-safe. Changing it to the following makes it thread-safe:

private volatile long foo;

This works because the Java Memory Model guarantees that reads and writes of volatile long and double variables are themselves atomic; it does not mean volatile behaves like synchronized in general.

The CAS lock-free algorithm

There are several ways to implement lock-free, non-blocking algorithms; CAS (compare-and-swap) is the best-known one. CAS is a CPU instruction supported by most processor architectures, including IA32 and SPARC. Its semantics are: "I believe V holds the value A; if it does, set V to B, otherwise leave it unchanged and tell me what V actually holds." CAS is an optimistic-locking technique: when several threads use CAS to update the same variable at the same time, exactly one of them succeeds; the others fail, but instead of being suspended they are told they lost the race and may try again. CAS takes three operands: the memory value V, the expected old value A, and the new value B. If and only if A equals V, V is set to B; otherwise nothing happens. A C sketch of CAS:

int compare_and_swap(int* reg, int oldval, int newval) {
    ATOMIC();                      /* pseudocode: begin hardware-atomic region */
    int old_reg_val = *reg;
    if (old_reg_val == oldval)
        *reg = newval;
    END_ATOMIC();                  /* pseudocode: end hardware-atomic region */
    return old_reg_val;
}
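In Java, the same primitive has been exposed since JDK 5 as compareAndSet on the atomic classes; a tiny sketch of the semantics described above:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasSemantics {
    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(5);   // this is "V"
        // "I think V is 5; if so, set it to 7."
        boolean ok = v.compareAndSet(5, 7);       // succeeds
        System.out.println(ok + " " + v.get());   // prints: true 7
        // The expectation is now stale: V is 7, not 5.
        ok = v.compareAndSet(5, 9);               // fails, V is left unchanged
        System.out.println(ok + " " + v.get());   // prints: false 7
    }
}
```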


The basic assumption behind CAS (an optimistic-locking algorithm)

The compare-and-swap loop can be written in pseudocode as:

do {
    back up the old data;
    build the new data from the old data;
} while (!CAS(memory address, backed-up old data, new data));
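This pseudocode maps directly onto Java's atomic classes. A sketch of the same loop (the interest-calculation example is invented for illustration):

```java
import java.util.concurrent.atomic.AtomicLong;

// The do { back up old; build new; } while (!CAS(...)) pattern, shown as an
// atomic "add percent interest" update on a shared balance (illustrative).
public class CasRetry {
    static final AtomicLong balanceCents = new AtomicLong(10_000);

    static long addInterest(long percent) {
        long oldVal, newVal;
        do {
            oldVal = balanceCents.get();                  // back up the old data
            newVal = oldVal + oldVal * percent / 100;     // build new data from old
        } while (!balanceCents.compareAndSet(oldVal, newVal)); // retry on conflict
        return newVal;
    }

    public static void main(String[] args) {
        System.out.println(addInterest(10)); // 10000 + 1000 = 11000
    }
}
```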

[Figure: ConcurrencyCAS, a CPU attempting to update a value with CAS]

(Explanation of the figure: a CPU tries to update a value, but if the value it wants to change is no longer the one it read, the operation fails, because clearly some other operation changed the value first.)

That is: when the comparison finds the two values equal, the shared data has not been modified, so it is replaced with the new value and execution continues; if they differ, the shared data has been modified, so the work done so far is discarded and the operation is retried from the start. It is easy to see that CAS rests on the assumption that the shared data will usually not be modified concurrently, a pattern much like the commit-retry model in databases. When synchronization conflicts are rare, this assumption brings a large performance gain.

The cost of CAS (the CPU cache-miss problem)

As noted above, CAS is a CPU-instruction-level operation, a single atomic step, so it is very fast. CAS also avoids asking the operating system to arbitrate a lock: everything is settled inside the CPU. But is CAS free of overhead? No: there are cache misses. The problem is somewhat involved, and understanding it requires a look at the CPU's hardware architecture:

[Figure: an 8-core system, two cores per die with local caches and an on-die interconnect, four dies joined by a system interconnect to main memory]

The figure shows an 8-core CPU system. Each CPU has its own cache (the high-speed memory inside the CPU), and each die contains an interconnect module that lets its two cores talk to each other. The system interconnect in the middle of the figure lets the four dies communicate and connects them to main memory. Data moves through the system in units of "cache lines", blocks of memory whose size is a power of two, usually between 32 and 256 bytes. When a CPU reads a variable from memory into a register, it must first load the cache line containing that variable into its cache. Likewise, when a CPU stores a register value to memory, it must not only load the containing cache line into its cache but also ensure that no other CPU holds a copy of that line.

For example, if CPU 0 performs a compare-and-swap on a variable whose cache line resides in CPU 7's cache, the following simplified sequence of events occurs:

  • CPU 0 checks its local cache and does not find the cache line.
  • The request is forwarded to the interconnect of CPU 0 and CPU 1, which checks CPU 1's local cache; the line is not there either.
  • The request is forwarded to the system interconnect, which checks the other three dies and learns that the line is held by the die containing CPU 6 and CPU 7.
  • The request is forwarded to the interconnect of CPU 6 and CPU 7, which checks both caches and finds the line in CPU 7's cache.
  • CPU 7 sends the line to its interconnect module and flushes the line from its own cache.
  • The interconnect of CPU 6 and CPU 7 sends the line to the system interconnect.
  • The system interconnect sends the line to the interconnect of CPU 0 and CPU 1.
  • That interconnect delivers the line to CPU 0's cache.
  • CPU 0 can now perform the CAS on the variable in its cache.

That is the cost of moving a cache line between CPUs. A best-case CAS takes about 40 nanoseconds, over 60 clock cycles; "best case" here means the CPU performing the CAS was the last one to operate on the variable, so the corresponding cache line is already in its cache. Similarly, a best-case lock operation (one "round-trip pair": acquiring the lock and then releasing it) takes over 60 nanoseconds, more than 100 clock cycles, where "best case" means the data structure representing the lock is already in the cache of the CPU acquiring and releasing it. The lock operation is more expensive than CAS because it requires two atomic operations. A cache miss costs about 140 nanoseconds, over 200 cycles, and a CAS that must also fetch the variable's old value when storing the new one costs about 300 nanoseconds, over 500 cycles. Consider that: in the time of one such CAS, the CPU could execute 500 ordinary instructions. This shows the limitations of fine-grained locking. (Figures from the book 《深入理解并行编程》.)

Here is a performance comparison of cache misses, CAS, and locks:

[Figure: measured costs of cache misses, CAS, and lock operations]

JVM support for CAS: AtomicInteger, AtomicLong.incrementAndGet()

Before JDK 1.5 there was no way to perform a CAS operation without writing explicit native code. JDK 1.5 introduced low-level support, exposing CAS operations on int, long, and object references, and the JVM compiles them to the most efficient means the underlying hardware provides: on platforms that support CAS, the runtime compiles them to the corresponding machine instruction; on processors without a CAS instruction, the JVM falls back to a spin lock. The CAS solution is therefore tightly tied to platform and compiler (on x86, for instance, the corresponding assembly instruction is lock cmpxchg, or lock cmpxchg8b for a 64-bit exchange; in .NET the equivalent is the Interlocked.CompareExchange function).

The atomic variable classes, the AtomicXXX classes in java.util.concurrent.atomic, all use this low-level JVM support to provide efficient CAS operations on numeric and reference types, and most classes in java.util.concurrent use these atomic variable classes directly or indirectly in their implementations.

The Java 1.6 implementation looks like this (shown here for AtomicInteger.getAndIncrement(); AtomicLong.incrementAndGet() is analogous, operating on a long):

public final int getAndIncrement() {
    for (;;) {
        int current = get();
        int next = current + 1;
        if (compareAndSet(current, next))
            return current;
    }
}

public final boolean compareAndSet(int expect, int update) {
    return unsafe.compareAndSwapInt(this, valueOffset, expect, update);
}

As the source shows, the implementation uses optimistic locking: it calls the CAS primitive in the sun.misc.Unsafe class, implementing lock-free increment with a CPU instruction. That is why AtomicLong.incrementAndGet() is several times faster than an increment guarded by synchronized.


Below is the test code: it shows AtomicLong.incrementAndGet() outperforming synchronized by several times.

[Screenshot: benchmark comparing AtomicLong.incrementAndGet() with a synchronized counter]
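The original test code survives only as a screenshot; a rough, hypothetical benchmark of the same comparison might look like the following (the class and its timings are invented for illustration, and absolute numbers vary by machine and JVM):

```java
import java.util.concurrent.atomic.AtomicLong;

public class CounterBench {
    static long syncCount = 0;
    static final AtomicLong casCount = new AtomicLong();

    static synchronized void syncIncrement() { syncCount++; }

    static void runThreads(int n, int iters, Runnable op) throws InterruptedException {
        Thread[] ts = new Thread[n];
        for (int i = 0; i < n; i++) {
            ts[i] = new Thread(() -> { for (int j = 0; j < iters; j++) op.run(); });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
    }

    public static void main(String[] args) throws InterruptedException {
        final int THREADS = 4, ITERS = 1_000_000;

        long t0 = System.nanoTime();
        runThreads(THREADS, ITERS, () -> casCount.incrementAndGet());
        long casNanos = System.nanoTime() - t0;

        t0 = System.nanoTime();
        runThreads(THREADS, ITERS, CounterBench::syncIncrement);
        long syncNanos = System.nanoTime() - t0;

        // Both counts are exactly THREADS * ITERS; only the timings differ.
        System.out.println("CAS:          " + casNanos / 1_000_000 + " ms, count=" + casCount.get());
        System.out.println("synchronized: " + syncNanos / 1_000_000 + " ms, count=" + syncCount);
    }
}
```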

A CAS example: a non-blocking stack

Here is a CAS example slightly more involved than the non-blocking increment: a non-blocking stack, ConcurrentStack. Its push() and pop() operations are structurally similar to NonblockingCounter, except that the work they do is speculative: they proceed hoping that the underlying assumptions have not been invalidated by "commit" time. push() observes the current top node, builds a new node to put on the stack, and then, if the topmost node has not changed since the initial observation, installs the new node. If the CAS fails, it means another thread has modified the stack, and the process starts over.

import java.util.concurrent.atomic.AtomicReference;

public class ConcurrentStack<E> {
    AtomicReference<Node<E>> head = new AtomicReference<Node<E>>();

    public void push(E item) {
        Node<E> newHead = new Node<E>(item);
        Node<E> oldHead;
        do {
            oldHead = head.get();
            newHead.next = oldHead;
        } while (!head.compareAndSet(oldHead, newHead));
    }

    public E pop() {
        Node<E> oldHead;
        Node<E> newHead;
        do {
            oldHead = head.get();
            if (oldHead == null)
                return null;
            newHead = oldHead.next;
        } while (!head.compareAndSet(oldHead, newHead));
        return oldHead.item;
    }

    static class Node<E> {
        final E item;
        Node<E> next;
        public Node(E item) { this.item = item; }
    }
}


Under light to moderate contention, non-blocking algorithms outperform blocking ones, because most CAS operations succeed on the first attempt, and when contention does occur the cost does not involve thread suspension and context switching, just a few extra loop iterations. An uncontended CAS is much cheaper than an uncontended lock acquisition (this must be true, since an uncontended lock involves a CAS plus additional work), and a contended CAS involves shorter latency than a contended lock acquisition.

Under heavy contention (that is, when many threads continually pound on a single memory location), lock-based algorithms start to offer better throughput than non-blocking ones, because a blocked thread stops contending and patiently waits its turn, avoiding further contention. But contention that heavy is uncommon: most of the time, threads interleave thread-local computation with operations on the shared data, giving other threads a chance to use it.

A third CAS example: a non-blocking linked list

The examples so far (the increment counter and the stack) are very simple non-blocking algorithms; once you have seen CAS used in a loop, imitating them is easy. For more complex data structures, non-blocking algorithms become much harder than these simple examples, because modifying a linked list, tree, or hash table can involve updating multiple pointers. CAS supports an atomic conditional update of a single pointer, but not of two or more. So to build a non-blocking linked list, tree, or hash table, we need a way to update multiple pointers with CAS without ever leaving the data structure in an inconsistent state.

Inserting an element at the tail of a linked list normally involves updating two pointers: the "tail" pointer, which always refers to the last element of the list, and the "next" pointer of the formerly last element, which must point at the newly inserted element. Two pointer updates means two CAS operations, and performing them as separate CASes raises two questions to consider: what happens if the first CAS succeeds and the second fails, and what happens if another thread tries to access the list between the first and second CAS?

The "trick" to building non-blocking algorithms for these non-trivial data structures is to make sure the structure is always in a consistent state, even between the moment a thread begins modifying it and the moment it finishes, and to make sure other threads can tell not only whether the first thread has completed its update or is still in the middle of it, but also what operations remain to complete the update if the first thread goes AWOL. A thread that finds the data structure mid-update can "help" the thread performing the update finish it, and then carry on with its own operation. When the first thread comes back and tries to complete its own update, it finds the work is no longer needed and simply returns, because CAS detects the helping thread's intervention (constructive intervention, in this case).

This "help your neighbor" requirement is necessary to protect the data structure from the failure of any single thread. If a thread that found the structure mid-update simply waited for the other thread to finish, it could wait forever should the other thread fail mid-operation. Even absent failures, this approach performs poorly, because the newly arrived thread must give up the processor, causing a context switch, or spin until its time slice expires (which is worse).

import java.util.concurrent.atomic.AtomicReference;

public class LinkedQueue<E> {
    private static class Node<E> {
        final E item;
        final AtomicReference<Node<E>> next;

        Node(E item, Node<E> next) {
            this.item = item;
            this.next = new AtomicReference<Node<E>>(next);
        }
    }

    private AtomicReference<Node<E>> head
        = new AtomicReference<Node<E>>(new Node<E>(null, null));
    private AtomicReference<Node<E>> tail = head;

    public boolean put(E item) {
        Node<E> newNode = new Node<E>(item, null);
        while (true) {
            Node<E> curTail = tail.get();
            Node<E> residue = curTail.next.get();
            if (curTail == tail.get()) {
                if (residue == null) /* A */ {
                    if (curTail.next.compareAndSet(null, newNode)) /* C */ {
                        tail.compareAndSet(curTail, newNode) /* D */;
                        return true;
                    }
                } else {
                    tail.compareAndSet(curTail, residue) /* B */;
                }
            }
        }
    }
}


See the IBM developerWorks article for the details of the algorithm.

How Java's ConcurrentHashMap is implemented

ConcurrentHashMap, introduced in Java 5, is thread-safe and cleverly designed: it locks at the granularity of buckets, avoiding locking the entire map in put and get. In get especially, at most a single HashEntry is locked, and the performance gain is obvious.
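ConcurrentHashMap also exposes CAS-style primitives of its own, putIfAbsent and replace(key, old, new), which callers can use to build the same retry loops described earlier. A sketch (the MapCounter helper is invented for illustration, not from the article):

```java
import java.util.concurrent.ConcurrentHashMap;

// An atomic per-key counter built on ConcurrentHashMap's conditional-update
// methods: the loop retries exactly like a CAS loop on an atomic variable.
public class MapCounter {
    final ConcurrentHashMap<String, Integer> counts =
        new ConcurrentHashMap<String, Integer>();

    void increment(String key) {
        while (true) {
            Integer old = counts.get(key);
            if (old == null) {
                if (counts.putIfAbsent(key, 1) == null)
                    return;              // we created the entry first
            } else if (counts.replace(key, old, old + 1)) {
                return;                  // replaced only if nobody changed it meanwhile
            }
            // lost the race: retry, just like a failed CAS
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final MapCounter mc = new MapCounter();
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) mc.increment("hits"); };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(mc.counts.get("hits")); // prints 20000
    }
}
```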

[Figure: ConcurrentHashMap segment and bucket structure]

The implementation uses a lock-separation (lock-striping) mechanism; there is a very detailed discussion of it in a forum thread, as well as an analysis of ConcurrentHashMap in terms of the Java memory model. Below is the ConcurrentHashMap source code from JDK 6:

/* * %W% %E% * * Copyright (c) 2007, Oracle and/or its affiliates. All rights reserved. * ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms. */package java.util.concurrent;import java.util.concurrent.locks.*;import java.util.*;import java.io.Serializable;import java.io.IOException;import java.io.ObjectInputStream;import java.io.ObjectOutputStream;/** * A hash table supporting full concurrency of retrievals and * adjustable expected concurrency for updates. This class obeys the * same functional specification as {@link java.util.Hashtable}, and * includes versions of methods corresponding to each method of * <tt>Hashtable</tt>. However, even though all operations are * thread-safe, retrieval operations do <em>not</em> entail locking, * and there is <em>not</em> any support for locking the entire table * in a way that prevents all access.  This class is fully * interoperable with <tt>Hashtable</tt> in programs that rely on its * thread safety but not on its synchronization details. * * <p> Retrieval operations (including <tt>get</tt>) generally do not * block, so may overlap with update operations (including * <tt>put</tt> and <tt>remove</tt>). Retrievals reflect the results * of the most recently <em>completed</em> update operations holding * upon their onset.  For aggregate operations such as <tt>putAll</tt> * and <tt>clear</tt>, concurrent retrievals may reflect insertion or * removal of only some entries.  Similarly, Iterators and * Enumerations return elements reflecting the state of the hash table * at some point at or since the creation of the iterator/enumeration. * They do <em>not</em> throw {@link ConcurrentModificationException}. * However, iterators are designed to be used by only one thread at a time. * * <p> The allowed concurrency among update operations is guided by * the optional <tt>concurrencyLevel</tt> constructor argument * (default <tt>16</tt>), which is used as a hint for internal sizing.  
The * table is internally partitioned to try to permit the indicated * number of concurrent updates without contention. Because placement * in hash tables is essentially random, the actual concurrency will * vary.  Ideally, you should choose a value to accommodate as many * threads as will ever concurrently modify the table. Using a * significantly higher value than you need can waste space and time, * and a significantly lower value can lead to thread contention. But * overestimates and underestimates within an order of magnitude do * not usually have much noticeable impact. A value of one is * appropriate when it is known that only one thread will modify and * all others will only read. Also, resizing this or any other kind of * hash table is a relatively slow operation, so, when possible, it is * a good idea to provide estimates of expected table sizes in * constructors. * * <p>This class and its views and iterators implement all of the * <em>optional</em> methods of the {@link Map} and {@link Iterator} * interfaces. * * <p> Like {@link Hashtable} but unlike {@link HashMap}, this class * does <em>not</em> allow <tt>null</tt> to be used as a key or value. * * <p>This class is a member of the * <a href="{@docRoot}/../technotes/guides/collections/index.html"> * Java Collections Framework</a>. * * @since 1.5 * @author Doug Lea * @param <K> the type of keys maintained by this map * @param <V> the type of mapped values */public class ConcurrentHashMap<K, V> extends AbstractMap<K, V>        implements ConcurrentMap<K, V>, Serializable {    private static final long serialVersionUID = 7249069246763182397L;    /*     * The basic strategy is to subdivide the table among Segments,     * each of which itself is a concurrently readable hash table.     */    /* ---------------- Constants -------------- */    /**     * The default initial capacity for this table,     * used when not otherwise specified in a constructor.     
*/    static final int DEFAULT_INITIAL_CAPACITY = 16;    /**     * The default load factor for this table, used when not     * otherwise specified in a constructor.     */    static final float DEFAULT_LOAD_FACTOR = 0.75f;    /**     * The default concurrency level for this table, used when not     * otherwise specified in a constructor.     */    static final int DEFAULT_CONCURRENCY_LEVEL = 16;    /**     * The maximum capacity, used if a higher value is implicitly     * specified by either of the constructors with arguments.  MUST     * be a power of two <= 1<<30 to ensure that entries are indexable     * using ints.     */    static final int MAXIMUM_CAPACITY = 1 << 30;    /**     * The maximum number of segments to allow; used to bound     * constructor arguments.     */    static final int MAX_SEGMENTS = 1 << 16; // slightly conservative    /**     * Number of unsynchronized retries in size and containsValue     * methods before resorting to locking. This is used to avoid     * unbounded retries if tables undergo continuous modification     * which would make it impossible to obtain an accurate result.     */    static final int RETRIES_BEFORE_LOCK = 2;    /* ---------------- Fields -------------- */    /**     * Mask value for indexing into segments. The upper bits of a     * key's hash code are used to choose the segment.     */    final int segmentMask;    /**     * Shift value for indexing within segments.     */    final int segmentShift;    /**     * The segments, each of which is a specialized hash table     */    final Segment<K,V>[] segments;    transient Set<K> keySet;    transient Set<Map.Entry<K,V>> entrySet;    transient Collection<V> values;    /* ---------------- Small Utilities -------------- */    /**     * Applies a supplemental hash function to a given hashCode, which     * defends against poor quality hash functions.  
This is critical     * because ConcurrentHashMap uses power-of-two length hash tables,     * that otherwise encounter collisions for hashCodes that do not     * differ in lower or upper bits.     */    private static int hash(int h) {        // Spread bits to regularize both segment and index locations,        // using variant of single-word Wang/Jenkins hash.        h += (h <<  15) ^ 0xffffcd7d;        h ^= (h >>> 10);        h += (h <<   3);        h ^= (h >>>  6);        h += (h <<   2) + (h << 14);        return h ^ (h >>> 16);    }    /**     * Returns the segment that should be used for key with given hash     * @param hash the hash code for the key     * @return the segment     */    final Segment<K,V> segmentFor(int hash) {        return segments[(hash >>> segmentShift) & segmentMask];    }    /* ---------------- Inner Classes -------------- */    /**     * ConcurrentHashMap list entry. Note that this is never exported     * out as a user-visible Map.Entry.     *     * Because the value field is volatile, not final, it is legal wrt     * the Java Memory Model for an unsynchronized reader to see null     * instead of initial value when read via a data race.  Although a     * reordering leading to this is not likely to ever actually     * occur, the Segment.readValueUnderLock method is used as a     * backup in case a null (pre-initialized) value is ever seen in     * an unsynchronized access method.     */    static final class HashEntry<K,V> {        final K key;        final int hash;        volatile V value;        final HashEntry<K,V> next;        HashEntry(K key, int hash, HashEntry<K,V> next, V value) {            this.key = key;            this.hash = hash;            this.next = next;            this.value = value;        }@SuppressWarnings("unchecked")static final <K,V> HashEntry<K,V>[] newArray(int i) {    return new HashEntry[i];}    }    /**     * Segments are specialized versions of hash tables.  
This     * subclasses from ReentrantLock opportunistically, just to     * simplify some locking and avoid separate construction.     */    static final class Segment<K,V> extends ReentrantLock implements Serializable {        /*         * Segments maintain a table of entry lists that are ALWAYS         * kept in a consistent state, so can be read without locking.         * Next fields of nodes are immutable (final).  All list         * additions are performed at the front of each bin. This         * makes it easy to check changes, and also fast to traverse.         * When nodes would otherwise be changed, new nodes are         * created to replace them. This works well for hash tables         * since the bin lists tend to be short. (The average length         * is less than two for the default load factor threshold.)         *         * Read operations can thus proceed without locking, but rely         * on selected uses of volatiles to ensure that completed         * write operations performed by other threads are         * noticed. For most purposes, the "count" field, tracking the         * number of elements, serves as that volatile variable         * ensuring visibility.  This is convenient because this field         * needs to be read in many read operations anyway:         *         *   - All (unsynchronized) read operations must first read the         *     "count" field, and should not look at table entries if         *     it is 0.         *         *   - All (synchronized) write operations should write to         *     the "count" field after structurally changing any bin.         *     The operations must not take any action that could even         *     momentarily cause a concurrent read operation to see         *     inconsistent data. This is made easier by the nature of         *     the read operations in Map. 
For example, no operation         *     can reveal that the table has grown but the threshold         *     has not yet been updated, so there are no atomicity         *     requirements for this with respect to reads.         *         * As a guide, all critical volatile reads and writes to the         * count field are marked in code comments.         */        private static final long serialVersionUID = 2249069246763182397L;        /**         * The number of elements in this segment's region.         */        transient volatile int count;        /**         * Number of updates that alter the size of the table. This is         * used during bulk-read methods to make sure they see a         * consistent snapshot: If modCounts change during a traversal         * of segments computing size or checking containsValue, then         * we might have an inconsistent view of state so (usually)         * must retry.         */        transient int modCount;        /**         * The table is rehashed when its size exceeds this threshold.         * (The value of this field is always <tt>(int)(capacity *         * loadFactor)</tt>.)         */        transient int threshold;        /**         * The per-segment table.         */        transient volatile HashEntry<K,V>[] table;        /**         * The load factor for the hash table.  Even though this value         * is same for all segments, it is replicated to avoid needing         * links to outer object.         * @serial         */        final float loadFactor;        Segment(int initialCapacity, float lf) {            loadFactor = lf;            setTable(HashEntry.<K,V>newArray(initialCapacity));        }@SuppressWarnings("unchecked")        static final <K,V> Segment<K,V>[] newArray(int i) {    return new Segment[i];        }        /**         * Sets table to new HashEntry array.         * Call only while holding lock or in constructor.         
*/        void setTable(HashEntry<K,V>[] newTable) {            threshold = (int)(newTable.length * loadFactor);            table = newTable;        }        /**         * Returns properly casted first entry of bin for given hash.         */        HashEntry<K,V> getFirst(int hash) {            HashEntry<K,V>[] tab = table;            return tab[hash & (tab.length - 1)];        }        /**         * Reads value field of an entry under lock. Called if value         * field ever appears to be null. This is possible only if a         * compiler happens to reorder a HashEntry initialization with         * its table assignment, which is legal under memory model         * but is not known to ever occur.         */        V readValueUnderLock(HashEntry<K,V> e) {            lock();            try {                return e.value;            } finally {                unlock();            }        }        /* Specialized implementations of map methods */        V get(Object key, int hash) {            if (count != 0) { // read-volatile                HashEntry<K,V> e = getFirst(hash);                while (e != null) {                    if (e.hash == hash && key.equals(e.key)) {                        V v = e.value;                        if (v != null)                            return v;                        return readValueUnderLock(e); // recheck                    }                    e = e.next;                }            }            return null;        }        boolean containsKey(Object key, int hash) {            if (count != 0) { // read-volatile                HashEntry<K,V> e = getFirst(hash);                while (e != null) {                    if (e.hash == hash && key.equals(e.key))                        return true;                    e = e.next;                }            }            return false;        }        boolean containsValue(Object value) {            if (count != 0) { // read-volatile                HashEntry<K,V>[] tab = table;          
      int len = tab.length;                for (int i = 0 ; i < len; i++) {                    for (HashEntry<K,V> e = tab[i]; e != null; e = e.next) {                        V v = e.value;                        if (v == null) // recheck                            v = readValueUnderLock(e);                        if (value.equals(v))                            return true;                    }                }            }            return false;        }        boolean replace(K key, int hash, V oldValue, V newValue) {            lock();            try {                HashEntry<K,V> e = getFirst(hash);                while (e != null && (e.hash != hash || !key.equals(e.key)))                    e = e.next;                boolean replaced = false;                if (e != null && oldValue.equals(e.value)) {                    replaced = true;                    e.value = newValue;                }                return replaced;            } finally {                unlock();            }        }        V replace(K key, int hash, V newValue) {            lock();            try {                HashEntry<K,V> e = getFirst(hash);                while (e != null && (e.hash != hash || !key.equals(e.key)))                    e = e.next;                V oldValue = null;                if (e != null) {                    oldValue = e.value;                    e.value = newValue;                }                return oldValue;            } finally {                unlock();            }        }        V put(K key, int hash, V value, boolean onlyIfAbsent) {            lock();            try {                int c = count;                if (c++ > threshold) // ensure capacity                    rehash();                HashEntry<K,V>[] tab = table;                int index = hash & (tab.length - 1);                HashEntry<K,V> first = tab[index];                HashEntry<K,V> e = first;                while (e != null && (e.hash != hash || !key.equals(e.key)))       
             e = e.next;                V oldValue;                if (e != null) {                    oldValue = e.value;                    if (!onlyIfAbsent)                        e.value = value;                }                else {                    oldValue = null;                    ++modCount;                    tab[index] = new HashEntry<K,V>(key, hash, first, value);                    count = c; // write-volatile                }                return oldValue;            } finally {                unlock();            }        }        void rehash() {            HashEntry<K,V>[] oldTable = table;            int oldCapacity = oldTable.length;            if (oldCapacity >= MAXIMUM_CAPACITY)                return;            /*             * Reclassify nodes in each list to new Map.  Because we are             * using power-of-two expansion, the elements from each bin             * must either stay at same index, or move with a power of two             * offset. We eliminate unnecessary node creation by catching             * cases where old nodes can be reused because their next             * fields won't change. Statistically, at the default             * threshold, only about one-sixth of them need cloning when             * a table doubles. The nodes they replace will be garbage             * collectable as soon as they are no longer referenced by any             * reader thread that may be in the midst of traversing table             * right now.             */            HashEntry<K,V>[] newTable = HashEntry.newArray(oldCapacity<<1);            threshold = (int)(newTable.length * loadFactor);            int sizeMask = newTable.length - 1;            for (int i = 0; i < oldCapacity ; i++) {                // We need to guarantee that any existing reads of old Map can                //  proceed. So we cannot yet null out each bin.                
HashEntry<K,V> e = oldTable[i];                if (e != null) {                    HashEntry<K,V> next = e.next;                    int idx = e.hash & sizeMask;                    //  Single node on list                    if (next == null)                        newTable[idx] = e;                    else {                        // Reuse trailing consecutive sequence at same slot                        HashEntry<K,V> lastRun = e;                        int lastIdx = idx;                        for (HashEntry<K,V> last = next;                             last != null;                             last = last.next) {                            int k = last.hash & sizeMask;                            if (k != lastIdx) {                                lastIdx = k;                                lastRun = last;                            }                        }                        newTable[lastIdx] = lastRun;                        // Clone all remaining nodes                        for (HashEntry<K,V> p = e; p != lastRun; p = p.next) {                            int k = p.hash & sizeMask;                            HashEntry<K,V> n = newTable[k];                            newTable[k] = new HashEntry<K,V>(p.key, p.hash,                                                             n, p.value);                        }                    }                }            }            table = newTable;        }        /**         * Remove; match on key only if value null, else match both.         
*/        V remove(Object key, int hash, Object value) {            lock();            try {                int c = count - 1;                HashEntry<K,V>[] tab = table;                int index = hash & (tab.length - 1);                HashEntry<K,V> first = tab[index];                HashEntry<K,V> e = first;                while (e != null && (e.hash != hash || !key.equals(e.key)))                    e = e.next;                V oldValue = null;                if (e != null) {                    V v = e.value;                    if (value == null || value.equals(v)) {                        oldValue = v;                        // All entries following removed node can stay                        // in list, but all preceding ones need to be                        // cloned.                        ++modCount;                        HashEntry<K,V> newFirst = e.next;                        for (HashEntry<K,V> p = first; p != e; p = p.next)                            newFirst = new HashEntry<K,V>(p.key, p.hash,                                                          newFirst, p.value);                        tab[index] = newFirst;                        count = c; // write-volatile                    }                }                return oldValue;            } finally {                unlock();            }        }        void clear() {            if (count != 0) {                lock();                try {                    HashEntry<K,V>[] tab = table;                    for (int i = 0; i < tab.length ; i++)                        tab[i] = null;                    ++modCount;                    count = 0; // write-volatile                } finally {                    unlock();                }            }        }    }    /* ---------------- Public operations -------------- */    /**     * Creates a new, empty map with the specified initial     * capacity, load factor and concurrency level.     *     * @param initialCapacity the initial capacity. 
The implementation     * performs internal sizing to accommodate this many elements.     * @param loadFactor  the load factor threshold, used to control resizing.     * Resizing may be performed when the average number of elements per     * bin exceeds this threshold.     * @param concurrencyLevel the estimated number of concurrently     * updating threads. The implementation performs internal sizing     * to try to accommodate this many threads.     * @throws IllegalArgumentException if the initial capacity is     * negative or the load factor or concurrencyLevel are     * nonpositive.     */    public ConcurrentHashMap(int initialCapacity,                             float loadFactor, int concurrencyLevel) {        if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)            throw new IllegalArgumentException();        if (concurrencyLevel > MAX_SEGMENTS)            concurrencyLevel = MAX_SEGMENTS;        // Find power-of-two sizes best matching arguments        int sshift = 0;        int ssize = 1;        while (ssize < concurrencyLevel) {            ++sshift;            ssize <<= 1;        }        segmentShift = 32 - sshift;        segmentMask = ssize - 1;        this.segments = Segment.newArray(ssize);        if (initialCapacity > MAXIMUM_CAPACITY)            initialCapacity = MAXIMUM_CAPACITY;        int c = initialCapacity / ssize;        if (c * ssize < initialCapacity)            ++c;        int cap = 1;        while (cap < c)            cap <<= 1;        for (int i = 0; i < this.segments.length; ++i)            this.segments[i] = new Segment<K,V>(cap, loadFactor);    }    /**     * Creates a new, empty map with the specified initial capacity     * and load factor and with the default concurrencyLevel (16).     *     * @param initialCapacity The implementation performs internal     * sizing to accommodate this many elements.     * @param loadFactor  the load factor threshold, used to control resizing.     
* Resizing may be performed when the average number of elements per     * bin exceeds this threshold.     * @throws IllegalArgumentException if the initial capacity of     * elements is negative or the load factor is nonpositive     *     * @since 1.6     */    public ConcurrentHashMap(int initialCapacity, float loadFactor) {        this(initialCapacity, loadFactor, DEFAULT_CONCURRENCY_LEVEL);    }    /**     * Creates a new, empty map with the specified initial capacity,     * and with default load factor (0.75) and concurrencyLevel (16).     *     * @param initialCapacity the initial capacity. The implementation     * performs internal sizing to accommodate this many elements.     * @throws IllegalArgumentException if the initial capacity of     * elements is negative.     */    public ConcurrentHashMap(int initialCapacity) {        this(initialCapacity, DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);    }    /**     * Creates a new, empty map with a default initial capacity (16),     * load factor (0.75) and concurrencyLevel (16).     */    public ConcurrentHashMap() {        this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);    }    /**     * Creates a new map with the same mappings as the given map.     * The map is created with a capacity of 1.5 times the number     * of mappings in the given map or 16 (whichever is greater),     * and a default load factor (0.75) and concurrencyLevel (16).     *     * @param m the map     */    public ConcurrentHashMap(Map<? extends K, ? extends V> m) {        this(Math.max((int) (m.size() / DEFAULT_LOAD_FACTOR) + 1,                      DEFAULT_INITIAL_CAPACITY),             DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);        putAll(m);    }    /**     * Returns <tt>true</tt> if this map contains no key-value mappings.     
*     * @return <tt>true</tt> if this map contains no key-value mappings     */    public boolean isEmpty() {        final Segment<K,V>[] segments = this.segments;        /*         * We keep track of per-segment modCounts to avoid ABA         * problems in which an element in one segment was added and         * in another removed during traversal, in which case the         * table was never actually empty at any point. Note the         * similar use of modCounts in the size() and containsValue()         * methods, which are the only other methods also susceptible         * to ABA problems.         */        int[] mc = new int[segments.length];        int mcsum = 0;        for (int i = 0; i < segments.length; ++i) {            if (segments[i].count != 0)                return false;            else                mcsum += mc[i] = segments[i].modCount;        }        // If mcsum happens to be zero, then we know we got a snapshot        // before any modifications at all were made.  This is        // probably common enough to bother tracking.        if (mcsum != 0) {            for (int i = 0; i < segments.length; ++i) {                if (segments[i].count != 0 ||                    mc[i] != segments[i].modCount)                    return false;            }        }        return true;    }    /**     * Returns the number of key-value mappings in this map.  If the     * map contains more than <tt>Integer.MAX_VALUE</tt> elements, returns     * <tt>Integer.MAX_VALUE</tt>.     *     * @return the number of key-value mappings in this map     */    public int size() {        final Segment<K,V>[] segments = this.segments;        long sum = 0;        long check = 0;        int[] mc = new int[segments.length];        // Try a few times to get accurate count. On failure due to        // continuous async changes in table, resort to locking.        
for (int k = 0; k < RETRIES_BEFORE_LOCK; ++k) {            check = 0;            sum = 0;            int mcsum = 0;            for (int i = 0; i < segments.length; ++i) {                sum += segments[i].count;                mcsum += mc[i] = segments[i].modCount;            }            if (mcsum != 0) {                for (int i = 0; i < segments.length; ++i) {                    check += segments[i].count;                    if (mc[i] != segments[i].modCount) {                        check = -1; // force retry                        break;                    }                }            }            if (check == sum)                break;        }        if (check != sum) { // Resort to locking all segments            sum = 0;            for (int i = 0; i < segments.length; ++i)                segments[i].lock();            for (int i = 0; i < segments.length; ++i)                sum += segments[i].count;            for (int i = 0; i < segments.length; ++i)                segments[i].unlock();        }        if (sum > Integer.MAX_VALUE)            return Integer.MAX_VALUE;        else            return (int)sum;    }    /**     * Returns the value to which the specified key is mapped,     * or {@code null} if this map contains no mapping for the key.     *     * <p>More formally, if this map contains a mapping from a key     * {@code k} to a value {@code v} such that {@code key.equals(k)},     * then this method returns {@code v}; otherwise it returns     * {@code null}.  (There can be at most one such mapping.)     *     * @throws NullPointerException if the specified key is null     */    public V get(Object key) {        int hash = hash(key.hashCode());        return segmentFor(hash).get(key, hash);    }    /**     * Tests if the specified object is a key in this table.     
*     * @param  key   possible key     * @return <tt>true</tt> if and only if the specified object     *         is a key in this table, as determined by the     *         <tt>equals</tt> method; <tt>false</tt> otherwise.     * @throws NullPointerException if the specified key is null     */    public boolean containsKey(Object key) {        int hash = hash(key.hashCode());        return segmentFor(hash).containsKey(key, hash);    }    /**     * Returns <tt>true</tt> if this map maps one or more keys to the     * specified value. Note: This method requires a full internal     * traversal of the hash table, and so is much slower than     * method <tt>containsKey</tt>.     *     * @param value value whose presence in this map is to be tested     * @return <tt>true</tt> if this map maps one or more keys to the     *         specified value     * @throws NullPointerException if the specified value is null     */    public boolean containsValue(Object value) {        if (value == null)            throw new NullPointerException();        // See explanation of modCount use above        final Segment<K,V>[] segments = this.segments;        int[] mc = new int[segments.length];        // Try a few times without locking        for (int k = 0; k < RETRIES_BEFORE_LOCK; ++k) {            int sum = 0;            int mcsum = 0;            for (int i = 0; i < segments.length; ++i) {                int c = segments[i].count;                mcsum += mc[i] = segments[i].modCount;                if (segments[i].containsValue(value))                    return true;            }            boolean cleanSweep = true;            if (mcsum != 0) {                for (int i = 0; i < segments.length; ++i) {                    int c = segments[i].count;                    if (mc[i] != segments[i].modCount) {                        cleanSweep = false;                        break;                    }                }            }            if (cleanSweep)                return false;        } 
       // Resort to locking all segments        for (int i = 0; i < segments.length; ++i)            segments[i].lock();        boolean found = false;        try {            for (int i = 0; i < segments.length; ++i) {                if (segments[i].containsValue(value)) {                    found = true;                    break;                }            }        } finally {            for (int i = 0; i < segments.length; ++i)                segments[i].unlock();        }        return found;    }    /**     * Legacy method testing if some key maps into the specified value     * in this table.  This method is identical in functionality to     * {@link #containsValue}, and exists solely to ensure     * full compatibility with class {@link java.util.Hashtable},     * which supported this method prior to introduction of the     * Java Collections framework.     * @param  value a value to search for     * @return <tt>true</tt> if and only if some key maps to the     *         <tt>value</tt> argument in this table as     *         determined by the <tt>equals</tt> method;     *         <tt>false</tt> otherwise     * @throws NullPointerException if the specified value is null     */    public boolean contains(Object value) {        return containsValue(value);    }    /**     * Maps the specified key to the specified value in this table.     * Neither the key nor the value can be null.     *     * <p> The value can be retrieved by calling the <tt>get</tt> method     * with a key that is equal to the original key.     
*     * @param key key with which the specified value is to be associated     * @param value value to be associated with the specified key     * @return the previous value associated with <tt>key</tt>, or     *         <tt>null</tt> if there was no mapping for <tt>key</tt>     * @throws NullPointerException if the specified key or value is null     */    public V put(K key, V value) {        if (value == null)            throw new NullPointerException();        int hash = hash(key.hashCode());        return segmentFor(hash).put(key, hash, value, false);    }    /**     * {@inheritDoc}     *     * @return the previous value associated with the specified key,     *         or <tt>null</tt> if there was no mapping for the key     * @throws NullPointerException if the specified key or value is null     */    public V putIfAbsent(K key, V value) {        if (value == null)            throw new NullPointerException();        int hash = hash(key.hashCode());        return segmentFor(hash).put(key, hash, value, true);    }    /**     * Copies all of the mappings from the specified map to this one.     * These mappings replace any mappings that this map had for any of the     * keys currently in the specified map.     *     * @param m mappings to be stored in this map     */    public void putAll(Map<? extends K, ? extends V> m) {        for (Map.Entry<? extends K, ? extends V> e : m.entrySet())            put(e.getKey(), e.getValue());    }    /**     * Removes the key (and its corresponding value) from this map.     * This method does nothing if the key is not in the map.     
*     * @param  key the key that needs to be removed     * @return the previous value associated with <tt>key</tt>, or     *         <tt>null</tt> if there was no mapping for <tt>key</tt>     * @throws NullPointerException if the specified key is null     */    public V remove(Object key) {int hash = hash(key.hashCode());        return segmentFor(hash).remove(key, hash, null);    }    /**     * {@inheritDoc}     *     * @throws NullPointerException if the specified key is null     */    public boolean remove(Object key, Object value) {        int hash = hash(key.hashCode());        if (value == null)            return false;        return segmentFor(hash).remove(key, hash, value) != null;    }    /**     * {@inheritDoc}     *     * @throws NullPointerException if any of the arguments are null     */    public boolean replace(K key, V oldValue, V newValue) {        if (oldValue == null || newValue == null)            throw new NullPointerException();        int hash = hash(key.hashCode());        return segmentFor(hash).replace(key, hash, oldValue, newValue);    }    /**     * {@inheritDoc}     *     * @return the previous value associated with the specified key,     *         or <tt>null</tt> if there was no mapping for the key     * @throws NullPointerException if the specified key or value is null     */    public V replace(K key, V value) {        if (value == null)            throw new NullPointerException();        int hash = hash(key.hashCode());        return segmentFor(hash).replace(key, hash, value);    }    /**     * Removes all of the mappings from this map.     */    public void clear() {        for (int i = 0; i < segments.length; ++i)            segments[i].clear();    }    /**     * Returns a {@link Set} view of the keys contained in this map.     * The set is backed by the map, so changes to the map are     * reflected in the set, and vice-versa.  
The set supports element     * removal, which removes the corresponding mapping from this map,     * via the <tt>Iterator.remove</tt>, <tt>Set.remove</tt>,     * <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt>     * operations.  It does not support the <tt>add</tt> or     * <tt>addAll</tt> operations.     *     * <p>The view's <tt>iterator</tt> is a "weakly consistent" iterator     * that will never throw {@link ConcurrentModificationException},     * and guarantees to traverse elements as they existed upon     * construction of the iterator, and may (but is not guaranteed to)     * reflect any modifications subsequent to construction.     */    public Set<K> keySet() {        Set<K> ks = keySet;        return (ks != null) ? ks : (keySet = new KeySet());    }    /**     * Returns a {@link Collection} view of the values contained in this map.     * The collection is backed by the map, so changes to the map are     * reflected in the collection, and vice-versa.  The collection     * supports element removal, which removes the corresponding     * mapping from this map, via the <tt>Iterator.remove</tt>,     * <tt>Collection.remove</tt>, <tt>removeAll</tt>,     * <tt>retainAll</tt>, and <tt>clear</tt> operations.  It does not     * support the <tt>add</tt> or <tt>addAll</tt> operations.     *     * <p>The view's <tt>iterator</tt> is a "weakly consistent" iterator     * that will never throw {@link ConcurrentModificationException},     * and guarantees to traverse elements as they existed upon     * construction of the iterator, and may (but is not guaranteed to)     * reflect any modifications subsequent to construction.     */    public Collection<V> values() {        Collection<V> vs = values;        return (vs != null) ? vs : (values = new Values());    }    /**     * Returns a {@link Set} view of the mappings contained in this map.     * The set is backed by the map, so changes to the map are     * reflected in the set, and vice-versa.  
The set supports element     * removal, which removes the corresponding mapping from the map,     * via the <tt>Iterator.remove</tt>, <tt>Set.remove</tt>,     * <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt>     * operations.  It does not support the <tt>add</tt> or     * <tt>addAll</tt> operations.     *     * <p>The view's <tt>iterator</tt> is a "weakly consistent" iterator     * that will never throw {@link ConcurrentModificationException},     * and guarantees to traverse elements as they existed upon     * construction of the iterator, and may (but is not guaranteed to)     * reflect any modifications subsequent to construction.     */    public Set<Map.Entry<K,V>> entrySet() {        Set<Map.Entry<K,V>> es = entrySet;        return (es != null) ? es : (entrySet = new EntrySet());    }    /**     * Returns an enumeration of the keys in this table.     *     * @return an enumeration of the keys in this table     * @see #keySet()     */    public Enumeration<K> keys() {        return new KeyIterator();    }    /**     * Returns an enumeration of the values in this table.     
 *
     * @return an enumeration of the values in this table
     * @see #values()
     */
    public Enumeration<V> elements() {
        return new ValueIterator();
    }

    /* ---------------- Iterator Support -------------- */

    abstract class HashIterator {
        int nextSegmentIndex;
        int nextTableIndex;
        HashEntry<K,V>[] currentTable;
        HashEntry<K, V> nextEntry;
        HashEntry<K, V> lastReturned;

        HashIterator() {
            nextSegmentIndex = segments.length - 1;
            nextTableIndex = -1;
            advance();
        }

        public boolean hasMoreElements() { return hasNext(); }

        final void advance() {
            if (nextEntry != null && (nextEntry = nextEntry.next) != null)
                return;

            while (nextTableIndex >= 0) {
                if ( (nextEntry = currentTable[nextTableIndex--]) != null)
                    return;
            }

            while (nextSegmentIndex >= 0) {
                Segment<K,V> seg = segments[nextSegmentIndex--];
                if (seg.count != 0) {
                    currentTable = seg.table;
                    for (int j = currentTable.length - 1; j >= 0; --j) {
                        if ( (nextEntry = currentTable[j]) != null) {
                            nextTableIndex = j - 1;
                            return;
                        }
                    }
                }
            }
        }

        public boolean hasNext() { return nextEntry != null; }

        HashEntry<K,V> nextEntry() {
            if (nextEntry == null)
                throw new NoSuchElementException();
            lastReturned = nextEntry;
            advance();
            return lastReturned;
        }

        public void remove() {
            if (lastReturned == null)
                throw new IllegalStateException();
            ConcurrentHashMap.this.remove(lastReturned.key);
            lastReturned = null;
        }
    }

    final class KeyIterator
        extends HashIterator
        implements Iterator<K>, Enumeration<K>
    {
        public K next()        { return super.nextEntry().key; }
        public K nextElement() { return super.nextEntry().key; }
    }

    final class ValueIterator
        extends HashIterator
        implements Iterator<V>, Enumeration<V>
    {
        public V next()        { return super.nextEntry().value; }
        public V nextElement() { return super.nextEntry().value; }
    }

    /**
     * Custom Entry class used by EntryIterator.next(), that relays
     * setValue changes to the underlying map.
     */
    final class WriteThroughEntry
        extends AbstractMap.SimpleEntry<K,V>
    {
        WriteThroughEntry(K k, V v) {
            super(k,v);
        }

        /**
         * Set our entry's value and write through to the map. The
         * value to return is somewhat arbitrary here. Since a
         * WriteThroughEntry does not necessarily track asynchronous
         * changes, the most recent "previous" value could be
         * different from what we return (or could even have been
         * removed in which case the put will re-establish). We do not
         * and cannot guarantee more.
         */
        public V setValue(V value) {
            if (value == null) throw new NullPointerException();
            V v = super.setValue(value);
            ConcurrentHashMap.this.put(getKey(), value);
            return v;
        }
    }

    final class EntryIterator
        extends HashIterator
        implements Iterator<Entry<K,V>>
    {
        public Map.Entry<K,V> next() {
            HashEntry<K,V> e = super.nextEntry();
            return new WriteThroughEntry(e.key, e.value);
        }
    }

    final class KeySet extends AbstractSet<K> {
        public Iterator<K> iterator() {
            return new KeyIterator();
        }
        public int size() {
            return ConcurrentHashMap.this.size();
        }
        public boolean contains(Object o) {
            return ConcurrentHashMap.this.containsKey(o);
        }
        public boolean remove(Object o) {
            return ConcurrentHashMap.this.remove(o) != null;
        }
        public void clear() {
            ConcurrentHashMap.this.clear();
        }
    }

    final class Values extends AbstractCollection<V> {
        public Iterator<V> iterator() {
            return new ValueIterator();
        }
        public int size() {
            return ConcurrentHashMap.this.size();
        }
        public boolean contains(Object o) {
            return ConcurrentHashMap.this.containsValue(o);
        }
        public void clear() {
            ConcurrentHashMap.this.clear();
        }
    }

    final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
        public Iterator<Map.Entry<K,V>> iterator() {
            return new EntryIterator();
        }
        public boolean contains(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry<?,?> e = (Map.Entry<?,?>)o;
            V v = ConcurrentHashMap.this.get(e.getKey());
            return v != null && v.equals(e.getValue());
        }
        public boolean remove(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry<?,?> e
= (Map.Entry<?,?>)o;            return ConcurrentHashMap.this.remove(e.getKey(), e.getValue());        }        public int size() {            return ConcurrentHashMap.this.size();        }        public void clear() {            ConcurrentHashMap.this.clear();        }    }    /* ---------------- Serialization Support -------------- */    /**     * Save the state of the <tt>ConcurrentHashMap</tt> instance to a     * stream (i.e., serialize it).     * @param s the stream     * @serialData     * the key (Object) and value (Object)     * for each key-value mapping, followed by a null pair.     * The key-value mappings are emitted in no particular order.     */    private void writeObject(java.io.ObjectOutputStream s) throws IOException  {        s.defaultWriteObject();        for (int k = 0; k < segments.length; ++k) {            Segment<K,V> seg = segments[k];            seg.lock();            try {                HashEntry<K,V>[] tab = seg.table;                for (int i = 0; i < tab.length; ++i) {                    for (HashEntry<K,V> e = tab[i]; e != null; e = e.next) {                        s.writeObject(e.key);                        s.writeObject(e.value);                    }                }            } finally {                seg.unlock();            }        }        s.writeObject(null);        s.writeObject(null);    }    /**     * Reconstitute the <tt>ConcurrentHashMap</tt> instance from a     * stream (i.e., deserialize it).     * @param s the stream     */    private void readObject(java.io.ObjectInputStream s)        throws IOException, ClassNotFoundException  {        s.defaultReadObject();        // Initialize each segment to be minimally sized, and let grow.        
for (int i = 0; i < segments.length; ++i) {            segments[i].setTable(new HashEntry[1]);        }        // Read the keys and values, and put the mappings in the table        for (;;) {            K key = (K) s.readObject();            V value = (V) s.readObject();            if (key == null)                break;            put(key, value);        }    }}
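Although the ConcurrentHashMap source above guards each segment with a lock internally, the public API it ends with exposes exactly the atomic compound operations the article is about: putIfAbsent, the three-argument replace, and the two-argument remove behave like CAS on a map entry. A minimal sketch of an optimistic, retry-on-failure update built from them (the class name and the "page" counter key are illustrative, not from the source):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ChmAtomicOpsDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> hits = new ConcurrentHashMap<>();

        // putIfAbsent is an atomic check-then-act: no external lock needed.
        hits.putIfAbsent("page", 0);

        // Optimistic update in the spirit of CAS: read the old value, then
        // attempt a compare-and-set via the three-argument replace; if another
        // thread changed the value in between, replace fails and we retry.
        for (;;) {
            Integer old = hits.get("page");
            if (hits.replace("page", old, old + 1))
                break;
        }

        System.out.println(hits.get("page")); // prints 1
    }
}
```

The retry loop never blocks: a losing thread simply observes the fresh value and tries again, mirroring the CAS semantics ("I think V is A; if so, set it to B, otherwise tell me what V really is") described earlier.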


Java's ConcurrentLinkedQueue implementation

ConcurrentLinkedQueue likewise builds on CAS instructions, but under heavy contention its throughput can suffer from the sheer number of CAS retries involved. Its source is as follows:
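Before wading into the source, a short sketch of the queue from the caller's side (the thread and element counts are arbitrary): producers never block on offer(), because each enqueue is a CAS on the tail node's next pointer that simply retries on failure, and poll() claims an element by CASing the head node's item to null.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class ClqDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>();

        // Two producers enqueue concurrently; contention is resolved by
        // CAS retries inside offer(), never by suspending a thread.
        Thread p1 = new Thread(() -> { for (int i = 0; i < 1000; i++) queue.offer(i); });
        Thread p2 = new Thread(() -> { for (int i = 0; i < 1000; i++) queue.offer(i); });
        p1.start(); p2.start();
        p1.join(); p2.join();

        // Drain the queue; poll() likewise uses CAS to claim each item.
        int drained = 0;
        while (queue.poll() != null)
            drained++;

        System.out.println(drained); // prints 2000
    }
}
```

Note that size() is deliberately avoided here: as the javadoc below stresses, it is an O(n) traversal, not a constant-time operation.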

/* * %W% %E% * * Copyright (c) 2006, Oracle and/or its affiliates. All rights reserved. * ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms. */package java.util.concurrent;import java.util.AbstractQueue;import java.util.ArrayList;import java.util.Collection;import java.util.Iterator;import java.util.NoSuchElementException;import java.util.Queue; /** * An unbounded thread-safe {@linkplain Queue queue} based on linked nodes. * This queue orders elements FIFO (first-in-first-out). * The <em>head</em> of the queue is that element that has been on the * queue the longest time. * The <em>tail</em> of the queue is that element that has been on the * queue the shortest time. New elements * are inserted at the tail of the queue, and the queue retrieval * operations obtain elements at the head of the queue. * A {@code ConcurrentLinkedQueue} is an appropriate choice when * many threads will share access to a common collection. * This queue does not permit {@code null} elements. * * <p>This implementation employs an efficient "wait-free" * algorithm based on one described in <a * href="http://www.cs.rochester.edu/u/michael/PODC96.html"> Simple, * Fast, and Practical Non-Blocking and Blocking Concurrent Queue * Algorithms</a> by Maged M. Michael and Michael L. Scott. * * <p>Beware that, unlike in most collections, the {@code size} method * is <em>NOT</em> a constant-time operation. Because of the * asynchronous nature of these queues, determining the current number * of elements requires a traversal of the elements. * * <p>This class and its iterator implement all of the * <em>optional</em> methods of the {@link Collection} and {@link * Iterator} interfaces. 
* * <p>Memory consistency effects: As with other concurrent * collections, actions in a thread prior to placing an object into a * {@code ConcurrentLinkedQueue} * <a href="package-summary.html#MemoryVisibility"><i>happen-before</i></a> * actions subsequent to the access or removal of that element from * the {@code ConcurrentLinkedQueue} in another thread. * * <p>This class is a member of the * <a href="{@docRoot}/../technotes/guides/collections/index.html"> * Java Collections Framework</a>. * * @since 1.5 * @author Doug Lea * @param <E> the type of elements held in this collection * */public class ConcurrentLinkedQueue<E> extends AbstractQueue<E>        implements Queue<E>, java.io.Serializable {    private static final long serialVersionUID = 196745693267521676L;    /*     * This is a modification of the Michael & Scott algorithm,     * adapted for a garbage-collected environment, with support for     * interior node deletion (to support remove(Object)). For     * explanation, read the paper.     *     * Note that like most non-blocking algorithms in this package,     * this implementation relies on the fact that in garbage      * collected systems, there is no possibility of ABA problems due     * to recycled nodes, so there is no need to use "counted     * pointers" or related techniques seen in versions used in     * non-GC'ed settings.     *     * The fundamental invariants are:     * - There is exactly one (last) Node with a null next reference,     * which is CASed when enqueueing. This last Node can be     * reached in O(1) time from tail, but tail is merely an     * optimization - it can always be reached in O(N) time from     * head as well.     * - The elements contained in the queue are the non-null items in     * Nodes that are reachable from head. CASing the item     * reference of a Node to null atomically removes it from the     * queue. 
Reachability of all elements from head must remain     * true even in the case of concurrent modifications that cause     * head to advance. A dequeued Node may remain in use     * indefinitely due to creation of an Iterator or simply a     * poll() that has lost its time slice.     *     * The above might appear to imply that all Nodes are GC-reachable     * from a predecessor dequeued Node. That would cause two problems:     * - allow a rogue Iterator to cause unbounded memory retention     * - cause cross-generational linking of old Nodes to new Nodes if     * a Node was tenured while live, which generational GCs have a     * hard time dealing with, causing repeated major collections.     * However, only non-deleted Nodes need to be reachable from     * dequeued Nodes, and reachability does not necessarily have to     * be of the kind understood by the GC. We use the trick of     * linking a Node that has just been dequeued to itself. Such a     * self-link implicitly means to advance to head.     *     * Both head and tail are permitted to lag. In fact, failing to     * update them every time one could is a significant optimization     * (fewer CASes). This is controlled by local "hops" variables     * that only trigger helping-CASes after experiencing multiple     * lags.     *     * Since head and tail are updated concurrently and independently,     * it is possible for tail to lag behind head (why not)?     *     * CASing a Node's item reference to null atomically removes the     * element from the queue. Iterators skip over Nodes with null     * items. Prior implementations of this class had a race between     * poll() and remove(Object) where the same element would appear     * to be successfully removed by two concurrent operations. The     * method remove(Object) also lazily unlinks deleted Nodes, but     * this is merely an optimization.     
*     * When constructing a Node (before enqueuing it) we avoid paying     * for a volatile write to item by using lazySet instead of a     * normal write. This allows the cost of enqueue to be     * "one-and-a-half" CASes.     *     * Both head and tail may or may not point to a Node with a     * non-null item. If the queue is empty, all items must of course     * be null. Upon creation, both head and tail refer to a dummy     * Node with null item. Both head and tail are only updated using     * CAS, so they never regress, although again this is merely an     * optimization.      */    private static class Node<E> {        private volatile E item;        private volatile Node<E> next;        Node(E item) {            // Piggyback on imminent casNext()            lazySetItem(item);         }         E getItem() {            return item;        }        boolean casItem(E cmp, E val) {            return UNSAFE.compareAndSwapObject(this, itemOffset, cmp, val);        }        void setItem(E val) {             item = val;        }        void lazySetItem(E val) {            UNSAFE.putOrderedObject(this, itemOffset, val);        }        void lazySetNext(Node<E> val) {            UNSAFE.putOrderedObject(this, nextOffset, val);        }         Node<E> getNext() {            return next;        }        boolean casNext(Node<E> cmp, Node<E> val) {            return UNSAFE.compareAndSwapObject(this, nextOffset, cmp, val);        }        // Unsafe mechanics        private static final sun.misc.Unsafe UNSAFE =        sun.misc.Unsafe.getUnsafe();        private static final long nextOffset =        objectFieldOffset(UNSAFE, "next", Node.class);        private static final long itemOffset =        objectFieldOffset(UNSAFE, "item", Node.class);    }    /**    * A node from which the first live (non-deleted) node (if any)    * can be reached in O(1) time.    
```java
 * Invariants:
 * - all live nodes are reachable from head via succ()
 * - head != null
 * - (tmp = head).next != tmp || tmp != head
 * Non-invariants:
 * - head.item may or may not be null.
 * - it is permitted for tail to lag behind head, that is, for tail
 *   to not be reachable from head!
 */
private transient volatile Node<E> head = new Node<E>(null);

/**
 * A node from which the last node on list (that is, the unique
 * node with node.next == null) can be reached in O(1) time.
 * Invariants:
 * - the last node is always reachable from tail via succ()
 * - tail != null
 * Non-invariants:
 * - tail.item may or may not be null.
 * - it is permitted for tail to lag behind head, that is, for tail
 *   to not be reachable from head!
 * - tail.next may or may not be self-pointing to tail.
 */
private transient volatile Node<E> tail = head;

/**
 * Creates a {@code ConcurrentLinkedQueue} that is initially empty.
 */
public ConcurrentLinkedQueue() {}

/**
 * Creates a {@code ConcurrentLinkedQueue}
 * initially containing the elements of the given collection,
 * added in traversal order of the collection's iterator.
 * @param c the collection of elements to initially contain
 * @throws NullPointerException if the specified collection or any
 *         of its elements are null
 */
public ConcurrentLinkedQueue(Collection<? extends E> c) {
    for (Iterator<? extends E> it = c.iterator(); it.hasNext();)
        add(it.next());
}

// Have to override just to update the javadoc
/**
 * Inserts the specified element at the tail of this queue.
 *
 * @return {@code true} (as specified by {@link Collection#add})
 * @throws NullPointerException if the specified element is null
 */
public boolean add(E e) {
    return offer(e);
}

/**
 * We don't bother to update head or tail pointers if fewer than
 * HOPS links from "true" location.  We assume that volatile
 * writes are significantly more expensive than volatile reads.
 */
private static final int HOPS = 1;

/**
 * Try to CAS head to p. If successful, repoint old head to itself
 * as sentinel for succ(), below.
 */
final void updateHead(Node<E> h, Node<E> p) {
    if (h != p && casHead(h, p))
        h.lazySetNext(h);
}

/**
 * Returns the successor of p, or the head node if p.next has been
 * linked to self, which will only be true if traversing with a
 * stale pointer that is now off the list.
 */
final Node<E> succ(Node<E> p) {
    Node<E> next = p.getNext();
    return (p == next) ? head : next;
}

/**
 * Inserts the specified element at the tail of this queue.
 *
 * @return {@code true} (as specified by {@link Queue#offer})
 * @throws NullPointerException if the specified element is null
 */
public boolean offer(E e) {
    if (e == null) throw new NullPointerException();
    Node<E> n = new Node<E>(e);
    retry:
    for (;;) {
        Node<E> t = tail;
        Node<E> p = t;
        for (int hops = 0; ; hops++) {
            Node<E> next = succ(p);
            if (next != null) {
                if (hops > HOPS && t != tail)
                    continue retry;
                p = next;
            } else if (p.casNext(null, n)) {
                if (hops >= HOPS)
                    casTail(t, n); // Failure is OK.
                return true;
            } else {
                p = succ(p);
            }
        }
    }
}

public E poll() {
    Node<E> h = head;
    Node<E> p = h;
    for (int hops = 0; ; hops++) {
        E item = p.getItem();
        if (item != null && p.casItem(item, null)) {
            if (hops >= HOPS) {
                Node<E> q = p.getNext();
                updateHead(h, (q != null) ? q : p);
            }
            return item;
        }
        Node<E> next = succ(p);
        if (next == null) {
            updateHead(h, p);
            break;
        }
        p = next;
    }
    return null;
}

public E peek() {
    Node<E> h = head;
    Node<E> p = h;
    E item;
    for (;;) {
        item = p.getItem();
        if (item != null)
            break;
        Node<E> next = succ(p);
        if (next == null) {
            break;
        }
        p = next;
    }
    updateHead(h, p);
    return item;
}

/**
 * Returns the first live (non-deleted) node on list, or null if none.
 * This is yet another variant of poll/peek; here returning the
 * first node, not element. We could make peek() a wrapper around
 * first(), but that would cost an extra volatile read of item,
 * and the need to add a retry loop to deal with the possibility
 * of losing a race to a concurrent poll().
 */
Node<E> first() {
    Node<E> h = head;
    Node<E> p = h;
    Node<E> result;
    for (;;) {
        E item = p.getItem();
        if (item != null) {
            result = p;
            break;
        }
        Node<E> next = succ(p);
        if (next == null) {
            result = null;
            break;
        }
        p = next;
    }
    updateHead(h, p);
    return result;
}

/**
 * Returns {@code true} if this queue contains no elements.
 *
 * @return {@code true} if this queue contains no elements
 */
public boolean isEmpty() {
    return first() == null;
}

/**
 * Returns the number of elements in this queue.  If this queue
 * contains more than {@code Integer.MAX_VALUE} elements, returns
 * {@code Integer.MAX_VALUE}.
 *
 * <p>Beware that, unlike in most collections, this method is
 * <em>NOT</em> a constant-time operation.  Because of the
 * asynchronous nature of these queues, determining the current
 * number of elements requires an O(n) traversal.
 *
 * @return the number of elements in this queue
 */
public int size() {
    int count = 0;
    for (Node<E> p = first(); p != null; p = succ(p)) {
        if (p.getItem() != null) {
            // Collections.size() spec says to max out
            if (++count == Integer.MAX_VALUE)
                break;
        }
    }
    return count;
}

/**
 * Returns {@code true} if this queue contains the specified element.
 * More formally, returns {@code true} if and only if this queue contains
 * at least one element {@code e} such that {@code o.equals(e)}.
 *
 * @param o object to be checked for containment in this queue
 * @return {@code true} if this queue contains the specified element
 */
public boolean contains(Object o) {
    if (o == null) return false;
    for (Node<E> p = first(); p != null; p = succ(p)) {
        E item = p.getItem();
        if (item != null &&
            o.equals(item))
            return true;
    }
    return false;
}

/**
 * Removes a single instance of the specified element from this queue,
 * if it is present.  More formally, removes an element {@code e} such
 * that {@code o.equals(e)}, if this queue contains one or more such
 * elements.
 * Returns {@code true} if this queue contained the specified element
 * (or equivalently, if this queue changed as a result of the call).
 *
 * @param o element to be removed from this queue, if present
 * @return {@code true} if this queue changed as a result of the call
 */
public boolean remove(Object o) {
    if (o == null) return false;
    Node<E> pred = null;
    for (Node<E> p = first(); p != null; p = succ(p)) {
        E item = p.getItem();
        if (item != null &&
            o.equals(item) &&
            p.casItem(item, null)) {
            Node<E> next = succ(p);
            if (pred != null && next != null)
                pred.casNext(p, next);
            return true;
        }
        pred = p;
    }
    return false;
}

/**
 * Returns an array containing all of the elements in this queue, in
 * proper sequence.
 *
 * <p>The returned array will be "safe" in that no references to it are
 * maintained by this queue.  (In other words, this method must allocate
 * a new array).  The caller is thus free to modify the returned array.
 *
 * <p>This method acts as bridge between array-based and collection-based
 * APIs.
 *
 * @return an array containing all of the elements in this queue
 */
public Object[] toArray() {
    // Use ArrayList to deal with resizing.
    ArrayList<E> al = new ArrayList<E>();
    for (Node<E> p = first(); p != null; p = succ(p)) {
        E item = p.getItem();
        if (item != null)
            al.add(item);
    }
    return al.toArray();
}

/**
 * Returns an array containing all of the elements in this queue, in
 * proper sequence; the runtime type of the returned array is that of
 * the specified array.  If the queue fits in the specified array, it
 * is returned therein.  Otherwise, a new array is allocated with the
 * runtime type of the specified array and the size of this queue.
 *
 * <p>If this queue fits in the specified array with room to spare
 * (i.e., the array has more elements than this queue), the element in
 * the array immediately following the end of the queue is set to
 * {@code null}.
 *
 * <p>Like the {@link #toArray()} method, this method acts as bridge between
 * array-based and collection-based APIs.  Further, this method allows
 * precise control over the runtime type of the output array, and may,
 * under certain circumstances, be used to save allocation costs.
 *
 * <p>Suppose {@code x} is a queue known to contain only strings.
 * The following code can be used to dump the queue into a newly
 * allocated array of {@code String}:
 *
 * <pre>
 *     String[] y = x.toArray(new String[0]);</pre>
 *
 * Note that {@code toArray(new Object[0])} is identical in function to
 * {@code toArray()}.
 *
 * @param a the array into which the elements of the queue are to
 *          be stored, if it is big enough; otherwise, a new array of the
 *          same runtime type is allocated for this purpose
 * @return an array containing all of the elements in this queue
 * @throws ArrayStoreException if the runtime type of the specified array
 *         is not a supertype of the runtime type of every element in
 *         this queue
 * @throws NullPointerException if the specified array is null
 */
@SuppressWarnings("unchecked")
public <T> T[] toArray(T[] a) {
    // try to use sent-in array
    int k = 0;
    Node<E> p;
    for (p = first(); p != null && k < a.length; p = succ(p)) {
        E item = p.getItem();
        if (item != null)
            a[k++] = (T)item;
    }
    if (p == null) {
        if (k < a.length)
            a[k] = null;
        return a;
    }
    // If won't fit, use ArrayList version
    ArrayList<E> al = new ArrayList<E>();
    for (Node<E> q = first(); q != null; q = succ(q)) {
        E item = q.getItem();
        if (item != null)
            al.add(item);
    }
    return al.toArray(a);
}

/**
 * Returns an iterator over the elements in this queue in proper sequence.
 * The returned iterator is a "weakly consistent" iterator that
 * will never throw {@link java.util.ConcurrentModificationException
 * ConcurrentModificationException},
 * and guarantees to traverse elements as they existed upon
 * construction of the iterator, and may (but is not guaranteed to)
 * reflect any modifications subsequent to construction.
 *
 * @return an iterator over the elements in this queue in proper sequence
 */
public Iterator<E> iterator() {
    return new Itr();
}

private class Itr implements Iterator<E> {
    /**
     * Next node to return item for.
     */
    private Node<E> nextNode;

    /**
     * nextItem holds on to item fields because once we claim
     * that an element exists in hasNext(), we must return it in
     * the following next() call even if it was in the process of
     * being removed when hasNext() was called.
     */
    private E nextItem;

    /**
     * Node of the last returned item, to support remove.
     */
    private Node<E> lastRet;

    Itr() {
        advance();
    }

    /**
     * Moves to next valid node and returns item to return for
     * next(), or null if no such.
     */
    private E advance() {
        lastRet = nextNode;
        E x = nextItem;

        Node<E> pred, p;
        if (nextNode == null) {
            p = first();
            pred = null;
        } else {
            pred = nextNode;
            p = succ(nextNode);
        }

        for (;;) {
            if (p == null) {
                nextNode = null;
                nextItem = null;
                return x;
            }
            E item = p.getItem();
            if (item != null) {
                nextNode = p;
                nextItem = item;
                return x;
            } else {
                // skip over nulls
                Node<E> next = succ(p);
                if (pred != null && next != null)
                    pred.casNext(p, next);
                p = next;
            }
        }
    }

    public boolean hasNext() {
        return nextNode != null;
    }

    public E next() {
        if (nextNode == null) throw new NoSuchElementException();
        return advance();
    }

    public void remove() {
        Node<E> l = lastRet;
        if (l == null) throw new IllegalStateException();
        // rely on a future traversal to relink.
        l.setItem(null);
        lastRet = null;
    }
}

/**
 * Save the state to a stream (that is, serialize it).
 *
 * @serialData All of the elements (each an {@code E}) in
 * the proper order, followed by a null
 * @param s the stream
 */
private void writeObject(java.io.ObjectOutputStream s)
    throws java.io.IOException {

    // Write out any hidden stuff
    s.defaultWriteObject();

    // Write out all elements in the proper order.
    for (Node<E> p = first(); p != null; p = succ(p)) {
        Object item = p.getItem();
        if (item != null)
            s.writeObject(item);
    }

    // Use trailing null as sentinel
    s.writeObject(null);
}

/**
 * Reconstitute the Queue instance from a stream (that is,
 * deserialize it).
 * @param s the stream
 */
private void readObject(java.io.ObjectInputStream s)
    throws java.io.IOException, ClassNotFoundException {
    // Read in capacity, and any hidden stuff
    s.defaultReadObject();
    head = new Node<E>(null);
    tail = head;
    // Read in all elements and place in queue
    for (;;) {
        @SuppressWarnings("unchecked")
        E item = (E)s.readObject();
        if (item == null)
            break;
        else
            offer(item);
    }
}

// Unsafe mechanics

private static final sun.misc.Unsafe UNSAFE = sun.misc.Unsafe.getUnsafe();
private static final long headOffset =
    objectFieldOffset(UNSAFE, "head", ConcurrentLinkedQueue.class);
private static final long tailOffset =
    objectFieldOffset(UNSAFE, "tail", ConcurrentLinkedQueue.class);

private boolean casTail(Node<E> cmp, Node<E> val) {
    return UNSAFE.compareAndSwapObject(this, tailOffset, cmp, val);
}

private boolean casHead(Node<E> cmp, Node<E> val) {
    return UNSAFE.compareAndSwapObject(this, headOffset, cmp, val);
}

private void lazySetHead(Node<E> val) {
    UNSAFE.putOrderedObject(this, headOffset, val);
}

static long objectFieldOffset(sun.misc.Unsafe UNSAFE,
                              String field, Class<?> klazz) {
    try {
        return UNSAFE.objectFieldOffset(klazz.getDeclaredField(field));
    } catch (NoSuchFieldException e) {
        // Convert Exception to corresponding Error
        NoSuchFieldError error = new NoSuchFieldError(field);
        error.initCause(e);
        throw error;
    }
}
}
```
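A quick sanity check of the queue source above: since both `offer()` and `poll()` retry with CAS instead of locking, multiple producers can enqueue concurrently and nothing ever blocks. The class name `ClqDemo` below is just for illustration.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class ClqDemo {
    public static void main(String[] args) throws InterruptedException {
        final ConcurrentLinkedQueue<Integer> q = new ConcurrentLinkedQueue<Integer>();

        // Two producer threads offer concurrently; offer() never blocks,
        // it just loops on casNext() until its CAS wins.
        Thread p1 = new Thread(new Runnable() {
            public void run() { for (int i = 0; i < 1000; i++) q.offer(i); }
        });
        Thread p2 = new Thread(new Runnable() {
            public void run() { for (int i = 1000; i < 2000; i++) q.offer(i); }
        });
        p1.start(); p2.start();
        p1.join();  p2.join();

        // Drain the queue; poll() is likewise non-blocking and returns
        // null once the queue is empty.
        int count = 0;
        while (q.poll() != null) count++;
        System.out.println(count);  // prints 2000
    }
}
```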


Design approaches for optimized locking and lock-free structures under high concurrency

The three big performance killers in server-side programming are: 1) context-switch overhead from large numbers of threads; 2) locks; 3) unnecessary memory copies. Under high concurrency, for pure in-memory workloads a single thread can actually be faster than multiple threads; compare the sy and ni CPU percentages of a multithreaded program under load testing. To achieve high throughput with thread safety under high concurrency there are two approaches: an optimized lock implementation, or a lock-free data structure. Non-blocking algorithms, however, are far more complex than lock-based ones. Developing them takes specialized expertise, and proving an algorithm correct is extremely difficult: the result depends on the target machine and the compiler, and demands intricate techniques and rigorous testing. Still, lock-free programming usually delivers higher throughput than lock-based programming, and it also behaves well with respect to thread termination, priority inversion, and signal safety, which makes it a promising technique.

  • An example of an optimized lock implementation: Java's ConcurrentHashMap. Its clever design uses bucket-granularity locks and lock striping to avoid locking the whole map in put and get; get in particular locks only a single HashEntry, an obvious performance win (see 《探索 ConcurrentHashMap 高并发性的实现机制》 for a detailed analysis).
  • Lock-free examples: uses of CAS (the CPU's Compare-And-Swap instruction) and data structures such as LMAX's Disruptor lock-free message queue. Those interested in the Disruptor can find more material on slideshare.
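The CAS retry pattern behind both examples above can be sketched in a few lines: read the current value, compute the new one, and retry if another thread got there first. Losing threads are never suspended, only told to try again. The class name `LockFreeCounter` is hypothetical, for illustration only.

```java
import java.util.concurrent.atomic.AtomicLong;

// A lock-free counter built on the CAS retry loop. In CAS terms:
// `current` is the expected value A, `next` is the new value B, and
// the AtomicLong holds the memory value V.
public class LockFreeCounter {
    private final AtomicLong value = new AtomicLong(0);

    public long increment() {
        for (;;) {
            long current = value.get();   // the value we believe is current (A)
            long next = current + 1;      // the value we want to install (B)
            // Succeeds only if V is still `current`; otherwise another
            // thread won the race and we simply retry.
            if (value.compareAndSet(current, next))
                return next;
        }
    }

    public long get() {
        return value.get();
    }
}
```

Note that no thread is ever blocked or descheduled here; under contention, threads spin briefly and retry, which is exactly the optimistic-lock behavior described earlier.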

Class diagram and technical documentation of the Disruptor lock-free message queue (download)


Beyond minimizing resource contention, another design idea is to borrow the single-threaded big-loop model of nginx/node.js: use a single thread (or as many threads as CPU cores), each with a large queue; under concurrency, tasks are pushed into the queue and the threads poll it, executing tasks one by one in order. With asynchronous I/O throughout, no thread ever blocks. The big queue can be RabbitMQ, a JDK synchronized queue (somewhat slower), or the Disruptor lock-free queue (Java). Task processing can then run entirely in memory (multi-level caches, read/write separation, ConcurrentHashMap, or even a distributed cache such as Redis) for all CRUD operations, with Quartz periodically syncing the cached data back to the DB. Of course, this is only an approach for small to mid-sized systems; a large distributed system is far more complex and has to be decomposed and governed along SOA lines (refer to the figure in the referenced article). (Note: Redis is a single-threaded, pure in-memory database, and being single-threaded it needs no locks, whereas Memcached is multithreaded and uses a CAS algorithm; both use epoll and non-blocking I/O.)
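The "big queue + single worker" shape described above can be sketched with a JDK blocking queue (the RabbitMQ and Disruptor variants have the same structure, just faster queues). The class name `SingleWorkerLoop` and the queue capacity are illustrative assumptions.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Concurrent callers push tasks into one large queue; a single worker
// thread drains it in FIFO order, so the task handlers themselves run
// serially and need no locks at all.
public class SingleWorkerLoop {
    private final BlockingQueue<Runnable> tasks =
        new ArrayBlockingQueue<Runnable>(1024);

    private final Thread worker = new Thread(new Runnable() {
        public void run() {
            try {
                for (;;) {
                    tasks.take().run();  // one task at a time, in order
                }
            } catch (InterruptedException e) {
                // interrupt signals shutdown of the loop
            }
        }
    });

    public void start() { worker.start(); }
    public void stop()  { worker.interrupt(); }

    public void submit(Runnable r) throws InterruptedException {
        tasks.put(r);  // producers block only if the queue is full
    }
}
```

Because all mutation happens on the single worker thread, shared state touched only inside tasks needs no synchronization; this is the same reasoning behind Redis being lock-free by virtue of being single-threaded.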


Lock-free, non-blocking algorithms deep in the JVM and the OS

Dig into the JVM and the operating system and you will find non-blocking algorithms everywhere. Garbage collectors use them to speed up concurrent and parallel collection; schedulers use them to schedule threads and processes efficiently and to implement intrinsic locks. In Mustang (Java 6.0), the lock-based SynchronousQueue algorithm was replaced with a new non-blocking version. Few developers use SynchronousQueue directly, but it is the work queue of thread pools built with Executors.newCachedThreadPool(). Benchmarks of cached thread pool performance showed the new non-blocking synchronous queue to be almost three times faster than the previous implementation, with further improvements planned for Mustang's successor (code-named Dolphin).
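The cached-thread-pool connection mentioned above is easy to see in code: `Executors.newCachedThreadPool()` hands each task off through a `SynchronousQueue`, so every submission either meets a waiting worker or spawns a new one. A minimal sketch (class name and task counts are arbitrary):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CachedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // newCachedThreadPool uses a SynchronousQueue<Runnable> internally:
        // tasks are handed directly to a waiting thread (or a new thread is
        // created); nothing is ever buffered in the queue itself.
        ExecutorService pool = Executors.newCachedThreadPool();
        final AtomicInteger done = new AtomicInteger();

        for (int i = 0; i < 8; i++) {
            pool.execute(new Runnable() {
                public void run() { done.incrementAndGet(); }
            });
        }

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(done.get());  // prints 8
    }
}
```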

References

IBM developerWorks: Java theory and practice: Going atomic
