Notes on "Java Generics and Collections": Queue II

14.2. Implementing Queue

14.2.1. PriorityQueue
Your application will dictate which alternative to choose: if it needs to examine and manipulate the set of waiting tasks, use NavigableSet. If its main requirement is efficient access to the next task to be performed, use PriorityQueue.
Choosing PriorityQueue allows us to reconsider the ordering: since it accommodates duplicates, it does not share the requirement of NavigableSet for an ordering consistent with equals. To emphasize the point, we will define a new ordering for our to-do manager that depends only on priorities. Contrary to what you might expect, PriorityQueue gives no guarantee of how it presents multiple elements with the same value. So if, in our example, several tasks are tied for the highest priority in the queue, it will choose one of them arbitrarily as the head element.
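A minimal sketch of such a priority-only ordering (the Task record and its field names are hypothetical, not the book's example): the comparator looks only at the priority, so duplicates are accepted and ties are broken arbitrarily.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class PriorityOrdering {
    // Hypothetical task type for illustration: ordered by priority alone.
    record Task(String name, int priority) {}

    public static void main(String[] args) {
        PriorityQueue<Task> queue =
                new PriorityQueue<>(Comparator.comparingInt(Task::priority));
        queue.offer(new Task("write report", 2));
        queue.offer(new Task("fix bug", 1));
        queue.offer(new Task("file expenses", 1));  // duplicate priority is allowed

        // The head is one of the two priority-1 tasks, chosen arbitrarily;
        // only the priorities come out in a guaranteed order.
        System.out.println(queue.poll().priority());  // prints 1
        System.out.println(queue.poll().priority());  // prints 1
        System.out.println(queue.poll().priority());  // prints 2
    }
}
```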

Priority queues are usually efficiently implemented by priority heaps. A priority heap is a binary tree somewhat like those we saw implementing TreeSet in Section 13.2.1, but with two differences: first, the only ordering constraint is that each node in the tree should be larger than either of its children, and second, that the tree should be complete at every level except possibly the lowest; if the lowest level is incomplete, the nodes it contains must be grouped together at the left. Figure 14.3(a) shows a small priority heap, with each node shown only by the field containing its priority. To add a new element to a priority heap, it is first attached at the leftmost vacant position, as shown by the circled node in Figure 14.3(b). Then it is repeatedly exchanged with its parent until it reaches a parent that has higher priority.
Apart from constant overheads, both addition and removal of elements require a number of operations proportional to the height of the tree. So PriorityQueue provides O(log n) time for offer, poll, remove(), and add. The methods remove(Object) and contains may require the entire tree to be traversed, so they require O(n) time (note: the heap is stored in a backing array). The methods peek and element, which just retrieve the root of the tree without removing it, take constant time, as does size, which uses an object field that is continually updated.
PriorityQueue is not suitable for concurrent use. Its iterators are fail-fast, and it doesn't offer support for client-side locking. A thread-safe version, PriorityBlockingQueue (see Section 14.3.2), is provided instead.

Question 1: Could a client achieve thread safety by wrapping PriorityQueue itself?
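One answer to this question, as a minimal sketch (the class name SyncPriorityQueue is hypothetical): a client can wrap every operation behind a single lock, making each individual operation atomic. Iteration remains unsafe unless the client holds the same lock for the whole traversal, and PriorityBlockingQueue is normally the better choice.

```java
import java.util.PriorityQueue;
import java.util.Queue;

// Hypothetical client-side wrapper: one monitor guards every access.
public class SyncPriorityQueue<E> {
    private final Queue<E> queue = new PriorityQueue<>();

    public synchronized boolean offer(E e) { return queue.offer(e); }
    public synchronized E poll()           { return queue.poll(); }
    public synchronized E peek()           { return queue.peek(); }
    public synchronized int size()         { return queue.size(); }

    public static void main(String[] args) {
        SyncPriorityQueue<Integer> q = new SyncPriorityQueue<>();
        q.offer(3);
        q.offer(1);
        System.out.println(q.poll());  // prints 1: smallest element first
    }
}
```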

14.2.2. ConcurrentLinkedQueue


The other nonblocking Queue implementation is ConcurrentLinkedQueue, an unbounded, thread-safe, FIFO-ordered queue. It uses a linked structure, similar to those we saw in Section 13.2.2 as the basis for skip lists, and in Section 13.1.1 for hash table overflow chaining. We noticed there that one of the main attractions of linked structures is that the insertion and removal operations implemented by pointer rearrangements perform in constant time. This makes them especially useful as queue implementations, where these operations are always required on cells at the ends of the structure, that is, cells that do not need to be located using the slow sequential search of linked structures.
ConcurrentLinkedQueue uses a CAS-based wait-free algorithm, that is, one that guarantees that any thread can always complete its current operation, regardless of the state of other threads accessing the queue. It executes queue insertion and removal operations in constant time, but requires linear time to execute size. This is because the algorithm, which relies on co-operation between threads for insertion and removal, does not keep track of the queue size and has to iterate over the queue to calculate it when it is required.
ConcurrentLinkedQueue has the two standard constructors discussed in Section 12.3. Its iterators are weakly consistent.
Question 2: How is it implemented so that different threads can complete operations at the same time?
Question 3: Does the size operation take a lock?
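A small sketch touching on both questions: two producers offer concurrently without blocking each other (each offer retries a compare-and-set on the tail pointer rather than locking), and size() takes no lock either; it simply walks the linked cells, which is why it is O(n) and only a moving snapshot.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class LinkedQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>();

        // Two producer threads offer concurrently; CAS retries replace locking.
        Thread a = new Thread(() -> { for (int i = 0; i < 1000; i++) queue.offer(i); });
        Thread b = new Thread(() -> { for (int i = 0; i < 1000; i++) queue.offer(i); });
        a.start(); b.start();
        a.join(); b.join();

        // size() traverses the whole queue: O(n), lock-free, and only a
        // snapshot if other threads are still updating the queue.
        System.out.println(queue.size());  // prints 2000
    }
}
```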

14.3. BlockingQueue

Java 5 added a number of classes to the Collections Framework for use in concurrent applications. Most of these are implementations of the Queue subinterface BlockingQueue (see Figure 14.5), designed primarily to be used in producer-consumer queues.
So, for example, a print server does not need to constantly poll the queue to discover whether any print jobs are waiting; it need only call the poll method, supplying a timeout, and the system will suspend it until either a queue element becomes available or the timeout expires.
Note: this differs from the observer pattern. Here the waiting thread is suspended, whereas the observer pattern does not suspend observers and is usually handled asynchronously.
The nonblocking overload of offer defined in Queue will return false if it cannot immediately insert the element. This new overload waits for a time specified using java.util.concurrent.TimeUnit, an Enum which allows timeouts to be defined in units such as milliseconds or seconds.

Taking these methods together with those inherited from Queue, there are four ways in which the methods for adding elements to a BlockingQueue can behave: offer returns false if it does not succeed immediately, blocking offer returns false if it does not succeed within its timeout, add throws an exception if it does not succeed immediately, and put blocks until it succeeds.

Again taking these methods together with those inherited from Queue, there are four ways in which the methods for removing elements from a BlockingQueue can behave: poll returns null if it does not succeed immediately, blocking poll returns null if it does not succeed within its timeout, remove throws an exception if it does not succeed immediately, and take blocks until it succeeds.
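The four insertion behaviors and four removal behaviors above can be seen in one short run against a full, then emptied, ArrayBlockingQueue of capacity 1 (a sketch; only put/take's blocking cases are not exercised here, since they would simply wait):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class AddRemoveBehaviors {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> q = new ArrayBlockingQueue<>(1);
        q.put("first");                                   // would block if full

        System.out.println(q.offer("x"));                 // false: queue is full
        System.out.println(q.offer("x", 50, TimeUnit.MILLISECONDS)); // false after timeout
        try {
            q.add("x");                                   // throws: no space
        } catch (IllegalStateException e) {
            System.out.println("add threw " + e.getMessage());
        }

        System.out.println(q.take());                     // prints first
        System.out.println(q.poll());                     // null: queue empty
        System.out.println(q.poll(50, TimeUnit.MILLISECONDS)); // null after timeout
    }
}
```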
BlockingQueue guarantees that the queue operations of its implementations will be thread-safe and atomic. But this guarantee doesn't extend to the bulk operations inherited from Collection (addAll, containsAll, retainAll, and removeAll) unless the individual implementation provides it. So it is possible, for example, for addAll to fail, throwing an exception, after adding only some of the elements in a collection.

14.3.2. Implementing BlockingQueue

14.3.2.1. LinkedBlockingQueue

This class is a thread-safe, FIFO-ordered queue, based on a linked node structure. It is the implementation of choice whenever you need an unbounded blocking queue. Even for bounded use, it may still be better than ArrayBlockingQueue (linked queues typically have higher throughput than array-based queues but less predictable performance in most concurrent applications).

14.3.2.2. ArrayBlockingQueue

This implementation is based on a circular array: a linear structure in which the first and last elements are logically adjacent. Figure 14.6(a) shows the idea. The position labeled "head" indicates the head of the queue; each time the head element is removed from the queue, the head index is advanced. Similarly, each new element is added at the tail position, resulting in that index being advanced. When either index needs to be advanced past the last element of the array, it gets the value 0. If the two indices have the same value, the queue is either full or empty, so an implementation must separately keep track of the count of elements in the queue.
A circular array in which the head and tail can be continuously advanced in this way is better as a queue implementation than a noncircular one (e.g., the standard implementation of ArrayList, which we cover in Section 15.2), in which removing the head element requires changing the position of all the remaining elements so that the new head is at position 0. Notice, though, that only the elements at the ends of the queue can be inserted and removed in constant time. If an element is to be removed from near the middle, which can be done for queues via the method Iterator.remove, then all the elements from one end must be moved along to maintain a compact representation. Figure 14.6(b) shows the element at index 6 being removed from the queue. As a result, insertion and removal of elements in the middle of the queue has time complexity O(n).
Note: this passage shows why ArrayBlockingQueue is more efficient than ArrayList when used as a queue.
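The circular-array bookkeeping described above can be sketched in a few lines (a hypothetical toy class, not the real ArrayBlockingQueue internals): head and tail wrap back to 0 past the end of the array, and a separate count distinguishes "full" from "empty" when the two indices coincide.

```java
public class CircularBuffer {
    private final Object[] items;
    private int head, tail, count;

    CircularBuffer(int capacity) { items = new Object[capacity]; }

    boolean offer(Object e) {
        if (count == items.length) return false;   // full
        items[tail] = e;
        tail = (tail + 1) % items.length;          // wrap past the last slot
        count++;
        return true;
    }

    Object poll() {
        if (count == 0) return null;               // empty
        Object e = items[head];
        items[head] = null;
        head = (head + 1) % items.length;          // advance (and wrap) the head
        count--;
        return e;
    }

    public static void main(String[] args) {
        CircularBuffer b = new CircularBuffer(2);
        b.offer("a"); b.offer("b");
        System.out.println(b.offer("c"));  // false: full, head == tail, count = 2
        System.out.println(b.poll());      // prints a
        b.offer("c");                      // tail has wrapped to index 0
        System.out.println(b.poll());      // prints b
        System.out.println(b.poll());      // prints c
    }
}
```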
ArrayBlockingQueue(int capacity)
ArrayBlockingQueue(int capacity, boolean fair)
ArrayBlockingQueue(int capacity, boolean fair, Collection<? extends E> c)

The ordering imposed by ArrayBlockingQueue is FIFO. Queue insertion and removal are executed in constant time; operations such as contains, which require traversal of the array, require linear time. The iterators are weakly consistent.

14.3.2.3. PriorityBlockingQueue

This implementation is a thread-safe, blocking version of PriorityQueue (see Section 14.2), with similar ordering and performance characteristics. Its iterators are fail-fast, so in normal use they will throw ConcurrentModificationException; only if the queue is quiescent will they succeed. To iterate safely over a PriorityBlockingQueue, transfer the elements to an array and iterate over that instead.
Note: the sentence above seems counterintuitive. We said PriorityBlockingQueue is thread-safe, yet here it may throw ConcurrentModificationException. There is actually nothing odd about this: traversal is simply not implemented to be thread-safe.

Question 5: Does the queue take a lock while transferring elements to the array? If the elements change after the transfer, is iterating over the copy still meaningful? In what scenarios is iteration typically needed?
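The transfer-to-array idiom looks like this (a sketch; note the copy has heap order, not sorted order, and later concurrent updates no longer affect it, which is exactly why iterating over the snapshot is safe):

```java
import java.util.Arrays;
import java.util.concurrent.PriorityBlockingQueue;

public class SafeIteration {
    public static void main(String[] args) {
        PriorityBlockingQueue<Integer> queue =
                new PriorityBlockingQueue<>(Arrays.asList(3, 1, 2));

        // toArray copies the backing array into a fresh one; iterating over
        // the copy cannot throw ConcurrentModificationException.
        Object[] snapshot = queue.toArray();
        System.out.println(snapshot.length);  // prints 3
    }
}
```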

14.3.2.4. DelayQueue

This is a specialized priority queue, in which the ordering is based on the delay time for each element: the time remaining before the element will be ready to be taken from the queue. If all elements have a positive delay time, that is, none of their associated delay times has expired, an attempt to poll the queue will return null (although peek will still allow you to see the first unexpired element). If one or more elements has an expired delay time, the one with the longest-expired delay time will be at the head of the queue. The elements of a DelayQueue belong to a class that implements java.util.concurrent.Delayed.
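A minimal Delayed implementation might look like this (the DelayedTask class and its deadline bookkeeping are a sketch): getDelay reports the remaining time, and compareTo must order elements consistently with it.

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayQueueDemo {
    // Hypothetical element type: ready once its deadline has passed.
    static class DelayedTask implements Delayed {
        final String name;
        final long deadline;  // absolute time in nanoseconds

        DelayedTask(String name, long delayMillis) {
            this.name = name;
            this.deadline = System.nanoTime()
                    + TimeUnit.MILLISECONDS.toNanos(delayMillis);
        }

        @Override public long getDelay(TimeUnit unit) {
            return unit.convert(deadline - System.nanoTime(), TimeUnit.NANOSECONDS);
        }

        @Override public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.NANOSECONDS),
                                other.getDelay(TimeUnit.NANOSECONDS));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<DelayedTask> queue = new DelayQueue<>();
        queue.offer(new DelayedTask("later", 200));
        queue.offer(new DelayedTask("sooner", 50));

        System.out.println(queue.poll());        // prints null: nothing expired yet
        System.out.println(queue.take().name);   // blocks ~50 ms, prints sooner
        System.out.println(queue.take().name);   // blocks until ~200 ms, prints later
    }
}
```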

DelayQueue shares the performance characteristics of the PriorityQueue on which it is based and, like it, has fail-fast iterators. The comments on PriorityBlockingQueue iterators apply to these too.
Question: Does this mean DelayQueue is not thread-safe?

14.3.2.5. SynchronousQueue


At first sight, you might think there is little point to a queue with no internal capacity, which is a short description of SynchronousQueue. But, in fact, it can be very useful; a thread that wants to add an element to a SynchronousQueue must wait until another thread is ready to simultaneously take it off, and the same is true in reverse for a thread that wants to take an element off the queue. So SynchronousQueue has the function that its name suggests, that of a rendezvous: a mechanism for synchronizing two threads. (Don't confuse the concept of synchronizing threads in this way, allowing them to cooperate by exchanging data, with Java's keyword synchronized, which prevents simultaneous execution of code by different threads.)
A common application for SynchronousQueue is in work-sharing systems where the design ensures that there are enough consumer threads to ensure that producer threads can hand tasks over without having to wait. In this situation, it allows safe transfer of task data between threads without incurring the BlockingQueue overhead of enqueuing, then dequeuing, each task being transferred.
As far as the Collection methods are concerned, a SynchronousQueue behaves like an empty Collection; Queue and BlockingQueue methods behave as you would expect for a queue with zero capacity, which is therefore always empty. The iterator method returns an empty iterator, in which hasNext always returns false.
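The rendezvous behavior can be sketched with two threads: put blocks inside the producer until the main thread is ready to take, and the hand-off happens only when both sides meet.

```java
import java.util.concurrent.SynchronousQueue;

public class RendezvousDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> channel = new SynchronousQueue<>();

        // The producer blocks inside put() until the main thread calls take().
        Thread producer = new Thread(() -> {
            try {
                channel.put("task data");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        System.out.println(channel.take());   // prints task data
        System.out.println(channel.size());   // prints 0: no internal capacity
        producer.join();
    }
}
```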

14.4. Deque
A Deque is always a FIFO structure; the contract does not allow for, say, priority deques. If elements are removed from the same end (either head or tail) at which they were added, a Deque acts as a stack or LIFO (Last In, First Out) structure.
Deque and its subinterface BlockingDeque were introduced in Java 6. The fast Deque implementation ArrayDeque uses a circular array (see Section 14.3.2),and is now the implementation of choice for stacks and queues. Concurrent deques have a special role to play in parallelization, discussed in Section 14.4.2.

14.4.1. Implementing Deque

14.4.1.1. ArrayDeque
Along with the interface Deque, Java 6 also introduced a very efficient implementation, ArrayDeque, based on a circular array like that of ArrayBlockingQueue (see Section 14.3.2). It fills a gap among Queue classes; previously, if you wanted a FIFO queue to use in a single-threaded environment, you would have had to use the class LinkedList (which we cover next, but which should be avoided as a general-purpose Queue implementation), or else pay an unnecessary overhead for thread safety with one of the concurrent classes ArrayBlockingQueue or LinkedBlockingQueue. ArrayDeque is now the general-purpose implementation of choice, for both deques and FIFO queues. It has the performance characteristics of a circular array: adding or removing elements at the head or tail takes constant time. The iterators are fail-fast.
Note: before ArrayDeque existed, a FIFO queue was generally implemented with LinkedList, or by paying the unnecessary cost of thread safety with ArrayBlockingQueue or LinkedBlockingQueue; none of these is as efficient as ArrayDeque.
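Both roles of ArrayDeque can be shown in a few lines (a sketch): as a FIFO queue, elements are added at the tail and removed from the head; as a LIFO stack, push and pop both operate on the head.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DequeDemo {
    public static void main(String[] args) {
        // As a FIFO queue: add at the tail, remove from the head.
        Deque<String> queue = new ArrayDeque<>();
        queue.offerLast("a");
        queue.offerLast("b");
        System.out.println(queue.pollFirst());  // prints a

        // As a LIFO stack: push and pop both work on the head.
        Deque<String> stack = new ArrayDeque<>();
        stack.push("a");
        stack.push("b");
        System.out.println(stack.pop());        // prints b
    }
}
```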

14.4.1.2. LinkedList

Among Deque implementations LinkedList is an oddity; for example, it is alone in permitting null elements, which are discouraged by the Queue interface because of the common use of null as a special value. It has been in the Collections Framework from the start, originally as one of the standard implementations of List (see Section 15.2), and was retrofitted with the methods of Queue for Java 5, and those of Deque for Java 6. It is based on a linked list structure similar to those we saw in Section 13.2.2 as the basis for skip lists, but with an extra field in each cell, pointing to the previous entry (see Figure 14.8). These pointers allow the list to be traversed backwards, for example, for reverse iteration, or to remove an element from the end of the list.
Now, the only likely reason for using LinkedList as a queue or deque implementation would be that you also needed random access to the elements. With LinkedList, even that comes at a high price; because random access has to be implemented by linear search, it has time complexity of O(n).
The constructors for LinkedList are just the standard ones of Section 12.3. Its iterators are fail-fast.

14.4.2. BlockingDeque
Good load balancing algorithms will be increasingly important as multicore and multiprocessor architectures become standard. Concurrent deques are the basis of one of the best load balancing methods, work stealing. To understand work stealing, imagine a load-balancing algorithm that distributes tasks in some way (round-robin, say) to a series of queues, each of which has a dedicated consumer thread that repeatedly takes a task from the head of its queue, processes it, and returns for another. Although this scheme does provide speedup through parallelism, it has a major drawback: we can imagine two adjacent queues, one with a backlog of long tasks and a consumer thread struggling to keep up with them, and next to it an empty queue with an idle consumer waiting for work. It would clearly improve throughput if we allowed the idle thread to take a task from the head of another queue. Work stealing improves still further on this idea; observing that having the idle thread steal work from the head of another queue risks contention for the head element, it replaces the queues with deques and instructs idle threads to take a task from the tail of another thread's deque. This turns out to be a highly efficient mechanism, and is becoming widely used.
Two key points: first, an idle thread may take tasks from another thread's queue; second, it takes them from the tail of the other deque rather than the head.
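The stealing policy can be sketched as follows (a deliberately single-threaded toy, not a full work-stealing scheduler; the deque names are hypothetical): a worker serves the head of its own deque, and only when that is empty does it steal from the tail of a neighbour's deque, staying away from the head the owner is busy with.

```java
import java.util.concurrent.LinkedBlockingDeque;

public class WorkStealingSketch {
    public static void main(String[] args) {
        LinkedBlockingDeque<String> mine = new LinkedBlockingDeque<>();
        LinkedBlockingDeque<String> neighbour = new LinkedBlockingDeque<>();
        neighbour.addLast("t1");
        neighbour.addLast("t2");

        // A worker takes its own tasks from the head of its own deque...
        String task = mine.pollFirst();
        if (task == null) {
            // ...but when idle, steals from the *tail* of another deque,
            // avoiding contention on the head element.
            task = neighbour.pollLast();
        }
        System.out.println(task);  // prints t2: the stolen tail task
    }
}
```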

14.4.2.1. Implementing BlockingDeque

The interface BlockingDeque has a single implementation, LinkedBlockingDeque. LinkedBlockingDeque is based on a doubly linked list structure like that of LinkedList. It can optionally be bounded, so, besides the two standard constructors, it provides a third which can be used to specify its capacity:
LinkedBlockingDeque(int capacity)
It has performance characteristics similar to LinkedBlockingQueue: queue insertion and removal take constant time, and operations such as contains, which require traversal of the queue, require linear time. The iterators are weakly consistent.

14.5. Comparing Queue Implementations




                        offer      peek   poll       size
PriorityQueue           O(log n)   O(1)   O(log n)   O(1)
ConcurrentLinkedQueue   O(1)       O(1)   O(1)       O(n)
ArrayBlockingQueue      O(1)       O(1)   O(1)       O(1)
LinkedBlockingQueue     O(1)       O(1)   O(1)       O(1)
PriorityBlockingQueue   O(log n)   O(1)   O(log n)   O(1)
DelayQueue              O(log n)   O(1)   O(log n)   O(1)
LinkedList              O(1)       O(1)   O(1)       O(1)
ArrayDeque              O(1)       O(1)   O(1)       O(1)
LinkedBlockingDeque     O(1)       O(1)   O(1)       O(1)

Note: only ConcurrentLinkedQueue's size() is O(n); all the others are O(1) because they maintain a count field that only needs to be kept up to date. For ConcurrentLinkedQueue, because of concurrent updates, the value reported by size is in any case likely to be inaccurate.
In choosing a Queue, the first question to ask is whether the implementation you choose needs to support concurrent access; if not, your choice is straightforward. For FIFO ordering, choose ArrayDeque; for priority ordering, PriorityQueue.

If your application does demand thread safety, you next need to consider ordering. If you need priority or delay ordering, the choice obviously must be PriorityBlockingQueue or DelayQueue, respectively. If, on the other hand, FIFO ordering is acceptable, the third question is whether you need blocking methods, as you usually will for producer-consumer problems (either because the consumers must handle an empty queue by waiting, or because you want to constrain demand on them by bounding the queue, and then producers must sometimes wait). If you don't need blocking methods or a bound on the queue size, choose the efficient and wait-free ConcurrentLinkedQueue.

2. The implementations of the Concurrent classes deserve a dedicated study of their source code.

If you do need a blocking queue, because your application requires support for producer-consumer cooperation, pause to think whether you really need to buffer data, or whether all you need is to safely hand off data between the threads. If you can do without buffering (usually because you are confident that there will be enough consumers to prevent data from piling up), then SynchronousQueue is an efficient alternative to the remaining FIFO blocking implementations, LinkedBlockingQueue and ArrayBlockingQueue.

Otherwise, we are finally left with the choice between these two. If you cannot fix a realistic upper bound for the queue size, then you must choose LinkedBlockingQueue, as ArrayBlockingQueue is always bounded. For bounded use, you will choose between the two on the basis of performance. Their performance characteristics in Figure 14.1 are the same, but these are only the formulae for sequential access; how they perform in concurrent use is a different question. As we mentioned above, LinkedBlockingQueue performs better on the whole than ArrayBlockingQueue if more than three or four threads are being serviced. This fits with the fact that the head and tail of a LinkedBlockingQueue are locked independently, allowing simultaneous updates of both ends. On the other hand, an ArrayBlockingQueue does not have to allocate new objects with each insertion. If queue performance is critical to the success of your application, you should measure both implementations with the benchmark that means the most to you: your application itself.

3. Investigate the reason for the claim highlighted above.









