Concurrent Programming 1: Shared Data and Message Passing
Shared Data
The shared-memory model requires a mechanism to coordinate shared data access between the threads. This is commonly implemented using a synchronization mechanism; for example, a lock or a condition. A lock is a mechanism used to control access to data or a resource shared by multiple threads. A thread acquires a lock to a shared resource, performs operations on the resource, and then releases the lock, thereby enabling other threads to access the resource. A condition variable is a synchronization mechanism that causes a thread to wait until a specified condition occurs. Condition variables are commonly implemented using locks.
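To make the lock/condition distinction concrete, here is a minimal sketch in Python (chosen only for illustration; the concepts are language-independent). A watcher thread waits on a condition variable until a shared balance reaches a threshold, while a depositor thread updates the balance under the same lock and notifies waiters. The names `balance`, `watcher`, and `depositor` are invented for this example.

```python
import threading

balance = 0
cond = threading.Condition()      # a condition variable built on top of a lock
results = []

def watcher():
    with cond:                    # acquire the underlying lock
        while balance < 100:      # always re-check the predicate after waking
            cond.wait()           # releases the lock while waiting
        results.append(balance)

def depositor():
    global balance
    for amount in (40, 40, 40):
        with cond:                # updates happen under the same lock
            balance += amount
            cond.notify_all()     # wake waiters so they re-check the predicate

t1 = threading.Thread(target=watcher)
t2 = threading.Thread(target=depositor)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)                    # [120]
```

Note the `while` loop around `wait()`: a woken thread must re-check its condition, because a notification only signals that the condition *may* now hold.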
The Problems with Locks
Locks are one of the most common mechanisms used for controlling access to shared data. They enforce a mutual exclusion policy, thereby preventing concurrent access to the protected data/resource. Unfortunately, using locks to coordinate access to shared data introduces the possibility of deadlock, live-lock, or resource starvation—any of which can halt program execution. A deadlock is a situation in which two or more threads are each blocked, waiting to acquire a resource locked by another, thus preventing the blocked threads from finishing. An example of a deadlock condition is a circular wait. Figure 17-3 illustrates a deadlock condition that can occur between concurrent threads accessing shared data.
Figure 17-3. A deadlock condition between two threads accessing shared data
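The circular wait depicted above can be reproduced deterministically in a few lines of Python (a sketch for illustration only; the barrier and timeout are scaffolding added so the demo terminates instead of hanging forever). Each thread grabs one lock, then tries to grab the lock the other thread is holding:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
holding = threading.Barrier(2)   # ensures each thread holds its first lock
done = threading.Barrier(2)      # keeps the first lock held until both finish
results = []

def worker(first, second, name):
    with first:
        holding.wait()                     # both threads now hold one lock each
        # Each now waits for the lock the other holds: a circular wait.
        # In real code this acquire would block forever; the timeout
        # stands in for "deadlocked" so the demo can finish.
        got = second.acquire(timeout=0.5)
        results.append((name, got))
        if got:
            second.release()
        done.wait()

t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "t1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)    # neither thread can acquire its second lock
```

Both acquisition attempts fail: each thread holds exactly the resource the other needs, which is the circular-wait condition.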
A live-lock is a situation in which a thread is unable to make progress because it is continually responding to the actions of one or more other threads. A live-locked thread is not blocked; it spends all of its computing time responding to other threads in an attempt to resume normal execution.
Resource starvation is a condition in which a thread cannot gain regular access to a shared resource, typically because the resource is being used by other threads, and thus cannot execute as intended. This can happen when one or more other threads hold onto shared resources for an inordinate amount of time. In effect, live-lock can be viewed as a form of resource starvation.
As you develop larger and more complex concurrent programs that use shared data, the potential for your code to cause a deadlock condition increases. The following are some of the most common recommendations for preventing these conditions:
- Implement a total ordering on lock acquisition. Make sure that locks are acquired and released in a fixed order. This approach requires detailed knowledge of the threaded code, and may not even be feasible for third-party software.
- Prevent hold and wait conditions. Acquire all locks at once, atomically. This requires that any time any thread grabs a lock, it first acquires the global prevention lock. This approach eliminates the possibility of hold-and-wait scenarios, but potentially decreases concurrency and also requires detailed knowledge of the threaded code.
- Provide preemption. Use locks that provide a trylock or similar mechanism to grab a lock, if available, or return an appropriate result if not. This approach has the potential to cause live-lock, and it still requires detailed knowledge of how the code is using locks.
- Provide timeouts on waits. Use locks that provide a timeout feature, thereby preventing indefinite waits on a lock.
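The first recommendation, total ordering on lock acquisition, can be packaged as a small helper. The sketch below (Python, with an invented `ordered_locks` helper; ordering by object `id` is just one possible total order) sorts the locks before acquiring them, so every thread takes them in the same sequence regardless of argument order, ruling out circular waits:

```python
import threading
from contextlib import contextmanager

@contextmanager
def ordered_locks(*locks):
    # Impose a total ordering (here: the lock object's id) so that every
    # thread acquires the same locks in the same sequence -- no circular wait.
    ordered = sorted(locks, key=id)
    for lock in ordered:
        lock.acquire()
    try:
        yield
    finally:
        for lock in reversed(ordered):
            lock.release()

a, b = threading.Lock(), threading.Lock()
total = 0

def worker(first, second):
    global total
    for _ in range(10_000):
        with ordered_locks(first, second):   # safe in either argument order
            total += 1

t1 = threading.Thread(target=worker, args=(a, b))
t2 = threading.Thread(target=worker, args=(b, a))
t1.start(); t2.start()
t1.join(); t2.join()
print(total)   # 20000
```

Note that the two workers name the locks in opposite orders, the exact pattern that caused the deadlock earlier, yet the helper neutralizes it by re-sorting.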
Message Passing
In the message-passing model, state is not shared; instead, the threads communicate by exchanging messages. This approach enables threads to both synchronize and communicate information through message exchanges. Message passing avoids the issues surrounding mutual exclusion, and it maps naturally to multicore, multiprocessor systems. Message passing can be used to perform both synchronous and asynchronous communication. With synchronous message passing, the sender and receiver are directly linked; both block while the message exchange is performed. Asynchronous message passing utilizes queues for message transfer, as shown in Figure 17-4.
Figure 17-4. Message passing using queues
Messages are not sent directly between threads, but rather are exchanged through message queues. Hence, the sender and receiver are decoupled, and the sender does not block when it posts a message to the queue. Asynchronous message passing can serve as the foundation for concurrent programming; in fact, the next section covers several frameworks that do just that.
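The queue-based exchange in Figure 17-4 can be sketched with Python's thread-safe `queue.Queue` (illustrative only; the sentinel convention of sending `None` to mark end-of-stream is an assumption of this example, not a requirement of the model):

```python
import queue
import threading

SENTINEL = None                 # assumed convention: None marks end-of-stream
messages = queue.Queue()        # thread-safe FIFO decouples sender and receiver
received = []

def consumer():
    while True:
        msg = messages.get()    # blocks until a message is available
        if msg is SENTINEL:
            break
        received.append(msg * 2)

worker = threading.Thread(target=consumer)
worker.start()

for n in range(5):
    messages.put(n)             # producer does not block on an unbounded queue
messages.put(SENTINEL)          # tell the consumer to shut down
worker.join()
print(received)                 # [0, 2, 4, 6, 8]
```

Notice that no lock appears anywhere in the code: the queue itself serializes access, which is exactly how message passing sidesteps the mutual-exclusion problems described earlier.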