Chapter 3: Sharing data between threads


3.1 Problems with sharing data between threads

If all shared data is read-only, there’s no problem, because the data read by one thread is unaffected by whether or not another thread is reading the same data. 

In concurrency, a race condition is anything where the outcome depends on the relative ordering of execution of operations on two or more threads; the threads race to perform their respective operations.

If you’re writing multithreaded programs, race conditions can easily be the bane of your life; a great deal of the complexity in writing software that uses concurrency comes from avoiding problematic race conditions.

There are several ways to deal with problematic race conditions. 

1. The simplest option is to wrap your data structure with a protection mechanism, to ensure that only the thread actually performing a modification can see the intermediate states where the invariants are broken.

2. Another option is to modify the design of your data structure and its invariants so that modifications are done as a series of indivisible changes, each of which preserves the invariants. This is generally referred to as lock-free programming and is difficult to get right.

3. Another way of dealing with race conditions is to handle the updates to the data structure as a transaction, just as updates to a database are done within a transaction.


3.2 Protecting shared data with mutexes

Before accessing a shared data structure, you lock the mutex associated with that data, and when you’ve finished accessing the data structure, you unlock the mutex.

                                        Protecting a list with a mutex

#include <list>
#include <mutex>
#include <algorithm>

std::list<int> some_list;
std::mutex some_mutex;

void add_to_list(int new_value)
{
    std::lock_guard<std::mutex> guard(some_mutex);
    some_list.push_back(new_value);
}

bool list_contains(int value_to_find)
{
    std::lock_guard<std::mutex> guard(some_mutex);
    return std::find(some_list.begin(), some_list.end(), value_to_find)
        != some_list.end();
}
Don’t pass pointers and references to protected data outside the scope of the lock, whether by returning them from a function, storing them in externally visible memory, or passing them as arguments to user-supplied functions.

                                      Accidentally passing out a reference to protected data

class some_data
{
    int a;
    std::string b;
public:
    void do_something();
};

class data_wrapper
{
private:
    some_data data;
    std::mutex m;
public:
    template<typename Function>
    void process_data(Function func)
    {
        std::lock_guard<std::mutex> l(m);
        func(data);  // passes "protected" data to a user-supplied function
    }
};

some_data* unprotected;

void malicious_function(some_data& protected_data)
{
    unprotected = &protected_data;  // stashes a pointer that outlives the lock
}

data_wrapper x;

void foo()
{
    x.process_data(malicious_function);
    unprotected->do_something();  // unprotected access, with no lock held!
}
                                An outline class definition for a thread-safe stack
#include <exception>
#include <memory>

struct empty_stack : std::exception
{
    const char* what() const throw();
};

template<typename T>
class threadsafe_stack
{
public:
    threadsafe_stack();
    threadsafe_stack(const threadsafe_stack&);
    threadsafe_stack& operator=(const threadsafe_stack&) = delete;
    void push(T new_value);
    std::shared_ptr<T> pop();
    void pop(T& value);
    bool empty() const;
};

                                A fleshed-out class definition for a thread-safe stack

#include <exception>
#include <memory>
#include <mutex>
#include <stack>

struct empty_stack : std::exception
{
    const char* what() const throw();
};

template<typename T>
class threadsafe_stack
{
private:
    std::stack<T> data;
    mutable std::mutex m;
public:
    threadsafe_stack() {}
    threadsafe_stack(const threadsafe_stack& other)
    {
        std::lock_guard<std::mutex> lock(other.m);
        data = other.data;  // copy in the constructor body, under the source's lock
    }
    threadsafe_stack& operator=(const threadsafe_stack&) = delete;
    void push(T new_value)
    {
        std::lock_guard<std::mutex> lock(m);
        data.push(new_value);
    }
    std::shared_ptr<T> pop()
    {
        std::lock_guard<std::mutex> lock(m);
        if (data.empty()) throw empty_stack();  // check for empty before trying to pop
        std::shared_ptr<T> const res(std::make_shared<T>(data.top()));  // allocate the return value before modifying the stack
        data.pop();
        return res;
    }
    void pop(T& value)
    {
        std::lock_guard<std::mutex> lock(m);
        if (data.empty()) throw empty_stack();
        value = data.top();
        data.pop();
    }
    bool empty() const
    {
        std::lock_guard<std::mutex> lock(m);
        return data.empty();
    }
};

Imagine that each of a pair of threads needs to lock both of a pair of mutexes to perform some operation, and each thread holds one mutex and is waiting for the other. Neither thread can proceed, because each is waiting for the other to release its mutex. This scenario is called deadlock, and it’s the biggest problem with having to lock two or more mutexes in order to perform an operation.

                             Using std::lock() and std::lock_guard in a swap operation

class some_big_object;
void swap(some_big_object& lhs, some_big_object& rhs);

class X
{
private:
    some_big_object some_detail;
    std::mutex m;
public:
    X(some_big_object const& sd) : some_detail(sd) {}
    friend void swap(X& lhs, X& rhs)
    {
        if (&lhs == &rhs)
            return;
        std::lock(lhs.m, rhs.m);  // lock both mutexes together, avoiding deadlock
        std::lock_guard<std::mutex> lock_a(lhs.m, std::adopt_lock);  // adopt_lock: the mutex is already locked
        std::lock_guard<std::mutex> lock_b(rhs.m, std::adopt_lock);
        swap(lhs.some_detail, rhs.some_detail);
    }
};

                                Thread-safe lazy initialization using std::call_once

std::shared_ptr<some_resource> resource_ptr;
std::once_flag resource_flag;

void init_resource()
{
    resource_ptr.reset(new some_resource);  // initialization runs exactly once
}

void foo()
{
    std::call_once(resource_flag, init_resource);  // concurrent callers wait until initialization is done
    resource_ptr->do_something();
}


Summary
In this chapter I discussed how problematic race conditions can be disastrous when sharing data between threads, and how to use std::mutex and careful interface design to avoid them. You saw that mutexes aren’t a panacea and have their own problems in the form of deadlock, though the C++ Standard Library provides a tool to help avoid that in the form of std::lock(). You then looked at some further techniques for avoiding deadlock, followed by a brief look at transferring lock ownership and issues surrounding choosing the appropriate granularity for locking. Finally, I covered the alternative data-protection facilities provided for specific scenarios, such as std::call_once() and boost::shared_mutex.