boost::asio Connection Management (8)


As of the previous article, we have a complete single-threaded version. If the concurrency requirements are modest, a single thread plus asynchronous I/O is already enough.

But to support heavy concurrency, we naturally want to exploit as many of the server's CPUs and cores as possible.

First, let's convert the existing project into a CMake project.

The top-level CMakeLists.txt:

cmake_minimum_required(VERSION 2.8)
project(TcpTemplate)
add_subdirectory(src bin)

The CMakeLists.txt in the src directory:

cmake_minimum_required(VERSION 2.8)
set(CMAKE_BUILD_TYPE Debug)
set(PROJECT_INCLUDE_DIR ../include)
find_package(Boost COMPONENTS system filesystem thread REQUIRED)
include_directories(${Boost_INCLUDE_DIR} ${PROJECT_INCLUDE_DIR})
AUX_SOURCE_DIRECTORY(${CMAKE_SOURCE_DIR}/src CPP_LIST1)
AUX_SOURCE_DIRECTORY(${CMAKE_SOURCE_DIR}/src/core CPP_LIST2)
add_executable(service ${CPP_LIST1} ${CPP_LIST2})
target_link_libraries(service ${Boost_LIBRARIES})
add_definitions(-Wall)
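With this layout, a standard out-of-source build should work: create a build directory, run cmake .. from inside it, then make. Because of the add_subdirectory(src bin) line, the service binary ends up under bin inside the build tree.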

Now look at main.cc. Not much has changed, except that the Server class code has been split out.

#include <iostream>
#include "core/server.h"

using namespace std;

int main(int argc, char** argv) {
    try {
        io_service iosev;
        tcp::endpoint listen_endpoint(tcp::v4(), 8888);
        Server server(iosev, listen_endpoint, 10);
        server.Run();
    } catch (std::exception const& ex) {
        cout << "Exception: " << ex.what() << "\n";
    }
}

Both the Server class and the Connection class live under the core directory. Note that, for efficiency, the Connections class that used to manage every connection centrally is gone; each connection now either closes itself, or is closed along with everything else when io_service.stop is called.
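The reason this is safe is visible in connection.cc below: every async operation binds shared_from_this() into its handler, so the pending handler co-owns the Connection. Here is a minimal standalone sketch of mine illustrating that ownership rule, using a hypothetical Session type:

#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/function.hpp>
#include <boost/shared_ptr.hpp>
#include <iostream>

// Toy model (my illustration, not the article's code): a handler bound with
// shared_from_this() co-owns the object, so the object lives exactly as long
// as some pending operation still references it -- no registry needed.
struct Session : boost::enable_shared_from_this<Session> {
    ~Session() { std::cout << "session destroyed" << std::endl; }
    boost::function<void()> MakePendingHandler() {
        // The shared_ptr captured by bind keeps *this alive.
        return boost::bind(&Session::OnEvent, shared_from_this());
    }
    void OnEvent() { std::cout << "event handled" << std::endl; }
};

int main() {
    boost::function<void()> pending;
    {
        boost::shared_ptr<Session> s(new Session);
        pending = s->MakePendingHandler();
    }           // the original shared_ptr is gone, yet no destructor runs
    pending();  // prints "event handled"
    pending.clear();  // dropping the last owner prints "session destroyed"
    return 0;
}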

The server.h file:

#ifndef CORE_SERVER_H_
#define CORE_SERVER_H_

#include <boost/asio.hpp>
#include "core/connection.h"

using namespace boost;
using boost::system::error_code;
using namespace boost::asio;
using ip::tcp;

// Create a thread pool for io_service.
// Run the io_service to accept new incoming TCP connections and handle the
// I/O events.
class Server {
 public:
  Server(io_service& io_service, tcp::endpoint const& listen_endpoint,
         size_t threads_number);

  // Create a thread pool for io_service and launch io_service.
  void Run();

  void AfterAccept(shared_ptr<Connection>& connection, error_code const& ec);

 private:
  void Stop();

 private:
  io_service& io_;
  boost::asio::signal_set signals_;
  tcp::acceptor acceptor_;
  size_t thread_pool_size_;
};

#endif  // CORE_SERVER_H_

The server.cc file:

#include "core/server.h"

#include <iostream>  // added: cout/endl are used below
#include <vector>

#include <boost/bind.hpp>
#include <boost/thread/thread.hpp>

#include "core/connection.h"

using namespace boost;

Server::Server(io_service& s, tcp::endpoint const& listen_endpoint,
               size_t threads_number)
    : io_(s),
      signals_(s),
      acceptor_(io_, listen_endpoint),
      thread_pool_size_(threads_number) {
  signals_.add(SIGINT);
  signals_.add(SIGTERM);
#if defined(SIGQUIT)
  signals_.add(SIGQUIT);
#endif
  signals_.async_wait(bind(&Server::Stop, this));

  shared_ptr<Connection> c(new Connection(io_));
  acceptor_.async_accept(c->socket, bind(&Server::AfterAccept, this, c, _1));
}

void Server::AfterAccept(shared_ptr<Connection>& c, error_code const& ec) {
  // Check whether the server was stopped by a signal before this completion
  // handler had a chance to run.
  if (!acceptor_.is_open()) {
    cout << "acceptor is closed" << endl;
    return;
  }
  if (!ec) {
    c->StartJob();
    shared_ptr<Connection> c2(new Connection(io_));
    acceptor_.async_accept(c2->socket,
                           boost::bind(&Server::AfterAccept, this, c2, _1));
  }
}

void Server::Run() {
  // Create a pool of threads to run all of the io_services.
  vector<shared_ptr<thread> > threads;
  for (size_t i = 0; i < thread_pool_size_; ++i) {
    shared_ptr<thread> t(new thread(bind(&io_service::run, &io_)));
    threads.push_back(t);
  }
  // Wait for all threads in the pool to exit.
  for (std::size_t i = 0; i < threads.size(); ++i) {
    threads[i]->join();
  }
}

void Server::Stop() {
  cout << "stopping" << endl;
  acceptor_.close();
  io_.stop();
}

The implementation of Run has grown more complex: it now creates a thread pool in which every thread is bound to io_service::run, meaning each thread runs that function and does not return until the io_service is stopped.

The main thread then waits until all of the pool threads have exited before exiting itself.
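Incidentally, the pool size is just the constructor argument (10 in main.cc above). One variation worth considering, which is my suggestion rather than something the original code does, is to derive it from the machine, e.g. with a hypothetical variant of main.cc:

#include <boost/thread/thread.hpp>
#include "core/server.h"

// Hypothetical variant of main.cc: size the pool from the hardware instead
// of hard-coding 10. hardware_concurrency() may return 0 when unknown.
int main() {
    io_service iosev;
    tcp::endpoint listen_endpoint(tcp::v4(), 8888);
    size_t n = boost::thread::hardware_concurrency();
    Server server(iosev, listen_endpoint, n > 0 ? n : 2);
    server.Run();
}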

Stop, on the other hand, has been simplified, since the Connections class is gone.


Now for the Connection class implementation. The connection.h file:

#ifndef CORE_CONNECTION_H_
#define CORE_CONNECTION_H_

#include <set>
#include <algorithm>
#include <vector>

#include <boost/asio.hpp>
#include <boost/enable_shared_from_this.hpp>

using namespace boost::asio;
using ip::tcp;
using boost::system::error_code;
using namespace boost;
using namespace std;

class Connection : public boost::enable_shared_from_this<Connection> {
 public:
  Connection(io_service& s);
  ~Connection();

  void StartJob();
  void CloseSocket();
  void AfterReadChar(error_code const& ec);

 public:
  tcp::socket socket;

 private:
  vector<char> read_buffer_;
  /// Strand to ensure the connection's handlers are not called concurrently.
  boost::asio::io_service::strand strand_;
};

#endif  // CORE_CONNECTION_H_

The key addition here is the strand_ member. Let me explain:

The server has just created a thread pool in which every thread calls io_service::run. Under boost::asio's rules, all of these threads have an equal chance of being chosen to invoke a connection's asynchronous I/O completion handlers, i.e. my After... functions. Since this is a multithreaded environment, a connection's outstanding completion handlers (one or more) could be invoked by several threads at the same time. To guard against inconsistent state, a strand provides a promise: if we pass each bind result through it for one extra layer of wrapping before handing it to io_service, then at any point in time at most one thread will be executing one of those callbacks; in effect, they are strung together in time order on a single rope (one strand object).
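To make that promise concrete, here is a small standalone toy of mine (not part of the article's server): 100000 unsynchronized increments are posted through a single strand while four threads drain the queue, mirroring what Server::Run sets up. The strand alone keeps the counter race-free.

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread/thread.hpp>
#include <iostream>

int counter = 0;  // deliberately unprotected: the strand is the only guard

void Increment() { ++counter; }

int main() {
    boost::asio::io_service io;
    boost::asio::io_service::strand serializer(io);
    // Queue 100000 increments; every handler is wrapped by the same strand,
    // so no two of them ever run concurrently.
    for (int i = 0; i < 100000; ++i)
        io.post(serializer.wrap(&Increment));
    // Four threads compete for the handlers, just like Server::Run.
    boost::thread_group pool;
    for (int i = 0; i < 4; ++i)
        pool.create_thread(boost::bind(&boost::asio::io_service::run, &io));
    pool.join_all();
    // Always prints 100000; without the wrap it could print less.
    std::cout << counter << std::endl;
    return 0;
}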


The connection.cc code:

#include "core/connection.h"

#include <iostream>  // added: cout/endl are used below

#include <boost/bind.hpp>

Connection::Connection(io_service& s)
    : socket(s), read_buffer_(1, 0), strand_(s) {
}

Connection::~Connection() {
    cout << "~Connection" << endl;
}

void Connection::StartJob() {
    cout << "the new connection object is starting now." << endl;
    async_read(socket, buffer(read_buffer_),
               strand_.wrap(bind(&Connection::AfterReadChar,
                                 shared_from_this(), _1)));
}

void Connection::CloseSocket() {
    cout << "closing the socket" << endl;
    socket.shutdown(tcp::socket::shutdown_both);
    socket.close();
}

void Connection::AfterReadChar(error_code const& ec) {
    if (ec) {
        cout << ec.message() << endl;
        return;
    }
    char x = read_buffer_[0];
    if (x == 'a') {
        cout << "correct data received" << endl;
        async_read(socket, buffer(read_buffer_),
                   strand_.wrap(bind(&Connection::AfterReadChar,
                                     shared_from_this(), _1)));
    } else {
        cout << "wrong data received, char is:" << (int) x << endl;
        CloseSocket();
    }
}

That's it: we now have thread-pool concurrency, while calls into any single Connection object are guaranteed to be serialized.
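For a quick manual test, something like the following hypothetical client (my sketch, not part of the project) should make the server print "correct data received" twice, then "wrong data received, char is:98" and close the connection:

#include <boost/asio.hpp>

using boost::asio::ip::tcp;

// Connects to the server above and sends two valid 'a' bytes followed by an
// invalid 'b' (ASCII 98), which triggers the server's close path.
int main() {
    boost::asio::io_service io;
    tcp::socket s(io);
    s.connect(tcp::endpoint(
        boost::asio::ip::address::from_string("127.0.0.1"), 8888));
    boost::asio::write(s, boost::asio::buffer("aab", 3));
    return 0;
}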



