
I have a web socket server on boost::beast, the simplified code is below:

Server.h

class Server
{
public:
    using WorkerData = std::pair<std::size_t, boost::asio::ip::tcp::socket*>;
    using StartAcceptCallback = std::function<WorkerData()>;
    using AcceptCallback = std::function<void(tl::expected<std::size_t, std::string>)>;

    Server(boost::asio::io_context& ioContext, const std::string& host, unsigned short port)
        : ioContext_(ioContext), acceptor_(ioContext), isPendingShutdown_(false)
    {
        // Configure and start listening
    }

    void run()
    {
        accept();
    }

    void pendingOnShutdown()
    {
        isPendingShutdown_ = true;
    }

    void shutdown()
    {
        acceptor_.cancel();
    }

    void accept()
    {
        // This callback finds the lowest-loaded worker and returns a pointer to its socket plus the worker index
        const WorkerData workerData = startAcceptCallback_();
        acceptor_.async_accept(*workerData.second,
                            boost::beast::bind_front_handler(&Server::onAccept, this, workerData.first));
    }

    void onAccept(std::size_t workerIndex, boost::beast::error_code err)
    {
        if(isPendingShutdown_)
        {
            return;
        }

        if(err)
        {
            if(err != boost::asio::error::operation_aborted)
            {
                acceptCallback_(tl::unexpected{"Server::onAccept(...) error: " + err.message()});
            }
        }
        else
        {
            // Notify the worker that new connection has been accepted
            acceptCallback_(tl::expected<std::size_t, std::string>{tl::in_place, workerIndex});
        }

        accept();
    }

private:
    boost::asio::io_context& ioContext_;
    boost::asio::ip::tcp::acceptor acceptor_;
    StartAcceptCallback startAcceptCallback_;
    AcceptCallback acceptCallback_;
    bool isPendingShutdown_;
};

Worker.h

class Worker
{
public:
    using Socket = boost::asio::ip::tcp::socket;
    using IoContext = boost::asio::io_context;

    Worker()
        : socket_(ioContext_)
    {
    }

    Socket* getSocket()
    {
        return &socket_;
    }

    void run()
    {
        worker_ = std::make_unique<std::thread>(std::bind(&Worker::runImpl, this));
        worker_->detach();
    }

    void handleNewConnection()
    {
        // From this point all the operation on this socket will be performed on worker's thread
        ioContext_.post([this, socket = std::move(this->socket_)] {
            // Create new session with newly accepted socket
            // and start asynchronous read/write operations
        });
    }

    void runImpl()
    {
        // Create payload_ to keep the io_service working
        // Run io_service
    }

private:
    // Note: ioContext_ must be declared before socket_, because members are
    // initialized in declaration order and socket_(ioContext_) needs a live context
    IoContext ioContext_;
    Socket socket_;
    std::unique_ptr<IoContext::work> payload_;
    std::unique_ptr<std::thread> worker_;
};

The boost::asio::signal_set handler running on the main thread does something like this:

// Prevent access to the workers pool from main thread first
this->webSocketServer_->pendingOnShutdown();
// Destroys all workers
this->workersPool_->shutdown();
// Finally cancel accepting of new connections and stop the application
this->webSocketServer_->shutdown();

I have a boost::asio::ip::tcp::acceptor working asynchronously on the main thread and a pool of Worker objects. Each Worker performs asynchronous IO operations and some other work, runs on its own thread and has its own boost::asio::io_context. The server accepts new connections only into the socket provided by the lowest-loaded Worker instance (the socket works on the worker's boost::asio::io_context) and then delegates the IO operations to that worker, so no synchronization is needed: the workers have no shared data.

Suppose I want to shut down the application. I have to stop accepting new connections before destroying the pool of Worker objects. In my code I call the pendingOnShutdown() method to prevent accepting, and then I begin to shut down the workers. After all the workers are destroyed, I call the acceptor::cancel() method to cancel the current async_accept(...) operation and stop the boost::asio::io_context which is running on the main thread.

But what will happen if I call the acceptor::cancel() method immediately, while the sessions in the Worker instances are still alive? The official documentation describes it as follows:

This function causes all outstanding asynchronous connect, send and receive operations to finish immediately, and the handlers for cancelled operations will be passed the boost::asio::error::operation_aborted error.

Does this method cancel the asynchronous operations on the sockets which are working on an external boost::asio::io_context? Can I call the acceptor::cancel() method immediately, without calling pendingOnShutdown() first? I have a boost::asio::signal_set working on the main thread, so there is no need to worry that the application will close before the pool of Worker objects is destroyed. I'm using Boost 1.71.

phoenix76
  • As general advice, you should not mix different `io_context`s; besides, there is no known need for more than one `io_context`. – Superlokkus Mar 16 '20 at 12:18
  • @Superlokkus The problem is that `Worker` objects contain a `Redis` client, which can't be used from different threads even with explicit synchronization. And a single thread is not enough to provide the required performance. So in this case the only possible way is to have a bunch of websocket connections communicating with the `Redis` client on the same thread. I can't just start a single `boost::asio::io_context` multiple times from different threads. – phoenix76 Mar 16 '20 at 13:08
  • Why would you start an `io_context` more than once? And I guess by start you mean calling execute on the io_context, don't you? Asio isn't designed for keeping control of thread affinity; using multiple `io_context`s just makes it worse. So now I think we also know that this is an XY problem: it's about how to use your Redis client concurrently in a nice fashion. – Superlokkus Mar 16 '20 at 13:14
  • By start I mean calling `io_context::run()`. The usual way is to have a pool of threads where each thread calls `io_context::run()`, and then finally to call `io_context::run()` from the main thread. But I can't do this for the reasons described above. In my case I can't modify the `Redis` client and have to integrate it as is. – phoenix76 Mar 16 '20 at 13:48
  • ["Usual way is to have a pool of threads where each thread calls io_context::run() and then finally call io_context::run() ... " is wrong](https://stackoverflow.com/questions/60715555/io-context-run-in-a-separate-thread-blocks) – Jean Davy Mar 17 '20 at 09:58
  • "Worker objects contain Redis client, which can't be used from different threads even using explicit synchronization ..." I think you are wrong: Redis is single-threaded, but you can create multiple instances, one instance per thread with TLS. [Why does it make sense to use asynchronous clients for Redis?](https://stackoverflow.com/questions/27342508/why-does-it-make-sense-to-use-asynchronous-clients-for-redis) – Jean Davy Mar 17 '20 at 10:17
