The observed behavior is well defined and expected to occur, but one should not expect it to occur often.
Asio has a limited pool of strand implementations, and the default allocation strategy for strands is hashing; if a hash collision occurs, two strands will use the same implementation. Under a collision, the example simplifies to the following demo:
```cpp
#include <cassert>
#include <boost/asio.hpp>

int main()
{
  boost::asio::io_service io_service;
  boost::asio::io_service::strand strand1(io_service);
  // Have strand2 use the same implementation as strand1.
  boost::asio::io_service::strand strand2(strand1);
  int value = 0;
  auto handler1 = [&value, &strand1, &strand2]() {
    assert(strand1.running_in_this_thread());
    assert(strand2.running_in_this_thread());
    value = 1;

    // handler2 is queued into the strand and never invoked.
    auto handler2 = [&value]() { assert(false); };
    strand2.post(handler2);

    // handler3 is executed immediately.
    auto handler3 = [&value]() { value = 3; };
    strand2.dispatch(handler3);
    assert(value == 3);
  };
  // Enqueue handler1.
  strand1.post(handler1);
  // Run the event processing loop, executing handler1.
  assert(io_service.poll_one() == 1);
}
```
In the above example:

- `io_service.poll_one()` executes a single ready handler (`handler1`)
- `handler2` is never invoked
- `handler3` is invoked immediately within `strand2.dispatch()`, as `strand2.dispatch()` is invoked from within a handler where `strand2.running_in_this_thread()` returns `true`
There are various details contributing to the observed behavior. `io_service::poll_one()` will run the `io_service`'s event loop and, without blocking, execute at most one ready-to-run handler. Handlers executed immediately within the context of a `dispatch()` are never enqueued into the `io_service`, and are therefore not subject to `poll_one()`'s limit of invoking a single handler.
The `boost::asio::spawn(strand, function)` overload starts a stackful coroutine as-if by `strand.dispatch()`:

- if `strand.running_in_this_thread()` returns `false` for the caller, then the coroutine will be posted into the `strand` for deferred invocation
- if `strand.running_in_this_thread()` returns `true` for the caller, then the coroutine will be executed immediately
Discrete `strand` objects that use the same implementation still maintain the guarantees of a strand. Namely, concurrent execution will not occur and the order of handler invocation is well defined. When discrete `strand` objects are using discrete implementations, and multiple threads are running the `io_service`, then one may observe the discrete strands executing concurrently. However, when discrete `strand` objects use the same implementation, one will not observe concurrency even if multiple threads are running the `io_service`. This behavior is documented:

> The implementation makes no guarantee that handlers posted or dispatched through different strand objects will be invoked concurrently.
Asio has a limited pool of strand implementations. The current default is `193`, and it can be controlled by defining `BOOST_ASIO_STRAND_IMPLEMENTATIONS` to the desired number. This feature is noted in the Boost.Asio 1.48 release notes:

> Made the number of strand implementations configurable by defining `BOOST_ASIO_STRAND_IMPLEMENTATIONS` to the desired number.
By decreasing the pool size, one increases the chance that two discrete strands will use the same implementation. With the original code, if one were to set the pool size to `1`, then `strand1` and `strand2` would always use the same implementation, resulting in `val` always being `3` (demo).
The default strategy for allocating strand implementations is to use a golden-ratio hash. As a hashing algorithm is used, there is a potential for collisions, resulting in the same implementation being used for multiple discrete `strand` objects. By defining `BOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION`, one can change the allocation strategy to round-robin, preventing a collision from occurring until `BOOST_ASIO_STRAND_IMPLEMENTATIONS + 1` strand allocations have occurred. This feature is noted in the Boost.Asio 1.48 release notes:

> Added support for a new `BOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION` flag which switches the allocation of strand implementations to use a round-robin approach rather than hashing.
Given the above details, the following occurs when `1` is observed in the original code:

- `strand1` and `strand2` use discrete strand implementations
- `io_service::poll_one()` executes the single handler that was posted directly into `strand1`
- the handler posted into `strand1` sets `val` to `1`
- the handler posted into `strand2` is enqueued and never invoked
- as `strand2.running_in_this_thread()` returns `false` for the caller, `boost::asio::spawn()` posts the coroutine into `strand2` for deferred invocation, and it is never invoked

On the other hand, when `3` is observed:
- a hash collision occurs for `strand1` and `strand2`, resulting in them using the same underlying strand implementation
- `io_service::poll_one()` executes the single handler that was posted directly into `strand1`
- the handler that was posted into `strand1` sets `val` to `1`
- the handler posted into `strand2` is enqueued and never invoked
- the coroutine is immediately created and invoked within `boost::asio::spawn()`, setting `val` to `3`, as `strand2` can safely execute the coroutine while maintaining the guarantee of non-concurrent execution and order of handler invocation