This may seem a little crazy, but it's an approach I'm considering as part of a larger library, if I can be reasonably certain that it's not going to cause weird behavior.
The approach: run async user code with a SynchronizationContext that dispatches continuations to a thread pool. The user code would look something like:
async void DoSomething()
{
    int someState = 2;
    await DoSomethingAsync();
    someState = 4;
    await DoSomethingElseAsync();
    // someState guaranteed to be 4?
}
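For concreteness, here is a minimal sketch of the kind of SynchronizationContext I have in mind (the class name is just a placeholder, not from an existing library): every continuation posted after an await is handed to the thread pool, so consecutive continuations may resume on different physical threads.

using System.Threading;

// Sketch only: dispatches every posted continuation to the thread pool.
sealed class ThreadPoolDispatchingContext : SynchronizationContext
{
    public override void Post(SendOrPostCallback d, object state)
    {
        // Each await continuation in the user code lands here and is
        // queued to the pool, potentially resuming on a different thread.
        ThreadPool.QueueUserWorkItem(_ => d(state));
    }

    public override void Send(SendOrPostCallback d, object state)
    {
        // Synchronous dispatch: run inline for simplicity in this sketch.
        d(state);
    }
}

The user code would run with this context installed, e.g.:

SynchronizationContext.SetSynchronizationContext(new ThreadPoolDispatchingContext());
DoSomething(); // each await resumes via Post, i.e. on some pool thread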
I'm not certain whether access to someState would be thread-safe. While the code runs in a single logical "thread", so the operations are totally ordered, it could still be split across multiple physical threads under the hood. If my understanding is correct, the ordering ought to be safe on x86, and since the variable isn't shared concurrently I shouldn't need to worry about compiler optimizations and so on.
More importantly, though, I'm concerned about whether this is guaranteed to be thread-safe under the ECMA or CLR memory models.
I'm fairly certain I'd need to insert a memory barrier before executing each queued piece of work, but I'm not totally confident in that reasoning (and it's possible the approach is unworkable for entirely separate reasons).
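If a barrier is needed, I assume it would go in the dispatch path, something like the following (again just a sketch; the explicit fence may well be redundant given the thread pool's own internal synchronization):

public override void Post(SendOrPostCallback d, object state)
{
    ThreadPool.QueueUserWorkItem(_ =>
    {
        // Assumption: a full fence before running the continuation so that
        // writes made before the previous suspension point are visible here.
        Thread.MemoryBarrier();
        d(state);
    });
}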