
I'm trying to use boost::beast to implement a web service providing some REST APIs. These APIs are CPU-heavy, with almost no disk or DB I/O. My goal is to optimize for latency with OK throughput. Should I implement them the sync way or the async way?

Thanks!

NonStatic
  • How are you going to benefit from async if you are not going to sleep on I/O? Just open as many workers as you have CPUs and run synchronously, IMO – kreuzerkrieg Feb 27 '19 at 05:23
  • It really depends on your use case and code. Async does not prohibit concurrency, since you would just have multiple threads running the `boost::asio::io_context`, so your handlers could run concurrently (see the sketch after these comments). Strands are a nice synchronization method, and following this approach you are using fast user-land threads. But you could also just run boost-beast on one thread and use one of many concurrency patterns. There is no simple answer; I recommend "C++ Concurrency in Action" by Anthony Williams. But I bet you would be fine just using `boost::asio::dispatch`, maybe on a different io_context – Superlokkus Feb 27 '19 at 12:19
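
As a rough illustration of the comments above (a minimal sketch, assuming Boost 1.70+ for `boost::asio::make_strand`; on older versions `boost::asio::io_context::strand` plays the same role), this is what "one io_context, one worker thread per CPU, handlers serialized through a strand" could look like:

```cpp
#include <boost/asio.hpp>
#include <algorithm>
#include <thread>
#include <vector>

int main() {
    boost::asio::io_context ioc;

    // Handlers that touch shared state would be bound to this strand,
    // which serializes them even though run() is called from many threads.
    auto strand = boost::asio::make_strand(ioc);
    (void)strand; // sessions/acceptors would use it; unused in this skeleton

    // Keep run() from returning before any work has been posted.
    auto guard = boost::asio::make_work_guard(ioc);

    // ... set up boost::beast acceptors/sessions against `ioc` here ...

    // One worker per hardware core, all servicing the same io_context.
    std::vector<std::thread> workers;
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    for (unsigned i = 0; i < n; ++i)
        workers.emplace_back([&ioc] { ioc.run(); });

    for (auto& t : workers)
        t.join();
}
```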

3 Answers


If you want timeouts, you have no choice but to use the asynchronous APIs provided by Boost.Beast / Boost.Asio / Asio / Networking TS.
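
Not from the answer itself, but as an illustration: with Boost 1.70+ the asynchronous API lets you attach a timeout to each operation via `boost::beast::tcp_stream` (a minimal sketch, with illustrative names and the handler body left as a placeholder):

```cpp
#include <boost/beast.hpp>
#include <chrono>

namespace beast = boost::beast;
namespace http  = boost::beast::http;

// Somewhere inside a session object that owns stream_, buffer_ and req_
// (the member names are illustrative):
void do_read(beast::tcp_stream& stream_,
             beast::flat_buffer& buffer_,
             http::request<http::string_body>& req_)
{
    // The next asynchronous operation on the stream must complete
    // within 30 seconds, otherwise it fails with beast::error::timeout.
    stream_.expires_after(std::chrono::seconds(30));

    http::async_read(stream_, buffer_, req_,
        [](beast::error_code ec, std::size_t /*bytes_transferred*/) {
            if (ec == beast::error::timeout)
                return;          // peer was too slow; drop the connection
            // ... dispatch to the CPU-heavy handler ...
        });
}
```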

Vinnie Falco
  • That's not strictly true. You could conceivably compose an asynchronous operation with a timer (which creates the timeout) so that it blocks, by calling io_context.run(), until the operation completes. This way you have a synchronous API with a timeout (see the sketch after these comments). – Martijn Otto Feb 28 '19 at 15:49
  • When I say "asynchronous API" I mean the interfaces provided by Networking TS / Boost.Asio / Asio. I have updated the answer, thanks! – Vinnie Falco Feb 28 '19 at 17:11
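
A rough sketch of the composition Martijn Otto describes: pair the asynchronous read with a `steady_timer`, then block on `io_context::run()` so the caller sees a synchronous call with a timeout. All names here are illustrative, and the io_context must not be running on another thread while this is called:

```cpp
#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include <chrono>

namespace asio  = boost::asio;
namespace beast = boost::beast;
namespace http  = boost::beast::http;

// Blocking HTTP read with a timeout, composed from asynchronous pieces.
// `sock` must be a connected socket created from `ioc`.
http::request<http::string_body>
read_with_timeout(asio::io_context& ioc,
                  asio::ip::tcp::socket& sock,
                  std::chrono::seconds timeout)
{
    beast::flat_buffer buffer;
    http::request<http::string_body> req;
    beast::error_code result;

    asio::steady_timer timer(ioc);
    timer.expires_after(timeout);
    timer.async_wait([&](beast::error_code ec) {
        if (!ec)            // the timer really expired (was not cancelled)
            sock.cancel();  // fail the pending read with operation_aborted
    });

    http::async_read(sock, buffer, req,
        [&](beast::error_code ec, std::size_t) {
            result = ec;
            timer.cancel(); // the read won the race; stop the timer
        });

    ioc.restart();
    ioc.run();              // block the caller until both handlers have run

    if (result)
        throw beast::system_error{result};
    return req;
}
```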

You can test it both ways and see which works best in your use case. Then use concurrent design patterns to optimize if the performance is not good enough.

I guess you should define a concrete measure of what you mean by "OK throughput" and then benchmark it on your system.
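
For example (purely illustrative; `call_api()` is a placeholder for one request against your service), you could record per-request latency and report percentiles, so "OK throughput" and "optimize for latency" become numbers you can compare between a sync and an async build:

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// Placeholder for one request against the service under test;
// replace with a real client call (e.g. a boost::beast HTTP client).
static void call_api() {
    std::this_thread::sleep_for(std::chrono::milliseconds(1));
}

int main() {
    using clock = std::chrono::steady_clock;
    std::vector<double> latencies_ms;

    for (int i = 0; i < 10000; ++i) {
        auto t0 = clock::now();
        call_api();
        auto t1 = clock::now();
        latencies_ms.push_back(
            std::chrono::duration<double, std::milli>(t1 - t0).count());
    }

    std::sort(latencies_ms.begin(), latencies_ms.end());
    std::printf("p50 = %.2f ms  p99 = %.2f ms\n",
                latencies_ms[latencies_ms.size() / 2],
                latencies_ms[latencies_ms.size() * 99 / 100]);
}
```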

Damian

Generally speaking, when you are doing I/O-intensive work with little to no CPU overhead, non-blocking or async is best. When the operations are CPU-intensive, however, a threaded model tends to make more sense.

The reason for this is simple: It's usually a bad idea to block the event loop for longer periods of time - as would happen when using an asynchronous model for CPU-heavy computations.

When you start blocking the event loop, things like timers don't behave like they should, since they can only trigger once you return control to the event loop. This is usually not what you want.
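
One common way to get the best of both (a sketch of the general pattern, not something prescribed by the answer): keep the `io_context` free for I/O and timers, and hand the CPU-heavy work to a separate `boost::asio::thread_pool`, posting the result back when it is done:

```cpp
#include <boost/asio.hpp>
#include <algorithm>
#include <thread>

// Stand-in for the expensive request handler.
static int heavy_computation(int input) { return input * input; }

int main() {
    boost::asio::io_context ioc;   // services network I/O and timers
    boost::asio::thread_pool cpu_pool(
        std::max(1u, std::thread::hardware_concurrency()));
    auto guard = boost::asio::make_work_guard(ioc);

    // Hand the expensive part to the pool so the event loop stays responsive,
    // then marshal the result back onto the io_context to send the response.
    boost::asio::post(cpu_pool, [&ioc, &guard] {
        int result = heavy_computation(42);
        boost::asio::post(ioc, [result, &guard] {
            // ... write the HTTP response using `result` here ...
            (void)result;
            guard.reset();         // let run() return in this small example
        });
    });

    ioc.run();
    cpu_pool.join();
}
```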

Martijn Otto