
In an ideal async program, every event loop is always occupied, with zero downtime between receiving data, polling, and executing the resulting action.

My program listens on an array of ports; the polling and movement of data into a queue happen on a single async core (A). A second async core (B) takes the data from that queue and processes it, and a third async core (C) runs background subroutines. A, B, and C each run on a separate thread.

Let's suppose there is a massive load of data streaming in, and core B becomes overloaded with pending work (effectively "lag" for the end user). What are the common ways to detect this overload, and if an overload is detected, should I spin up another async core (D) to work alongside B?
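For reference, the A → queue → B half of the pipeline described above can be sketched with plain threads and a channel standing in for the async runtimes; `run_pipeline` and the packet type are illustrative, not part of the actual program:

```rust
use std::sync::mpsc;
use std::thread;

// Core A pushes packets into the queue; core B drains and counts them.
// A bounded `sync_channel` could be used instead to apply backpressure.
fn run_pipeline(n_packets: u32) -> u32 {
    let (tx, rx) = mpsc::channel::<u32>();

    // Core A: polls for data and moves it into the queue.
    let poller = thread::spawn(move || {
        for packet in 0..n_packets {
            tx.send(packet).expect("queue closed");
        }
        // `tx` is dropped here, which closes the queue.
    });

    // Core B: takes data from the queue and processes it.
    let worker = thread::spawn(move || {
        let mut processed = 0;
        for _packet in rx {
            // ... real work on the packet would happen here ...
            processed += 1;
        }
        processed
    });

    poller.join().unwrap();
    worker.join().unwrap()
}

fn main() {
    assert_eq!(run_pipeline(5), 5);
    println!("processed 5 packets");
}
```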

edited by Shepmaster
asked by Thomas Braun
    Some simple ways to detect overload might be recording how long items have been sitting in the queue (meaning they aren't being processed), or putting a limit on the number of items allowed in the queue (which I don't like as much). From the architecture you describe, you are going to need more "workers" when core B becomes overloaded, so it would be ideal to spin up more workers similar to B when you've detected an overload and bring them down when you are under less load. I don't know the Rust specifics, but that is how I would generally approach the problem. Cheers. – Sam Orozco Apr 09 '19 at 23:47
    Nice, yeah, I could just wrap the packets with a timestamp, and for each packet that gets processed, check the delta-t values. If delta-t is increasing, then the number of cores can increase. Thanks, Sam. – Thomas Braun Apr 10 '19 at 00:59
    I think this question is too broad. There may be many ways to detect overload/lag, not all of which apply in any given situation, and the second question is essentially unanswerable -- whether adding another core makes your program faster or slower depends not just on the workload but on how the program is designed to parallelize it, and even things like cache line size and threading model. If you're trying to optimize a real program, the best way to know is to measure it; if you're just looking for general tips/rules of thumb, I'd suggest asking on users.rust-lang.org instead. – trent Apr 10 '19 at 13:53
  • @trentcl Indeed, I knew it was. But my confusion about the subject at the time of posting was spread across the entire text; sometimes the underlying concept implied by the whole idea matters more when asking a question than one very specific question. – Thomas Braun Apr 11 '19 at 12:11
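The timestamp idea from the comments can be sketched like this; `Timestamped`, `is_overloaded`, and the 100 ms budget are hypothetical names and values chosen for illustration:

```rust
use std::collections::VecDeque;
use std::time::{Duration, Instant};

// Hypothetical wrapper pairing each packet with its enqueue time, so the
// worker can compute how long the item sat in the queue (the "delta-t"
// from the comments).
struct Timestamped<T> {
    enqueued_at: Instant,
    payload: T,
}

// Returns true when the average queue latency of recent items exceeds
// the threshold: a signal that core B is overloaded and another worker
// may be worth spinning up.
fn is_overloaded(recent_latencies: &VecDeque<Duration>, threshold: Duration) -> bool {
    if recent_latencies.is_empty() {
        return false;
    }
    let total: Duration = recent_latencies.iter().sum();
    total / recent_latencies.len() as u32 > threshold
}

fn main() {
    let mut latencies: VecDeque<Duration> = VecDeque::new();

    // A freshly dequeued item: its delta-t is near zero.
    let item = Timestamped { enqueued_at: Instant::now(), payload: 42u32 };
    let _ = item.payload;
    latencies.push_back(item.enqueued_at.elapsed());

    // Simulated delta-t values from a backlogged queue.
    latencies.push_back(Duration::from_millis(400));
    latencies.push_back(Duration::from_millis(600));

    // Average latency is well above a 100 ms budget: report overload.
    assert!(is_overloaded(&latencies, Duration::from_millis(100)));
}
```

Keeping only a sliding window of recent latencies (hence the `VecDeque`) avoids an old burst permanently skewing the average.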

0 Answers