In an ideal async program, every event loop stays fully occupied, with zero downtime between polling for data and executing the corresponding action.
My program listens on an array of ports; the polling and the movement of incoming data into a queue happen on a single async core (A). A second async core (B) takes data from that queue and processes it, and a third async core (C) runs background subroutines. A, B, and C each run on their own thread.
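For reference, here is a stripped-down sketch of that layout in Python asyncio terms (my real code is more involved; `read_from_some_port` and `handle_packet` below are stand-ins for the actual polling and processing):

```python
import asyncio
import threading

ready = threading.Event()   # signals that loop B and its queue exist
loop_b = None               # event loop owned by core B's thread
work_queue = None           # asyncio.Queue living on loop B

async def read_from_some_port() -> bytes:
    """Stand-in for the real port polling done by core A."""
    await asyncio.sleep(0.01)
    return b"packet"

async def handle_packet(item: bytes) -> None:
    """Stand-in for the real per-item processing done by core B."""
    await asyncio.sleep(0.05)

async def poller_a() -> None:
    """Core A: poll the ports and hand each item to core B's queue."""
    while True:
        data = await read_from_some_port()
        # asyncio.Queue is not thread-safe, so the cross-thread handoff
        # is scheduled onto loop B via call_soon_threadsafe.
        loop_b.call_soon_threadsafe(work_queue.put_nowait, data)

async def worker_b() -> None:
    """Core B: drain the queue and process items one at a time."""
    while True:
        item = await work_queue.get()
        await handle_packet(item)
        work_queue.task_done()

def run_core_b() -> None:
    global loop_b, work_queue
    loop_b = asyncio.new_event_loop()
    asyncio.set_event_loop(loop_b)
    work_queue = asyncio.Queue()
    ready.set()
    loop_b.run_until_complete(worker_b())

threading.Thread(target=run_core_b, daemon=True).start()
ready.wait()              # don't start A until B's queue exists
asyncio.run(poller_a())   # core A on the main thread; core C omitted here
```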
Let's suppose a massive load of data streams in and core B becomes overloaded with pending work (which would effectively mean "lag" for the end user). What are the common ways to detect this overload, and if an overload is detected, should I spin up another async core (D) to work alongside B?
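For context, the only detection signal I can think of is having core C sample the depth of that queue, roughly like this (it reuses `work_queue` from the sketch above; the threshold and interval are made-up numbers, and I realize `qsize()` read from another thread is only an approximation):

```python
import asyncio

OVERLOAD_THRESHOLD = 1_000   # arbitrary placeholder, not a tuned value

async def watchdog_c() -> None:
    """Core C: periodically sample core B's backlog as a crude lag signal."""
    while True:
        # qsize() read from another thread is only approximate, but a
        # coarse backlog count is all this check needs.
        depth = work_queue.qsize()
        if depth > OVERLOAD_THRESHOLD:
            print(f"core B looks overloaded: ~{depth} items pending")
        await asyncio.sleep(1.0)
```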