8

Conceptually, having just one queue for jobs seems sufficient for most use cases.
What are the reasons for having multiple queues and for distinguishing between "microtasks" and (macro)"tasks"?

james
  • See [Is asking “why” on language specifications still considered as “primarily opinion-based” if it can have official answers?](https://meta.stackoverflow.com/questions/323334/is-asking-why-on-language-specifications-still-considered-as-primarily-opinio) – Ivar Jul 26 '21 at 07:37
  • @VLAZ But the current design clearly seems overly complex. There must be an advantage to choosing that design over a simple one with a single queue (or one with multiple queues on the same level, without the nesting). – Bergi Jul 26 '21 at 07:56
  • Not sure this is really a question of opinion... One obvious reason for the need for microtasks is task prioritization. We want some tasks to get delayed until after the whole JS job ends, but before the next "task" happens. Example (and where the microtask queue was first implemented in HTML): MutationObserver callbacks. All the mutations that happened in the same JS job need to be merged into a single event, which needs to fire before the next task, so that e.g. we avoid being one rendering frame late. – Kaiido Jul 26 '21 at 10:37
  • @Kaiido Can you expand that into an answer, please? Especially the historical angle would be interesting. – Bergi Jul 26 '21 at 11:56
  • After digging in the W3C mailing list (note that I didn't participate in specs matters at all in ~2013), I found out that another big motivation for having a microtask queue (actually they were even talking about having multiple ones) was the now-deprecated `Object.observe`, which, similarly to MutationObserver, did "coalesce" various events into a single callback. They were also already envisioning the upcoming ES Promises, ES WeakRefs (not sure why), and HTML Custom Elements callbacks, which would all need that microtask queue. And note that there are also multiple (macro)task sources. – Kaiido Jul 27 '21 at 05:12

1 Answer

7

Having multiple (macro) task queues allows for task prioritization.

For instance, a User Agent (UA) can choose to prioritize a user event (e.g., a click event) over a network event, even if the latter was actually registered by the system first, because the former is probably more visible to the user and requires lower latency.
In HTML this is allowed by the first step of the Event Loop's Processing model, which states that the UA must choose the next task to execute from one of its task queues.
(Note: the HTML specs do not require that there be multiple queues, but they do define multiple task sources to guarantee the execution order of similar tasks.)
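
As a minimal sketch of that per-source guarantee: tasks queued on the same task source keep their relative order, while what the UA slots in between them is its own choice.

```js
// Two tasks queued on the same task source (the timer task source).
// The specs guarantee they run in the order they were queued; what the
// UA may interleave between them (a click handler, a network event,
// a rendering step...) is left to its own prioritization.
setTimeout(() => console.log('timer task 1'), 0);
setTimeout(() => console.log('timer task 2'), 0);
// Always logs "timer task 1" then "timer task 2".
```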

Now, why have yet another beast called the microtask queue?

We can find a few early discussions about "how" it should be implemented, but I didn't dig far enough to find out who first proposed the idea, and for what use case.

However, from the discussions I did find, we can see that a few proposals needing such a mechanism were already on the way:

  • Mutation Observers
  • Now-deprecated ES Object.observe()
  • At-that-time-incoming ES Promises (illustrated in the sketch right after this list)
  • Also-incoming-at-that-time-and-I-don't-know-why-it's-cited ES WeakRefs
  • HTML Custom Elements callback
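
To give an idea of what the Promises item implies, here is a minimal sketch of the ordering they require: Promise reactions must run as soon as the current job ends, before any already-queued (macro)task.

```js
setTimeout(() => console.log('(macro)task: setTimeout callback'), 0);
Promise.resolve().then(() => console.log('microtask: Promise reaction'));
console.log('synchronous end of the current job');
// Output order:
//   synchronous end of the current job
//   microtask: Promise reaction
//   (macro)task: setTimeout callback
```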

Since the first two were also the first to be implemented in browsers, we can probably say that their use case was the main reason for implementing this new kind of queue.

Both actually did similar things: they listened for changes on an object (or the DOM tree), and coalesced all the changes that occurred during a job into a single event (not to be read as Event).
One could argue that this event could have been queued as the next (macro)task, with the highest priority, except that the Event Loop was already a bit complex, and not every job necessarily came from a task.
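
A minimal sketch of that coalescing, using MutationObserver: several mutations made in the same job are delivered as a single callback invocation, from the microtask queue, before the next (macro)task.

```js
const target = document.createElement('div');

new MutationObserver((records) => {
  // One single call, with one record per mutation made in the same job.
  console.log(`observer fired once, with ${records.length} records`);
}).observe(target, { attributes: true });

setTimeout(() => console.log('next (macro)task'), 0);

// Three mutations in the same job...
target.setAttribute('a', '1');
target.setAttribute('b', '2');
target.setAttribute('c', '3');
// ...log "observer fired once, with 3 records", then "next (macro)task".
```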

For instance, rendering is actually part of every Event Loop iteration, except that most of the time it exits early because it isn't yet time to render.
So if you made your DOM modifications during a rendering frame, the modifications could get rendered, and only after the whole rendering had taken place would you get the callback.
Since the main use case for observers is to act on the observed changes before they trigger performance-heavy side effects, I guess you can see why it was necessary to have a means of inserting the callback right after the job that made the modifications.
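
A rough sketch of that timing issue, assuming a browser context: a microtask queued from inside a requestAnimationFrame callback still runs before the frame is painted, while a setTimeout callback runs in a later task, possibly after the frame is already on screen.

```js
requestAnimationFrame(() => {
  document.body.style.background = 'rebeccapurple';

  queueMicrotask(() => {
    // Runs right after this job, before the frame is painted:
    // still in time to react to the modification.
    console.log('microtask: before rendering');
  });

  setTimeout(() => {
    // Runs in a later event-loop iteration, after the frame
    // may already have been rendered: one frame late.
    console.log('(macro)task: possibly one frame late');
  }, 0);
});
```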


PS: At that time I didn't know much about the Event Loop, I was far from specs matters, and I may have put some anachronisms in this answer.

Kaiido