Topology + a pure-[SERIAL] value-dependency chain decides the game:
Given the tree topology, the use of a recursive formulation is not a cause but a resulting effect: recursion is simply the easiest way to sketch and code a tree traversal. (For those genuinely interested in Computer Science: benchmark the resource consumption of recursively formulated computing strategies, in both the [SPACE]- and [TIME]-domains, and test how soon trouble starts once your benchmark scaling grows a bit beyond school-book problem sizes and/or beyond the designer's choice in this very resource-limiting dilemma: how many levels of recursion are fair to expect, how many resources to set aside for them, and what happens next if a recursion tries to dive one step deeper than this "pre-wired" glass ceiling permits.)
How to convert this into a true-[PARALLEL] computing schedule?
The question of whether any true-[PARALLEL] processing strategy stands a chance gets decided the moment one finds the non-parallelisable, pure-[SERIAL] dependency chains that start at each of the terminal nodes and progress backwards, towards the tree's root node.
Tree topologies require some sort of "communication" of the mutual dependencies that appear during this reversed traversal of the tree, so all the principal advantages a true-[PARALLEL] processing schedule might offer are efficiently lost in the need to transfer the products of the pure-[SERIAL] value-dependency chains to the "next", non-local flow of code execution. This makes the original idea an anti-pattern.
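The work-span model makes this lost advantage quantifiable: no schedule, however many workers it employs, can finish before the span, i.e. the longest leaf-to-root pure-[SERIAL] dependency chain, has been processed end to end. A hedged sketch (the list-based tree encoding and the function name are assumptions made for illustration):

```python
def work_and_span(node):
    """Return (work, span) for evaluating a game tree bottom-up.

    work -- total number of node evaluations (what one serial worker must do)
    span -- length of the longest leaf-to-root value-dependency chain; even an
            unbounded pool of workers cannot finish faster than the span allows
    """
    if isinstance(node, int):          # an int is a terminal node (leaf payoff)
        return 1, 1
    w, s = 1, 0
    for child in node:                 # the parent must wait for ALL children
        cw, cs = work_and_span(child)
        w += cw
        s = max(s, cs)
    return w, s + 1

# A full binary game tree of height 3 with integer payoffs at the terminals:
tree = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
w, s = work_and_span(tree)
print(w, s, w / s)                     # 15 4 3.75
```

Here the best theoretically achievable speedup is work/span = 3.75x, before a single unit of inter-worker "communication" cost has even been paid.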
Having realised this principally pure-[SERIAL] dependency, any wannabe-parallelisation enforcement attempts remain possible, but their performance results make them more of an anti-pattern than a reasonably supported choice, as they will actually cost more (in the [TIME]-domain) than a serial-chain processing (whether formulated recursively or not), given the [SPACE]-domain permits such a modus operandi.
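A hedged back-of-envelope model of that [TIME]-domain cost (every constant below is an assumption for illustration, not a measurement): once each child-to-parent hand-off must cross a worker boundary, the communication term alone can exceed the whole serial-chain processing time:

```python
def serial_time(n_nodes, t_op):
    """One worker, zero communication: just n_nodes operations in a row."""
    return n_nodes * t_op

def parallel_lower_bound(span, n_transfers, t_op, t_comm):
    """Optimistic floor for an enforced-parallel schedule: the critical path
    of span operations, plus one inter-worker transfer per communicated value."""
    return span * t_op + n_transfers * t_comm

t_op, t_comm = 1e-6, 1e-4        # assumed: 1 us per node, 100 us per hand-off
n_nodes, span = 15, 4            # the toy height-3 binary tree discussed above
transfers = n_nodes - 1          # every non-root value must reach its parent

s = serial_time(n_nodes, t_op)
p = parallel_lower_bound(span, transfers, t_op, t_comm)
print(p > s)                     # True -- the "parallel" floor already loses
```

Under these (assumed) constants the enforced-parallel lower bound is roughly two orders of magnitude slower than the plain serial chain, which is exactly why the attempt qualifies as an anti-pattern rather than a supported choice.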