
Is there a way to parallelize the following negamax algorithm?

function negamax(node, depth, color)
    if depth = 0 or node is a terminal node
        return color * the heuristic value of node
    bestValue := −∞
    foreach child of node
        v := −negamax(child, depth − 1, −color)
        bestValue := max(bestValue, v)
    return bestValue
user3666197
Pietro Gerace

2 Answers


When formulated like that, as a single recursive function, not easily.

The easiest way would be to split it up into two functions: one used only for the very first call at the root of the tree, and a second function that is called below the root and recursively continues calling itself. In the root version, you can parallelize the loop over the children and store the resulting values in a list. Afterwards you just need a non-parallelized loop to find the max of that list, which is essentially free.
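A minimal Python sketch of that split, assuming a toy tree encoding where a leaf is its heuristic value and an internal node is a list of children (the tree encoding and function names are illustrative, not from the question):

```python
from concurrent.futures import ThreadPoolExecutor

def negamax(node, depth, color):
    # Plain serial negamax, used for everything below the root.
    # In this toy encoding a leaf *is* its heuristic value; a real
    # implementation would also stop at depth == 0 and call a heuristic.
    if not isinstance(node, list):
        return color * node
    best = float('-inf')
    for child in node:
        best = max(best, -negamax(child, depth - 1, -color))
    return best

def negamax_root(node, depth, color):
    # Root-only variant: search each child of the root in parallel,
    # collect the results, then take the max over them serially.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(negamax, child, depth - 1, -color)
                   for child in node]
    return max(-f.result() for f in futures)
```

ThreadPoolExecutor keeps the sketch self-contained; for a CPU-bound search in CPython you would swap in ProcessPoolExecutor to get actual parallel execution despite the GIL.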

Note that, if you move on to enhancements like alpha-beta pruning, this will become much more complicated; naively parallelizing the root loop as suggested here will significantly reduce the number of nodes that alpha-beta pruning can cut off.

Dennis Soemers

Topology plus a pure-[SERIAL] value-dependency chain decides the game:

Given the tree topology, the recursive formulation is not a cause but a consequence: recursion is simply the easiest way to sketch and code such a traversal. (For those interested in the engineering side, benchmark the resource consumption of recursively formulated computing strategies, in both the [SPACE]- and [TIME]-domains, and observe how quickly trouble appears once the problem scale grows beyond textbook examples, or once the recursion tries to dive one level deeper than the "pre-wired" glass ceiling the runtime's designers set for the maximum recursion depth.)
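That recursion-depth "glass ceiling" is easy to demonstrate; a minimal CPython sketch (the probe function is illustrative only):

```python
import sys

def depth_probe(n):
    # Dives n call frames deep; CPython aborts the dive with a
    # RecursionError once the interpreter's stack limit is exceeded.
    if n == 0:
        return 0
    return 1 + depth_probe(n - 1)

limit = sys.getrecursionlimit()   # typically 1000 in CPython
try:
    depth_probe(limit)            # we already sit inside some frames, so this fails
    hit_ceiling = False
except RecursionError:
    hit_ceiling = True
```

The limit can be raised with sys.setrecursionlimit(), but that only moves the ceiling; it does not remove the per-frame space cost of a recursive search.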


How to convert this into a true-[PARALLEL] computing schedule?

Whether a true-[PARALLEL] processing strategy is possible at all is decided by the non-parallelisable pure-[SERIAL] dependency chains, which start at each terminal node and propagate backwards towards the root of the tree.

A tree topology requires some sort of "communication" of these mutual dependencies during the reversed traversal of the tree: the product of each pure-[SERIAL] value-dependency chain must be transferred to the "next", non-local flow of execution. In that transfer, the principal advantages a true-[PARALLEL] processing schedule might offer are efficiently lost, which makes the original idea an anti-pattern.

Having realised this fundamentally pure-[SERIAL] dependency, attempts to enforce a would-be parallelisation remain possible, but their performance makes them more of an anti-pattern than a reasonably supported choice: they will actually cost more in the [TIME]-domain than serial-chain processing (whether formulated recursively or not), provided the [SPACE]-domain permits such a modus operandi at all.

user3666197