
The Minimax search algorithm, as well as its more efficient Alpha-Beta pruning variant, is well known and often used to implement an artificial intelligence (AI) player in games like tic-tac-toe, Connect 4, and so on.
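For concreteness, here is a minimal sketch of the kind of search I mean (the game interface — `legal_moves()`, `apply()`, `is_terminal()`, `score()` — is hypothetical, not any particular library):

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing):
    # Fixed-depth minimax with alpha-beta pruning over a hypothetical
    # turn-based game state interface.
    if depth == 0 or state.is_terminal():
        return state.score()  # heuristic or exact terminal value
    if maximizing:
        value = -math.inf
        for move in state.legal_moves():
            value = max(value, alphabeta(state.apply(move), depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the opponent will avoid this branch
        return value
    else:
        value = math.inf
        for move in state.legal_moves():
            value = min(value, alphabeta(state.apply(move), depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff
        return value
```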

While AIs based on these search algorithms are practically unbeatable for humans if they can traverse the whole search tree, this becomes infeasible when there are too many possibilities due to exponential growth (as in Go, for example).

All those games mentioned so far are turn-based.

However, assuming we have enough computational power, shouldn't it be possible to also apply these algorithms to real-time strategy (RTS) games? In theory, this should work by discretizing time into small enough frames and then simulating all possible actions at each time step.
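To make the idea concrete, a rough sketch of what I have in mind (purely illustrative; the `state` interface with `actions()`, `step()`, `evaluate()` and the frame length `DT` are my own assumptions, not an existing engine). Both players choose an action every frame, and the search branches over all joint actions, treating each frame as a tiny min-max step:

```python
import math

DT = 0.1  # assumed frame length in seconds (the time discretization)

def search(state, depth):
    """Exhaustive search over joint (player, opponent) actions per frame.
    Returns the value of `state` for the maximizing player (player 0)."""
    if depth == 0 or state.is_terminal():
        return state.evaluate()
    best = -math.inf
    for my_action in state.actions(player=0):
        # Pessimistic handling of simultaneous moves: assume the opponent
        # picks whatever reply is worst for us in this frame.
        worst = math.inf
        for opp_action in state.actions(player=1):
            child = state.step({0: my_action, 1: opp_action}, DT)
            worst = min(worst, search(child, depth - 1))
        best = max(best, worst)
    return best
```

Even in this toy form the branching factor per frame is roughly |A|², and simultaneous moves are handled pessimistically rather than game-theoretically, which is exactly where I expect things to get interesting.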

Clearly, the search tree would quickly explode in size. However, I wonder whether there are any theoretical analyses of such an approach for real-time games, or perhaps even practical investigations that use a very reduced and simplified RTS?

Question: I am searching for references (if any exist) on this topic.

SampleTime
  • Stack Overflow is not meant to be used for discussions. This is not a forum, but a Q&A site where scope of the questions is narrow. – Dialecticus Jun 09 '19 at 14:06
  • @Dialecticus I don't want to discuss, this is a reference request. – SampleTime Jun 09 '19 at 14:07
  • Welcome to StackOverflow. Please follow the posting guidelines in the help documentation, as suggested when you created this account. [On topic](https://stackoverflow.com/help/on-topic), [how to ask](https://stackoverflow.com/help/how-to-ask), and ... [the perfect question](https://codeblog.jonskeet.uk/2010/08/29/writing-the-perfect-question/) apply here. StackOverflow is not a design, coding, research, or tutorial resource. A reference request is specifically off-topic. – Prune Jun 10 '19 at 23:54

1 Answer


The paper "Search in Real-Time Video Games" (Cowling et al., 1998) asserts that A* has been widely used for search in video games.

There is also Geisler's MS thesis, 'An Empirical Study of Machine Learning Algorithms Applied to Model Player Behavior in a "First Person Shooter" Video Game', in which he primarily uses ID3 and boosting to learn the behaviors of an expert Soldier of Fortune 2 (FPS) player and incorporate them into an agent playing the game.

There are several other similar papers online, but most of them appear to use various machine learning algorithms to learn behaviors by observation and incorporate those into some kind of agent, rather than relying primarily on optimized search.

"Learning Human Behavior from Observation for Gaming Applications", Moriarty and Gonzales, 2009, is an example for this.

Kirt Undercoffer