
I am writing a basic chess AI using the minimax algorithm. I implemented alpha-beta pruning, which seemed to work fine. Here's the code:

import random  # for random.choice below

def move(self, board):
    moves = {}

    for move in board.legal_moves:
        board.push(move)
        moves[move] = self.evaluate_move(board, 1, float("-inf"), float("inf"))
        board.pop()

    best_score = max(moves.values())
    best_moves = [move for move, score in moves.items() if score == best_score]
    
    chosen_move = random.choice(best_moves)
    return chosen_move

def evaluate_move(self, board, depth, alpha, beta):
    # Leaf node: depth limit reached or game over, fall back to static evaluation
    if depth >= self.depth_limit or board.is_game_over():
        return self.evaluate_position(board)

    if depth % 2:  # depth is odd, i.e. minimizing player
        extremepoints = float("inf")
    else:
        extremepoints = float("-inf")

    for move in board.legal_moves:
        board.push(move)
        points = self.evaluate_move(board, depth + 1, alpha, beta)
        board.pop()
        if depth % 2:  # minimizing player
            extremepoints = min(extremepoints, points)
            beta = min(beta, points)
        else:  # maximizing player
            extremepoints = max(extremepoints, points)
            alpha = max(alpha, points)
        if alpha >= beta:  # cutoff: the remaining moves cannot affect the result
            break

    return extremepoints
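For reference, here is the same alpha-beta scheme in a self-contained toy form, with nested lists of leaf scores standing in for python-chess boards (this is not my engine code, only meant to show how alpha and beta travel through the recursion):

```python
# Toy, self-contained alpha-beta: the "board" is just a nested list whose
# leaves are static scores. Only meant to show where the cutoffs fire.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):  # leaf: return the static score
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:  # beta cutoff
                break
    else:
        best = float("inf")
        for child in node:
            best = min(best, alphabeta(child, True, alpha, beta))
            beta = min(beta, best)
            if alpha >= beta:  # alpha cutoff
                break
    return best
```

On `alphabeta([[3, 5], [2, 9]], True)` this returns 3 and never visits the 9: after the first subtree raises alpha to 3, seeing the 2 in the second subtree drops beta to 2 and triggers the cutoff.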

However, while watching this video I realized that I might be losing out on potential performance. At that point in the video, alpha is set at the very top of the tree and passed along to all the other first-level moves. My implementation doesn't do this; instead it gives every first-level move an alpha of -inf. I tried to fix this by doing:

def move(self, board):
    alpha = float("-inf")
    beta = float("inf")
    moves = {}

    for move in board.legal_moves:
        board.push(move)
        moves[move] = self.evaluate_move(board, 1, alpha, beta) # Change here
        alpha = max(alpha, moves[move])
        board.pop()

    best_score = max(moves.values())
    best_moves = [move for move, score in moves.items() if score == best_score]
    
    chosen_move = random.choice(best_moves)
    return chosen_move

The problem is that this resulted in a worse AI. It's way faster, but it loses every time to an AI without this "fix". However, while browsing Stack Overflow I found a link to this implementation, which seems to do it the same way I do.
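To illustrate what I think might be happening (just my guess, with made-up scores and a hypothetical helper instead of real positions): once the root alpha has been raised by an earlier move, a later first-level search can be cut off and return a bound rather than the exact minimax score, and that bound can tie with the real best score:

```python
# Toy illustration (made-up scores, hypothetical helper): a fail-hard
# minimizing node that is handed a root alpha can cut off early and return
# a bound instead of the exact minimax value.

def min_value(children, alpha, beta):
    best = float("inf")
    for score in children:
        best = min(best, score)
        beta = min(beta, best)
        if alpha >= beta:  # cutoff: the remaining children are skipped
            break
    return best

alpha = float("-inf")
move_a = min_value([5, 9], alpha, float("inf"))  # exact score: 5
alpha = max(alpha, move_a)                       # root alpha is now 5
move_b = min_value([5, 1], alpha, float("inf"))  # cuts off before seeing the 1
# move_b is also 5, although move B's true minimax value is 1, so
# "moves[key] == max(moves.values())" now treats A and B as equally good.
```

If that is what is going on, random.choice can then pick a move whose real value is much worse.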

So, my question is: am I already doing alpha-beta pruning to the fullest extent possible, so that no changes are needed, or is there something wrong with the way I'm implementing the fix?

  • Alpha-beta pruning ignores all moves which are too good (or too bad, depending on how you look at it). You guess how good the result is to avoid looking at all other options. But you have to check whether your guess was right. If it was not, you have to retry with different bounds. Otherwise you end up ignoring the best move. – zvone Dec 21 '20 at 15:59
  • And BTW, this is not AI. It is just an algorithm. I would not call it AI if it does not have at least some level of machine learning. – zvone Dec 21 '20 at 16:02
  • Thanks for the comment :) I'm still unsure, because the video (among other sources) has pruning where my algorithm wouldn't, because the alpha values get carried over. Also, I'm pretty sure AI and machine learning are two distinct things that just get used interchangeably these days. Even the Wikipedia page for Minimax has the words AI in the first sentence. Anyway, thanks, I'll have a think about it. – erhuht Dec 21 '20 at 16:18
  • You may be right. Both my comments were more about my "feeling" than the actual facts. It's been a while since I did something like this ;) – zvone Dec 21 '20 at 16:20
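zvone's retry-with-different-bounds suggestion could look roughly like this (my sketch; every name here is hypothetical, and fake_search just clamps a known true score into the window the way a fail-hard search would):

```python
# Sketch of the "retry with different bounds" idea from the comments
# (aspiration windows). All names here are hypothetical illustrations.

def search_with_window(search, guess, margin):
    """Search a narrow window around a guessed score; re-search on failure."""
    alpha, beta = guess - margin, guess + margin
    score = search(alpha, beta)
    if score <= alpha:                    # fail low: true score may be even lower
        score = search(float("-inf"), alpha)
    elif score >= beta:                   # fail high: true score may be even higher
        score = search(beta, float("inf"))
    return score

def fake_search(true_score):
    # Stand-in for a real fail-hard alpha-beta search: it returns the true
    # score clamped into the [alpha, beta] window, like a cut-off search would.
    return lambda alpha, beta: min(max(true_score, alpha), beta)
```

For example, `search_with_window(fake_search(10), 3, 2)` first fails high against the [1, 5] window and then re-searches with [5, inf) to recover the exact score of 10.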

0 Answers