I am trying to implement Minimax with alpha-beta pruning. My problem is that when I evaluate a position and backtrack to the next move in the iteration (one level up), `currentBoard` is not the original board but the one from the evaluated leaf, even though `moveFigure` and `removeFigure` both return a new board.

So how can I "save" the old board for correct backtracking?

P.S.: I want to use copying instead of undoing moves, because the board is a simple HashMap, so I guess it's easier this way.

Here is the code I have so far:

public int alphaBeta(Board currentBoard, int depth, int alpha, int beta, boolean maximisingPlayer) {
    int score;
    if (depth == 0) {
        return Evaluator.evaluateLeaf(whichColorAmI, currentBoard);
    }
    else if (maximisingPlayer) {
        ArrayList<Move> possibleMoves = getPossibleMoves(whichColorAmI, currentBoard);
        for (Move iterMoveForMe : possibleMoves) {
            if (currentBoard.figureAt(iterMoveForMe.to) != null) {
                // currentBoard is overwritten here, so the next iteration
                // no longer sees the board this call was given
                currentBoard = currentBoard.removeFigure(iterMoveForMe.to);
            }
            currentBoard = currentBoard.moveFigure(iterMoveForMe.from, iterMoveForMe.to);
            score = alphaBeta(currentBoard, depth - 1, alpha, beta, false);
            if (score >= alpha) {
                alpha = score;
                if (depth == initialDepth) {
                    moveToMake = iterMoveForMe;
                }
            }
            if (alpha >= beta) {
                break;
            }
        }
        return alpha;
    }
    else {
        // [Minimizer...]
    }
}

  • I had similar problems while writing an AI for chess. I am not sure if my solution is correct, but I created defensive copies of the elements of the board. – Stimpson Cat May 10 '17 at 10:00
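The defensive-copy idea from the comment can be sketched like this. Note that all class and method names below (`BoardSketch`, the string-keyed map) are hypothetical stand-ins, not the asker's actual `Board` API; the point is only that the constructor copies the map, so each derived board is independent of its parent:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical minimal board: positions and figures are plain strings.
class BoardSketch {
    final Map<String, String> figures;

    BoardSketch(Map<String, String> figures) {
        // Defensive copy: callers can no longer mutate our internal state.
        this.figures = new HashMap<>(figures);
    }

    // Returns a NEW board with the figure moved; 'this' stays untouched.
    BoardSketch moveFigure(String from, String to) {
        Map<String, String> next = new HashMap<>(figures);
        next.put(to, next.remove(from));
        return new BoardSketch(next);
    }

    String figureAt(String pos) {
        return figures.get(pos);
    }
}

public class DefensiveCopyDemo {
    public static void main(String[] args) {
        Map<String, String> start = new HashMap<>();
        start.put("e2", "P");
        BoardSketch board = new BoardSketch(start);
        BoardSketch after = board.moveFigure("e2", "e4");
        // The original board is unchanged; only the copy reflects the move.
        System.out.println(board.figureAt("e2")); // P
        System.out.println(after.figureAt("e4")); // P
        System.out.println(after.figureAt("e2")); // null
    }
}
```

Copying per derived position costs memory, but for a HashMap-backed board it keeps backtracking trivial, which is exactly the trade-off the question describes.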

1 Answer

I think I found a way to do this; at least it seems to work. The key is to make a copy at the start of each loop iteration and use that copy from then on instead of `currentBoard`, so the `currentBoard` the loop iterates over never gets modified.

public int alphaBeta(Board currentBoard, int depth, int alpha, int beta, boolean maximisingPlayer) {
    int score;
    if (depth == 0) {
        return Evaluator.evaluateLeaf(whichColorAmI, currentBoard);
    }
    else if (maximisingPlayer) {
        ArrayList<Move> possibleMoves = getPossibleMoves(whichColorAmI, currentBoard);
        for (Move iterMoveForMe : possibleMoves) {
            // Fresh copy per iteration: currentBoard survives for backtracking.
            Board copy = new Board(currentBoard.height, currentBoard.width, currentBoard.figures());
            if (copy.figureAt(iterMoveForMe.to) != null) {
                copy = copy.removeFigure(iterMoveForMe.to);
            }
            copy = copy.moveFigure(iterMoveForMe.from, iterMoveForMe.to);
            score = alphaBeta(copy, depth - 1, alpha, beta, false);
            if (score >= alpha) {
                alpha = score;
                if (depth == maxDepth) {
                    moveToMake = iterMoveForMe;
                }
            }
            if (alpha >= beta) {
                break;
            }
        }
        return alpha;
    }
    else {
        // [Minimizer...]
    }
}
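Why the original version failed can be reproduced with a tiny model, independent of chess. Here a "board" is just a list of applied moves, and `apply` (a hypothetical stand-in for `moveFigure`) returns a new list, exactly like the asker's methods return a new `Board`. Returning a new object is not enough: reassigning the one local variable inside the loop makes each iteration build on the previous child instead of the parent.

```java
import java.util.ArrayList;
import java.util.List;

// Tiny model of the backtracking bug: a "board" is a list of applied moves.
public class BacktrackDemo {
    // Returns a NEW board with the move applied, like moveFigure does.
    static List<String> apply(List<String> board, String move) {
        List<String> next = new ArrayList<>(board);
        next.add(move);
        return next;
    }

    public static void main(String[] args) {
        List<String> currentBoard = new ArrayList<>();

        // Buggy pattern: reassigning the loop's only reference accumulates
        // moves, so the second iteration starts from the first child.
        List<String> buggy = currentBoard;
        for (String move : new String[]{"a", "b"}) {
            buggy = apply(buggy, move);
        }
        System.out.println(buggy);        // [a, b] -- "a" then "b" stacked

        // Fixed pattern: derive each child from the untouched parent.
        List<String> lastChild = null;
        for (String move : new String[]{"a", "b"}) {
            lastChild = apply(currentBoard, move);
        }
        System.out.println(lastChild);    // [b] -- a true sibling
        System.out.println(currentBoard); // [] -- parent intact for backtracking
    }
}
```

This is the same fix the answer applies: the parent reference (`currentBoard`) is never reassigned, and each child position lives in its own variable (`copy`).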