I am trying to implement Minimax with alpha-beta pruning. My problem is that when I evaluate a position and backtrack to the next move in the iteration (one level up), "currentBoard" is no longer the board this node started with but the one from the evaluated leaf, even though moveFigure and removeFigure both return a new board.
So how can I "save" the old board for correct backtracking?
P.S.: I want to use copying instead of undoing moves, because the board is a simple HashMap, so I guess it's easier this way.
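For context, here is roughly what my copy-based Board looks like (a simplified sketch; the Position and Figure types stand in for my real classes, and most details are left out), so removeFigure and moveFigure really do return fresh boards:

import java.util.HashMap;
import java.util.Map;

public class Board {
    // Simplified: maps a square to the figure standing on it.
    private final Map<Position, Figure> squares;

    public Board(Map<Position, Figure> squares) {
        this.squares = squares;
    }

    public Figure figureAt(Position pos) {
        return squares.get(pos);
    }

    // Both "mutators" copy the map and return a new Board,
    // leaving the original untouched.
    public Board removeFigure(Position pos) {
        Map<Position, Figure> copy = new HashMap<>(squares);
        copy.remove(pos);
        return new Board(copy);
    }

    public Board moveFigure(Position from, Position to) {
        Map<Position, Figure> copy = new HashMap<>(squares);
        copy.put(to, copy.remove(from));
        return new Board(copy);
    }
}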
Here is the code I have so far:
public int alphaBeta(Board currentBoard, int depth, int alpha, int beta, boolean maximisingPlayer) {
    int score;
    if (depth == 0) {
        return Evaluator.evaluateLeaf(whichColorAmI, currentBoard);
    }
    else if (maximisingPlayer) {
        ArrayList<Move> possibleMoves = getPossibleMoves(whichColorAmI, currentBoard);
        for (Move iterMoveForMe : possibleMoves) {
            // This is where it goes wrong: currentBoard gets reassigned, so the next
            // iteration (and the backtrack) no longer sees the board this node was called with.
            if (currentBoard.figureAt(iterMoveForMe.to) != null) {
                currentBoard = currentBoard.removeFigure(iterMoveForMe.to);
            }
            currentBoard = currentBoard.moveFigure(iterMoveForMe.from, iterMoveForMe.to);
            score = alphaBeta(currentBoard, depth - 1, alpha, beta, false);
            if (score >= alpha) {
                alpha = score;
                if (depth == initialDepth) {
                    moveToMake = iterMoveForMe;
                }
            }
            if (alpha >= beta) {
                break;
            }
        }
        return alpha;
    }
    else {
        // [Minimizer...]
    }
}
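My current idea is to give each move its own local copy so currentBoard itself is never reassigned, roughly like this (untested sketch of the maximiser loop only):

for (Move iterMoveForMe : possibleMoves) {
    // Work on a per-move copy; currentBoard stays the board of this node,
    // so after the recursive call the backtrack happens automatically.
    Board childBoard = currentBoard;
    if (childBoard.figureAt(iterMoveForMe.to) != null) {
        childBoard = childBoard.removeFigure(iterMoveForMe.to);
    }
    childBoard = childBoard.moveFigure(iterMoveForMe.from, iterMoveForMe.to);

    score = alphaBeta(childBoard, depth - 1, alpha, beta, false);
    if (score >= alpha) {
        alpha = score;
        if (depth == initialDepth) {
            moveToMake = iterMoveForMe;
        }
    }
    if (alpha >= beta) {
        break;
    }
}

Is a per-move local variable like this the right way to "save" the old board, or should the copy be made somewhere else?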