I'm exploring a tree of moves for a match-3 style game (with horizontal piece swapping), using a DFS to a depth of 4 moves ahead (because working towards bigger clear combos scores much higher). I evaluate every possible move of every board in the tree to find the single next move that maximises overall score.
I'm representing the game board as a vector<char> of 100 chars per board. However, my current DFS implementation doesn't check whether a given board state has already been evaluated, even though multiple move sequences can lead to the same board state.
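To make the structure concrete, here's a stripped-down sketch of roughly what the current DFS does (applyMove, evaluate and bestScore are illustrative placeholders, not my real code):

```cpp
#include <algorithm>
#include <vector>

using Board = std::vector<char>;   // 100 chars, one per cell

// Placeholder stubs standing in for the real game logic.
Board applyMove(const Board& board, int move) { return board; }  // swap + resolve clears
int   evaluate(const Board& board)            { return 0; }      // score this state

// Current DFS: recurses into every child with no check for
// board states that have already been evaluated.
int bestScore(const Board& board, int depth)
{
    int best = evaluate(board);
    if (depth == 0)
        return best;

    for (int move = 0; move < 90; ++move)     // 90 possible horizontal swaps
        best = std::max(best, bestScore(applyMove(board, move), depth - 1));

    return best;
}
```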
Given that each board has 90 possible moves, a DFS to depth 4 evaluates 90^4 = 65,610,000 board states (in about 100 ms), so I'd assumed it's impractical to store every board state just for the sake of pruning re-evaluations. However, I recognise the search tree could be pruned significantly if I avoided re-evaluating previously evaluated states.
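The obvious naive version of that pruning, as I understand it, would be to remember every visited board verbatim in a set keyed on the raw 100 chars, roughly like the sketch below (again just illustrative, not my code). At ~66 million states of 100 bytes each that's already several GB before hash-set overhead, which is why I assumed this exact approach is off the table:

```cpp
#include <string>
#include <unordered_set>
#include <vector>

using Board = std::vector<char>;

// Naive dedup: store every visited board verbatim (~100 bytes each).
// At ~66 million states that's several GB plus hash-set overhead,
// which is why I'd assumed storing every state is impractical.
std::unordered_set<std::string> seen;

// Returns true if this board was already visited, and records it otherwise.
bool alreadyEvaluated(const Board& board)
{
    return !seen.insert(std::string(board.begin(), board.end())).second;
}
```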
Is there an efficient and/or memory-conserving method to avoid re-evaluating previously evaluated board states in this DFS?