I implemented standard Depth First Search for a tree of nodes, where each node encapsulates a state of the problem I'm solving. I also added the method below to make sure I don't repeat a move by expanding a node whose state is the same as the state of one of its ancestors. My question is: does this method change the time or space complexity of the algorithm, or do they stay at the typical DFS values of O(b^m) and O(bm), respectively (where b is the branching factor and m is the maximum depth)?
// Additional method which prevents us from repeating moves.
// Returns true if any ancestor of the given node has the same state.
public boolean checkForRepeatedMoves(Node node) {
    // If the node's state equals the state of its parent, its parent's parent
    // and so on, then we are repeating a move.
    Node ancestor = node.getParent();
    while (ancestor != null) {
        if (ancestor.getState().isEqualTo(node.getState())) {
            return true;
        }
        ancestor = ancestor.getParent();
    }
    // We have reached the root and no repetition was detected.
    return false;
}
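For context, here is a minimal sketch of how I call this check inside my recursive DFS. Node and State are the same classes as above; getChildren() and isGoal() are placeholder names for methods I assume exist on them, not necessarily their real names.

// Sketch of a recursive DFS that applies the ancestor check before expanding a child.
// getChildren() and isGoal() are illustrative placeholders.
public Node depthFirstSearch(Node current) {
    if (current.getState().isGoal()) {
        return current;
    }
    for (Node child : current.getChildren()) {
        // checkForRepeatedMoves walks up at most m ancestors,
        // so it runs once per generated child on top of the normal expansion work.
        if (!checkForRepeatedMoves(child)) {
            Node result = depthFirstSearch(child);
            if (result != null) {
                return result;
            }
        }
    }
    return null; // dead end on this branch
}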