I have been using scikit-learn's RandomForestClassifier for a project. I am wondering if there is a tool or trick to grow a "full" forest and then experiment with certain hyperparameters ex post. For example, is there a way to quickly chop every fully grown tree to a depth of, say, 10 and check the resulting performance on a test set? I imagine this could be done in a computationally feasible way for any hyperparameter that limits tree growth (e.g. max_depth, min_samples_leaf, min_samples_split).
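To make the idea concrete, here is a rough sketch of the kind of "chop ex post" I mean. `predict_truncated` is my own helper (not an sklearn API): it walks each fitted tree via the public `tree_` arrays but stops descending once a depth cap is hit, using that node's class counts as the prediction.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def predict_truncated(forest, X, max_depth):
    """Average per-tree class probabilities as if every tree were cut at max_depth."""
    proba = np.zeros((X.shape[0], forest.n_classes_))
    for est in forest.estimators_:
        tree = est.tree_
        for i, x in enumerate(X):
            node, depth = 0, 0
            # Descend until we reach a real leaf (children == -1) or hit the cap
            while tree.children_left[node] != -1 and depth < max_depth:
                if x[tree.feature[node]] <= tree.threshold[node]:
                    node = tree.children_left[node]
                else:
                    node = tree.children_right[node]
                depth += 1
            counts = tree.value[node][0]          # class counts/weights at this node
            proba[i] += counts / counts.sum()     # normalize to a probability
    return proba / len(forest.estimators_)

# Toy data just to illustrate; grow the forest once, then evaluate many depth caps cheaply
X, y = make_classification(n_samples=300, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=20, random_state=0).fit(X_tr, y_tr)
pred_at_depth_10 = predict_truncated(rf, X_te, max_depth=10).argmax(axis=1)
```

With a loop over `max_depth` values this evaluates each candidate depth without refitting, which is the kind of shortcut I am hoping already exists somewhere in more polished form.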
I currently use GridSearchCV to find the best configuration of max_depth, max_features, and max_samples, but with three hyperparameters the grid takes a long time to search.