During hyperparameter optimization of a boosted-trees algorithm such as xgboost or lightgbm, is it possible to directly control the minimum (not just the maximum) number of boosting rounds (estimators/trees) when using early stopping? The need arises from the observation that models whose training stops after too few rounds are consistently underfitted: their metrics are significantly worse than those of state-of-the-art models, which tend to use more boosting rounds.
The only solution I know of is an indirect one: adjusting a linked hyperparameter, the learning rate, by reducing its upper limit in the search space. When the learning rate is set too high, it can lead to underfitted models and cause training to stop too early, i.e. after too few boosting rounds.
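To make the workaround concrete, here is a minimal sketch assuming Optuna as the tuner and `xgboost.train` with early stopping; the synthetic dataset, the 0.1 cap on `learning_rate`, and the `early_stopping_rounds` value are illustrative placeholders, not a recommendation. Note that only the maximum number of rounds (`num_boost_round`) is bounded; the minimum is still not controlled directly.

```python
import optuna
import xgboost as xgb
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

# Illustrative data split
X, y = make_regression(n_samples=5000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
dtrain = xgb.DMatrix(X_tr, label=y_tr)
dvalid = xgb.DMatrix(X_val, label=y_val)

def objective(trial):
    params = {
        "objective": "reg:squarederror",
        # Capping the upper limit of the learning rate discourages trials that
        # converge (and therefore early-stop) after only a handful of rounds.
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.1, log=True),
        "max_depth": trial.suggest_int("max_depth", 3, 10),
    }
    booster = xgb.train(
        params,
        dtrain,
        num_boost_round=2000,         # maximum number of rounds is controlled...
        evals=[(dvalid, "valid")],
        early_stopping_rounds=50,     # ...but the minimum is not
        verbose_eval=False,
    )
    # Record how many rounds the trial actually used before early stopping.
    trial.set_user_attr("best_iteration", booster.best_iteration)
    return booster.best_score

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
```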