The gbm package in R has a function, gbm.perf, that estimates the optimal number of trees for a model using methods such as "Out-of-Bag" (OOB) or "Cross-Validation" error, which helps avoid over-fitting. Does the GradientBoostingRegressor in Python's scikit-learn library have a similar function to find the optimal number of trees using the out-of-bag method?
#r code
library(gbm)
mod1 = gbm(var ~ ., data = dat, interaction.depth = 3)
# pick the optimal number of trees from the OOB error estimate
best.iter = gbm.perf(mod1, method = "OOB")
scores = mean(predict(mod1, x, n.trees = best.iter))
#python code
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

modl = GradientBoostingRegressor(max_depth=3)
modl.fit(x, y)
# predict on the training features, as in the R snippet above
scores = np.mean(modl.predict(x))
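
For reference, scikit-learn does expose out-of-bag estimates on GradientBoostingRegressor through its oob_improvement_ attribute, which is populated only when subsample < 1.0. Below is a minimal sketch of how the best iteration might be picked from it; the make_regression data and all variable names are my own stand-ins, not part of the original question.

#hedged sketch: choosing n_estimators from OOB improvements
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

x, y = make_regression(n_samples=500, random_state=0)  # stand-in data

# subsample < 1.0 is required for oob_improvement_ to be populated
est = GradientBoostingRegressor(max_depth=3, n_estimators=500,
                                subsample=0.5, random_state=0)
est.fit(x, y)

# the cumulative OOB improvement peaks near the best iteration
cum_oob = np.cumsum(est.oob_improvement_)
best_n_trees = int(np.argmax(cum_oob)) + 1  # argmax is 0-based

Predictions at that iteration can then be taken from staged_predict, which plays roughly the role of the n.trees argument to predict in gbm.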