Questions tagged [boosting]


From the docs:

"Boosting" is a machine learning ensemble meta-algorithm for primarily reducing bias, and also variance in supervised learning, and a family of machine learning algorithms that convert weak learners to strong ones.

Also:

From the docs:

Boosting is the process of enhancing the relevancy of a document or field. Field-level mapping allows you to define an explicit boost level on a specific field. The boost field mapping (applied on the root object) allows you to define a boost field whose content will control the boost level of the document.
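A minimal sketch of the weak-to-strong idea from the first definition, using scikit-learn's AdaBoostClassifier with depth-1 decision trees ("stumps") as weak learners; the dataset and parameter values are illustrative assumptions, not part of the tag wiki:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Toy dataset; any binary classification data works here.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A depth-1 tree ("decision stump") is the classic weak learner.
    stump = DecisionTreeClassifier(max_depth=1)

    # Boosting combines many re-weighted stumps into a strong ensemble.
    # (Use base_estimator= instead of estimator= on scikit-learn < 1.2.)
    clf = AdaBoostClassifier(estimator=stump, n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    print("stump accuracy:  ", stump.fit(X_train, y_train).score(X_test, y_test))
    print("boosted accuracy:", clf.score(X_test, y_test))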

181 questions
3
votes
1 answer

AdaBoost repeatedly chooses the same weak learners

I have implemented a version of the AdaBoost boosting algorithm, using decision stumps as weak learners. However, I often find that after training, AdaBoost has created a series of weak learners that keeps recurring…
H_Lev1
  • 253
  • 4
  • 18
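For questions like the one above, a quick way to see which stumps the algorithm actually picked is to inspect the fitted ensemble; a sketch assuming scikit-learn's AdaBoostClassifier rather than the asker's own implementation:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    clf = AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=1),  # base_estimator= before sklearn 1.2
        n_estimators=20,
        random_state=0,
    ).fit(X, y)

    # Each stump splits on one feature at one threshold; if the same
    # (feature, threshold) pair keeps appearing, the series is recurring.
    for stump, w in zip(clf.estimators_, clf.estimator_weights_):
        feature = stump.tree_.feature[0]
        threshold = stump.tree_.threshold[0]
        print(f"feature={feature} threshold={threshold:.3f} weight={w:.3f}")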
3
votes
2 answers

How can I print variable importance from the gbm function?

I used the gbm function to implement gradient boosting, and I want to perform classification. Afterwards, I used the varImp() function to print variable importance for the gradient boosting model. But... only 4 variables have non-zero importance…
이순우
  • 79
  • 1
  • 1
  • 10
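The question is about R's gbm and caret's varImp(); as a hedged cross-reference only, the analogous check in Python's scikit-learn (not the asker's setup) looks like:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = load_breast_cancer(return_X_y=True)
    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # Importance is zero for features never chosen in any split, which
    # mirrors the "only 4 variables have non-zero importance" effect.
    for name, imp in zip(load_breast_cancer().feature_names, model.feature_importances_):
        if imp > 0:
            print(f"{name}: {imp:.4f}")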
3
votes
1 answer

Accord.NET: how to train a Boost classifier

I'm trying to use the Accord.NET library for object classification, but I have failed to find any suitable examples, and the documentation is not enough to understand the process. My current code is Predictor = new Boost(); AdaBoost…
J. Bond
  • 31
  • 3
2
votes
1 answer

Boosting algorithm: classifiers yield the correct label

I am a computer science student studying the Algorithms course independently. During the course, I saw this question: Suppose we have a set X = {x(1), …, x(n)} of elements, each with a label L(i) ∈ {0, 1} (think of x(i) as a picture, and the…
hch
  • 717
  • 1
  • 6
  • 18
2
votes
1 answer

"PicklingError: Could not pickle the task to send it to the workers" while using grid search for LightGBM

I'm trying to find the best parameters for LightGBM using GridSearchCV, and here's my approach: from sklearn.model_selection import GridSearchCV import lightgbm as lgb model = lgb.LGBMRegressor(random_state=1, objective='regression') param_grid = { …
Chris_007
  • 829
  • 11
  • 29
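A minimal, self-contained version of that grid search; the parameter grid here is an illustrative assumption (the excerpt is truncated), and n_jobs=1 keeps everything in one process, which avoids pickling tasks for workers entirely (shown for orientation, not claimed as the fix for the asker's error):

    import lightgbm as lgb
    from sklearn.datasets import make_regression
    from sklearn.model_selection import GridSearchCV

    X, y = make_regression(n_samples=300, n_features=8, random_state=1)

    model = lgb.LGBMRegressor(random_state=1, objective='regression')
    param_grid = {
        # Illustrative grid; the original excerpt is cut off here.
        'n_estimators': [50, 100],
        'learning_rate': [0.05, 0.1],
    }

    # n_jobs=1 runs sequentially, so nothing is sent to worker
    # processes; parallel runs require picklable estimators.
    search = GridSearchCV(model, param_grid, cv=3, n_jobs=1)
    search.fit(X, y)
    print(search.best_params_)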
2
votes
0 answers

xgboost regression predictions

I have an xgboost logistic regression model trained in Python with the following hyperparameters (obtained with a grid search): Hyperparams selected: {'gamma': 0, 'learning_rate': 0.1, 'max_depth': 3, 'min_child_weight': 1, 'n_estimators': 125} This…
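A sketch reconstructing a model from the listed hyperparameters; the toy data and objective='binary:logistic' are assumptions based on the question's mention of logistic regression:

    import xgboost as xgb
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=400, n_features=10, random_state=0)

    # Hyperparameters quoted in the question; everything else is default.
    model = xgb.XGBClassifier(
        objective='binary:logistic',  # assumption: "logistic regression" model
        gamma=0,
        learning_rate=0.1,
        max_depth=3,
        min_child_weight=1,
        n_estimators=125,
    )
    model.fit(X, y)

    # predict() returns hard 0/1 labels; predict_proba() returns probabilities.
    print(model.predict(X[:5]))
    print(model.predict_proba(X[:5])[:, 1])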
2
votes
1 answer

Custom random_sampling for sklearn ensembles

I need to write a custom random_selection module (for random selection of features, i.e. "max_features", and of a subset of the training data, i.e. "subsample") in scikit-learn, to be used with sklearn.ensemble.RandomForestClassifier and GradientBoostingClassifier.…
dgomzi
  • 106
  • 1
  • 14
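For reference, scikit-learn already exposes both knobs the question names as constructor parameters; a sketch of the built-in behaviour (values arbitrary), which is the baseline any custom selection module would replace:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = make_classification(n_samples=300, n_features=12, random_state=0)

    # subsample < 1.0 draws a random fraction of rows for each tree;
    # max_features limits the features considered at each split.
    clf = GradientBoostingClassifier(
        subsample=0.7,
        max_features=4,
        random_state=0,
    ).fit(X, y)
    print(clf.score(X, y))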
2
votes
1 answer

Implement a custom loss function in Tensorflow BoostedTreesEstimator

I'm trying to implement a boosting model using Tensorflow's BoostedTreesRegressor. For that, I need to implement a custom loss function where, during training, the loss will be calculated according to the logic defined in my custom function rather…
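For context, stock usage of the estimator looks roughly like the sketch below (assuming a TensorFlow version that still ships tf.estimator); it does not show a custom loss, which the public BoostedTreesRegressor interface does not expose, so treat it purely as the baseline the question wants to modify:

    import tensorflow as tf  # assumes a TF release with tf.estimator available

    feature_columns = [tf.feature_column.numeric_column('x')]

    def input_fn():
        # Tiny in-memory dataset, purely illustrative.
        features = {'x': [[1.0], [2.0], [3.0], [4.0]]}
        labels = [[1.1], [2.1], [2.9], [4.2]]
        return tf.data.Dataset.from_tensor_slices((features, labels)).batch(4).repeat()

    est = tf.estimator.BoostedTreesRegressor(
        feature_columns=feature_columns,
        n_batches_per_layer=1,  # the whole (single-batch) dataset per layer
        n_trees=50,
        max_depth=3,
    )
    est.train(input_fn, max_steps=100)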
2
votes
1 answer

How does LightGBM (or other boosted-trees implementations with 2nd-order approximations of the loss) work for L1 losses?

I've been trying to understand how LightGBM handles L1 losses (MAE, MAPE, Huber). According to this article, the gain during a split should depend only on the first and second derivatives of the loss function. This is due to the fact that LightGBM…
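The split gain the question refers to, in the usual second-order (XGBoost/LightGBM-style) notation, where g_i and h_i are the first and second derivatives of the loss at the current prediction, L and R are the candidate left and right children, and lambda and gamma are regularisation terms:

    \text{gain} = \frac{1}{2}\left[
        \frac{\left(\sum_{i \in L} g_i\right)^2}{\sum_{i \in L} h_i + \lambda}
      + \frac{\left(\sum_{i \in R} g_i\right)^2}{\sum_{i \in R} h_i + \lambda}
      - \frac{\left(\sum_{i \in L \cup R} g_i\right)^2}{\sum_{i \in L \cup R} h_i + \lambda}
    \right] - \gamma

The tension the question points at: for a pure L1 loss the second derivative is zero almost everywhere, so implementations substitute a constant or surrogate Hessian rather than the true one.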
2
votes
1 answer

How can I correctly set a decaying learning rate callback, passing it a custom function, in xgboost?

I have this function to set up a decaying learning rate: def learning_rate_005_decay_power_099(current_iter): base_learning_rate = 0.05 lr = base_learning_rate * np.power(.99, current_iter) return lr if lr > 1e-3 else 1e-3 Now I want…
Miguel 2488
  • 1,410
  • 1
  • 20
  • 41
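A sketch of one way to wire that function in, assuming a reasonably recent XGBoost (1.3 or later) whose native API ships xgboost.callback.LearningRateScheduler; older releases used a reset_learning_rate callback instead:

    import numpy as np
    import xgboost as xgb
    from sklearn.datasets import make_regression

    def learning_rate_005_decay_power_099(current_iter):
        # Same decay rule as in the question, floored at 1e-3.
        lr = 0.05 * np.power(0.99, current_iter)
        return lr if lr > 1e-3 else 1e-3

    X, y = make_regression(n_samples=200, n_features=5, random_state=0)
    dtrain = xgb.DMatrix(X, label=y)

    # LearningRateScheduler accepts a callable mapping round -> learning rate.
    booster = xgb.train(
        {'objective': 'reg:squarederror'},
        dtrain,
        num_boost_round=50,
        callbacks=[xgb.callback.LearningRateScheduler(learning_rate_005_decay_power_099)],
    )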
2
votes
1 answer

What evaluation metric to use for LightGBM ranker function

I'm using LGBMRanker from LightGBM, but I'm not sure what evaluation metric I should be using. Here is my code: import lightgbm as lgb gbm = lgb.LGBMRanker() gridParams = { 'learning_rate': [0.005,0.01,0.02], 'max_depth': [5,6,7], 'n_estimators':…
HHH
  • 6,085
  • 20
  • 92
  • 164
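A sketch of LGBMRanker with NDCG, the metric LightGBM's lambdarank objective reports by default; the synthetic data and group sizes below are placeholders:

    import lightgbm as lgb
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = rng.integers(0, 4, size=100)   # graded relevance labels 0-3
    group = [20, 30, 50]               # sizes of the 3 queries, summing to 100

    gbm = lgb.LGBMRanker(objective='lambdarank', n_estimators=50)

    # For ranking, LightGBM evaluates NDCG at the requested cutoffs.
    gbm.fit(
        X, y,
        group=group,
        eval_set=[(X, y)],
        eval_group=[group],
        eval_at=[5, 10],               # report ndcg@5 and ndcg@10
    )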
2
votes
1 answer

Time series forecasting with gradient boosting in R

I have a question that refers to both the xgbar function (forecastxgb package) and the forecast function (forecast package). Usually, when I use an object of class forecast, I get both point predictions and confidence intervals, but not in this case: model <-…
M_D
  • 287
  • 3
  • 13
2
votes
1 answer

Caret using the C5.0 method: how to plot the final tree

I am using caret's train function with method = "C5.0" and would like to see the finalModel plotted as a tree. The resulting tree has been defined: The final values used for the model were trials = 15, model = tree and winnow = FALSE. When I tried to plot the…
E B
  • 1,073
  • 3
  • 23
  • 36
2
votes
1 answer

How to know when to use Solr bq vs. bf and how to apply query logic?

I'm just starting to learn about boosting in Solr, and so far I've been able to add boost queries based on some specific phrases like: bq=manufacturer:sony^2. However, I'm now looking to apply logic to a boost and I'm not sure how to…
muZero
  • 948
  • 9
  • 22
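For orientation on that question, the two parameters differ in kind: bq adds the score of an extra Lucene query to matching documents, while bf adds the value of a function. A hypothetical edismax request mixing both (the field names are made up for illustration):

    q=camera&defType=edismax
    &bq=manufacturer:sony^2
    &bf=log(popularity)

Here bq boosts documents matching the clause manufacturer:sony, and bf boosts every document by a function of its popularity field.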
2
votes
2 answers

Boosting classification tree in R

I'm trying to boost a classification tree using the gbm package in R, and I'm a little confused about the kind of predictions I obtain from the predict function. Here is my code: #Load packages, set random seed library(gbm) set.seed(1)…
Armin
  • 23
  • 1
  • 1
  • 3
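The question is about R's gbm and its predict() scales; as a hedged aside only, the corresponding distinction in Python's scikit-learn (not the asker's environment) is predict versus predict_proba:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = make_classification(n_samples=200, random_state=1)
    clf = GradientBoostingClassifier(random_state=1).fit(X, y)

    print(clf.predict(X[:3]))        # hard class labels
    print(clf.predict_proba(X[:3]))  # class probabilities, rows sum to 1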