
My understanding of gradient boosting is this...

We can make the model much more complex by building lots of decision trees sequentially. Each decision tree builds on the ones before it: the goal of each new tree is to fix the errors where the previous trees are most wrong. If we had 3,000 decision trees, the errors would get corrected 3,000 times in a row, so by the end we would have greatly reduced them.
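In code, my mental model is roughly this. It's just a minimal sketch for regression with squared-error loss; the learning rate and the use of sklearn's `DecisionTreeRegressor` as the base learner are illustrative choices, not something I'm asking about:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_boosted_trees(X, y, n_trees=3000, learning_rate=0.1, max_depth=2):
    prediction = np.full(len(y), y.mean())  # start from a constant model
    trees = []
    for _ in range(n_trees):
        residuals = y - prediction  # where the ensemble so far is still wrong
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        prediction += learning_rate * tree.predict(X)  # nudge toward the targets
        trees.append(tree)
    return y.mean(), trees

def predict(base, trees, X, learning_rate=0.1):
    # final prediction = initial guess + sum of all the small corrections
    return base + learning_rate * sum(t.predict(X) for t in trees)
```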

Are there any faults in my understanding?

asilvester635
    Since this doesn't seem to be about a specific programming problem or language, I suggest you ask that over at https://stats.stackexchange.com – Sentry Apr 29 '18 at 21:46

1 Answer


No, your understanding is essentially correct: gradient boosting works by having each subsequent predictor learn from the mistakes of the previous ones. To be precise, each new predictor is fit to the residual errors (the pseudo-residuals, i.e., the negative gradients of the loss) of the ensemble built so far, so the observations the ensemble gets most wrong have the greatest influence on the next model. The base predictors can be chosen from a range of models, though shallow decision trees are the usual choice. Because each new predictor corrects the mistakes committed by its predecessors, it takes relatively few iterations to get close to the actual targets.
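As a rough illustration (the dataset and hyperparameters here are made up for the example, not from your question), you can watch the training error shrink as trees are added using scikit-learn's `GradientBoostingRegressor`; `staged_predict` yields the ensemble's prediction after each successive tree:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

# synthetic regression data, just for demonstration
X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.1, max_depth=2)
model.fit(X, y)

# training MSE after each added tree
errors = [mean_squared_error(y, pred) for pred in model.staged_predict(X)]
print(errors[0], errors[99], errors[-1])  # error typically drops, then flattens
```

Note that the training error keeps decreasing with more trees, but past some point you are mostly fitting noise, which is why the number of trees and the learning rate are usually tuned on held-out data.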