
By working worse, I mean an even higher training error.

from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

# Boosted SVC
clf = AdaBoostClassifier(base_estimator=SVC(random_state=1), random_state=1, algorithm="SAMME", n_estimators=5)
clf.fit(X, y)

# Only SVC
clf = SVC()
clf.fit(X, y)

My training data is shown below:

[image: training data]

The result of the boosted SVM:

[image: boosted SVM result]

And the result of the SVM alone:

[image: SVM result]

beaver
  • You should ask this on https://stats.stackexchange.com. Also, I think this happens because boosting requires more flexible algorithms (with more ways to separate the data) than a linear one; can you try a non-linear SVM? – Ibraim Ganiev Sep 20 '15 at 08:23
  • Also, it's not an implementation problem but a theoretical one; I'm also interested in an answer to this question. I did the same thing with a self-made AdaBoost implementation, and it didn't work properly either. Only with decision trees did I achieve normal results. – Ibraim Ganiev Sep 20 '15 at 08:29
  • @Olologin The default kernel for `sklearn.svm.SVC` is `rbf`, which is non-linear. – beaver Sep 21 '15 at 03:02

1 Answer


The main idea of AdaBoost is to combine weak learners; that's why the default base estimator is a decision stump. By using an SVM (a strong classifier) as the weak learner, you lose the point of ensemble learning and get worse results.
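
For illustration, here is a minimal sketch of this point, contrasting boosting over the default decision stump with boosting over a strong SVC. The toy dataset from make_classification is an assumption for illustration (the question's actual X and y are not shown), and the base_estimator parameter name follows the scikit-learn version used in the question (newer releases renamed it to estimator).

# Minimal sketch: boosting weak stumps vs. boosting a strong SVC.
# The toy dataset is an assumption for illustration; substitute your own X, y.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=1)

# AdaBoost over decision stumps: each round reweights the points the
# previous rounds misclassified, so the ensemble improves step by step.
stumps = AdaBoostClassifier(
    base_estimator=DecisionTreeClassifier(max_depth=1),
    n_estimators=50,
    random_state=1,
)
stumps.fit(X, y)
print("boosted stumps, training accuracy:", stumps.score(X, y))

# AdaBoost over a strong SVC: the base learner already fits the data in
# one shot, so reweighting adds little and can even hurt.
svc_boost = AdaBoostClassifier(
    base_estimator=SVC(random_state=1),
    algorithm="SAMME",  # SVC has no predict_proba by default, so use discrete SAMME
    n_estimators=5,
    random_state=1,
)
svc_boost.fit(X, y)
print("boosted SVC, training accuracy:", svc_boost.score(X, y))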