I'm rather new to machine learning, and to programming itself, so I'm sorry if the questions I'm about to ask don't make much sense. I've been using 5 different, not-so-weak classifiers (5 neural networks with error rates around 0.25-0.3), and I got stuck implementing AdaBoost on top of them. What I don't understand is: do I have to train all 5 classifiers over and over again in every single iteration and then calculate the error? And if so, how do I use the fact that I should train 'harder' on some examples (those with higher weight)?
- Yes, you have to choose the weak classifier that gives the minimum error at each iteration. So the answer to your question is: yes, you have to train all 5 classifiers over and over again in every single iteration and then calculate the error (a sketch of this loop follows these comments). The next question, regarding "weight", has a detailed answer; in summary, when you change the weights, some examples are given more "attention" by the classifier. You can read the AdaBoost literature for formal proofs. Read Table 1 in [this](http://www.vision.caltech.edu/html-files/EE148-2005-Spring/pprs/viola04ijcv.pdf) paper; it clears up implementation-related doubts. – Autonomous Jul 15 '14 at 18:38
- Study `adaboostTrain` and `adaboostApply` on [this](http://vision.ucsd.edu/~pdollar/toolbox/doc/) page; it will help your understanding. They use a tree as the weak classifier, but you can replace it with anything. – Autonomous Jul 15 '14 at 18:40
- What I can't seem to understand is how I'm supposed to give them more "attention"? – Milan Stefanovic Jul 15 '14 at 18:43
- How you are supposed to give them more attention: you don't have to do anything explicitly. If you follow the algorithm, the examples misclassified in the previous iteration automatically get more attention in the next iteration (the update rule is written out after these comments). The better question to ask is: how do they get more "attention"? The answer is in those proofs; I will summarize them when I get time. – Autonomous Jul 15 '14 at 20:25
- Actually, what I don't get is how to train my NN cost function with respect to the distribution. Do I just multiply each training example's loss by its weight? (See the weighted-loss sketch after these comments.) – Milan Stefanovic Jul 17 '14 at 14:40
- Sorry, please post a different question for this; I am not an expert in NNs. – Autonomous Jul 17 '14 at 17:23
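
To make the loop described above concrete, here is a minimal AdaBoost.M1-style sketch in Python/NumPy, assuming labels in {-1, +1}. The `candidates` list and the `train`/`.predict` interface are hypothetical stand-ins for however the five networks are trained; they are not from any particular library.

```python
import numpy as np

def adaboost(X, y, candidates, n_rounds):
    """AdaBoost.M1-style loop. X: (n, d) features; y: labels in {-1, +1}.
    `candidates` is a hypothetical list of trainers: each takes
    (X, y, weights) and returns a model exposing .predict(X)."""
    n = len(y)
    w = np.full(n, 1.0 / n)        # start from the uniform distribution
    ensemble = []                  # list of (alpha, model) pairs
    for _ in range(n_rounds):
        # Retrain every candidate under the current weights and keep the
        # one with the smallest *weighted* error on the training set.
        best_model, best_err, best_pred = None, np.inf, None
        for train in candidates:
            model = train(X, y, w)
            pred = model.predict(X)
            err = np.sum(w * (pred != y))
            if err < best_err:
                best_model, best_err, best_pred = model, err, pred
        if best_err >= 0.5:        # no better than chance: stop boosting
            break
        best_err = max(best_err, 1e-12)   # avoid log(0) on a perfect round
        alpha = 0.5 * np.log((1.0 - best_err) / best_err)
        ensemble.append((alpha, best_model))
        # Misclassified examples (y * pred == -1) are scaled by e^alpha > 1,
        # correctly classified ones by e^-alpha < 1: this is the "attention".
        w *= np.exp(-alpha * y * best_pred)
        w /= w.sum()               # renormalize into a distribution
    return ensemble

def adaboost_predict(ensemble, X):
    """Final classifier: sign of the alpha-weighted vote."""
    return np.sign(sum(a * m.predict(X) for a, m in ensemble))
```

Note that "minimum error" here means minimum *weighted* error, which is why all five classifiers have to be retrained under the current distribution `w` before they can be compared.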
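
For reference, the mechanism behind the "attention" discussed above is the standard AdaBoost weight update (Freund & Schapire). With labels $y_i \in \{-1, +1\}$, round-$t$ weak hypothesis $h_t$, and weighted error $\varepsilon_t$:

$$\varepsilon_t = \sum_i w_i^{(t)} \, [h_t(x_i) \neq y_i], \qquad \alpha_t = \frac{1}{2} \ln \frac{1 - \varepsilon_t}{\varepsilon_t}, \qquad w_i^{(t+1)} = \frac{w_i^{(t)} \, e^{-\alpha_t y_i h_t(x_i)}}{Z_t},$$

where $Z_t$ normalizes the new weights to sum to 1. Since $y_i h_t(x_i) = -1$ exactly on the misclassified examples, their weights are multiplied by $e^{\alpha_t} > 1$ while the correctly classified ones shrink, which is how the hard examples get more "attention" automatically.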
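
On the deferred NN question: one common way to train a network "with respect to the distribution" is indeed to multiply each example's loss term by its AdaBoost weight before summing, so the gradient is dominated by the heavily weighted examples. A minimal sketch, with illustrative function names that are not from any library:

```python
import numpy as np

def weighted_loss(per_example_loss, w):
    """Scale each example's loss by its AdaBoost weight so that gradient
    descent pushes hardest on the heavily weighted (hard) examples."""
    return np.sum(w * per_example_loss)

# Toy usage: weighted squared error for network outputs vs. +/-1 targets.
out = np.array([0.9, -0.2, 0.7])   # network outputs
y   = np.array([1.0,  1.0, -1.0])  # targets
w   = np.array([0.2,  0.5,  0.3])  # current AdaBoost distribution
loss = weighted_loss((out - y) ** 2, w)   # the second example dominates
```

If the training code cannot accept per-example weights at all, an alternative that also appears in the AdaBoost literature is resampling: draw a new training set with each example chosen with probability proportional to its weight, and train on that.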