
Currently I'm averaging predictions using the same model: I re-initialize the model, fit it to a different split drawn from the same data sample, and then average over all of the prediction accuracies.

There seem to be multiple possible ways to do this.

  1. Average together the predictions such as in this question.

  2. Average all the final model weights together such as here.

  3. I could build an average ensemble (but using the same model for all of the input models), or go a step further and make it a weighted-average ensemble.

  4. I could stack ensembles to create a model that learns which models are the best predictors.
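To make options 1 and 2 concrete, here is a minimal numpy sketch. Everything in it is a hypothetical stand-in: the toy data and the bias-free logistic classifier fit by gradient descent play the role of the real data sample and the re-initialised Keras model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: label = 1 when x0 + x1 > 0, so a bias-free
# linear model can separate it (hypothetical, not the asker's data).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def fit_linear(X, y, epochs=300, lr=0.1):
    """Bias-free logistic regression via gradient descent -- a stand-in
    for re-initialising and refitting one fixed architecture."""
    w = rng.normal(size=X.shape[1])
    for _ in range(epochs):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

# Refit the same model 5 times, each on a bootstrap resample.
weights = []
for _ in range(5):
    idx = rng.choice(len(X), size=len(X), replace=True)
    weights.append(fit_linear(X[idx], y[idx]))

# Option 1: average the models' predictions, then threshold once.
probs = np.mean([sigmoid(X @ w) for w in weights], axis=0)
acc_pred_avg = np.mean((probs > 0.5) == y)

# Option 2: average the weights themselves and predict with the merged
# model (only meaningful because every member shares one architecture).
w_avg = np.mean(weights, axis=0)
acc_weight_avg = np.mean((sigmoid(X @ w_avg) > 0.5) == y)

print(acc_pred_avg, acc_weight_avg)
```

For a linear model like this the two options behave almost identically; for deep networks, prediction averaging always applies, whereas naively averaging weights of independently initialised networks generally does not work well because the networks occupy different regions of the loss landscape.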

While 1 and 2 deal with the same type of model (e.g. Keras models), 3 and 4 can combine multiple different models. But is it a good approach to use 3 and 4 instead of 1 and 2 by simply making every model in the ensemble identical (though trained on different training sets)? Since the ensemble approach also allows different types of models, it seems that 3 and 4 could be used in place of 1 or 2, as they are more general: for example, using 3 to find a weighted average of N copies of the same model. If so, would stacking ensembles (in 4) be better than just weighting them (in 3), that is, creating a higher-level model that learns which of the lower-level models make better predictions?
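Options 3 and 4 can be sketched in the same spirit, using N copies of one model as the ensemble members. Everything here is a hypothetical stand-in: the toy data, the bias-free logistic fit, and the accuracy-proportional weighting scheme are illustrative choices, not a prescribed method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data, split into a training part for the members and a held-out
# part for choosing ensemble weights / fitting the meta-learner.
X = rng.normal(size=(300, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(float)
X_tr, y_tr, X_val, y_val = X[:200], y[:200], X[200:], y[200:]

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def fit_linear(X, y, epochs=300, lr=0.1):
    """Bias-free logistic regression via gradient descent (stand-in
    for any fixed architecture, e.g. a re-initialised Keras model)."""
    w = rng.normal(size=X.shape[1])
    for _ in range(epochs):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

# N copies of the same model, each fit to a different bootstrap resample.
members = []
for _ in range(5):
    idx = rng.choice(len(X_tr), size=len(X_tr), replace=True)
    members.append(fit_linear(X_tr[idx], y_tr[idx]))

val_preds = np.array([sigmoid(X_val @ w) for w in members])  # (5, 100)

# Option 3: weight each member in proportion to its held-out accuracy.
accs = np.array([np.mean((p > 0.5) == y_val) for p in val_preds])
alpha = accs / accs.sum()
acc_weighted = np.mean(((alpha @ val_preds) > 0.5) == y_val)

# Option 4: stacking -- learn the combination instead of hand-picking it.
# Meta-features are the members' predictions, centred so the bias-free
# meta-learner can separate them.  (A real setup would evaluate the
# meta-learner on a split it was not fitted on.)
meta_X = val_preds.T - 0.5
w_meta = fit_linear(meta_X, y_val)
acc_stacked = np.mean((sigmoid(meta_X @ w_meta) > 0.5) == y_val)

print(acc_weighted, acc_stacked)
```

As a rough rule, a weighted average is the special case of stacking where the meta-learner is a fixed linear combination, so 4 is at least as expressive as 3; the trade-off is that the meta-learner needs its own held-out data and can itself overfit when the members are few or highly correlated.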

Relative0

0 Answers