There are various machine-learning algorithms, a.k.a. “learners”, for regression and classification problems. Ensemble methods combine a set of base learners into a single, more powerful learner. Ensembles typically produce better results than any of the individual algorithms and are better at reducing generalization error, which is why the base learners are often referred to as weak learners when discussing ensembles.
The base learning algorithms used by an ensemble can be of different types; for example, the individual base learners could be Bayesian classifiers, Decision Trees, SVMs, etc. In other cases, the base learners are all the same algorithm, trained with different tuning parameters and training sets; in a Random Forest ensemble, for example, each base learner is a Decision Tree.
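To make the idea concrete, here is a minimal sketch of combining base learners by majority voting. The three decision stumps and their thresholds are hypothetical stand-ins for real trained models (a Decision Tree, an SVM, etc.); only the voting logic is the point.

```python
from collections import Counter

# Hypothetical weak learners: trivial rule-based classifiers standing in
# for real trained models. Each maps a 2-feature input to class 0 or 1.
def stump_a(x):
    return 1 if x[0] > 0.5 else 0      # split on the first feature

def stump_b(x):
    return 1 if x[1] > 0.3 else 0      # split on the second feature

def stump_c(x):
    return 1 if x[0] + x[1] > 0.9 else 0  # split on the feature sum

def majority_vote(learners, x):
    """Combine base-learner predictions by simple majority voting."""
    votes = [learner(x) for learner in learners]
    return Counter(votes).most_common(1)[0][0]

learners = [stump_a, stump_b, stump_c]
print(majority_vote(learners, (0.6, 0.1)))  # votes 1, 0, 0 -> predicts 0
print(majority_vote(learners, (0.7, 0.4)))  # votes 1, 1, 1 -> predicts 1
```

Even though each stump looks at only one crude rule, the combined vote can correct an individual learner's mistake, which is the intuition behind ensembles in general.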
I’m going to discuss the five key ensemble techniques (Voting, Stacking, Bagging, Random Forest, and Boosting) and will attempt to represent each of them with a simple graphic.