The concept of Ensemble Learning refers to the combination of several distinct classifiers in such a way as to maximize overall performance, leveraging the strengths of each classifier while mitigating its individual weaknesses.
At the core of Ensemble Learning is the notion of a weak classifier: a classifier capable of correctly classifying more than half of the samples in a binary problem, i.e., of performing slightly better than random guessing. When combined appropriately, weak classifiers allow for the construction of a strong classifier, while simultaneously addressing issues typical of traditional classifiers, overfitting being the primary concern.
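More precisely, for a binary problem with samples $x$ and labels $y$, a weak classifier $h$ is only required to achieve an error rate strictly below chance level:
\[
\Pr\left[h(x) \neq y\right] < \frac{1}{2}.
\]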
The origin of Ensemble Learning, the concept of a weak classifier, and in particular the notion of Probably Approximately Correct (PAC) learning were first introduced by Valiant (Val84).
Ensemble Learning techniques do not, in fact, provide general-purpose classifiers; rather, they prescribe how multiple classifiers should be combined.
Examples of Ensemble Learning techniques include Bagging and Boosting, of which AdaBoost is the best-known instance.
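In Boosting, for instance, the strong classifier $H$ is typically built as a weighted majority vote of $T$ weak classifiers $h_t$ (with outputs in $\{-1, +1\}$), each weighted by a coefficient $\alpha_t$ that grows with its accuracy:
\[
H(x) = \operatorname{sign}\left( \sum_{t=1}^{T} \alpha_t\, h_t(x) \right).
\]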
Examples of weak classifiers widely used in the literature include Decision Stumps (AL92), typically associated with Haar features (section 6.1). A Decision Stump is a binary classifier of the form
\[
h(x, f, p, \theta) =
\begin{cases}
1 & \text{if } p\, f(x) < p\, \theta, \\
0 & \text{otherwise},
\end{cases}
\]
where $f(x)$ is the value of a single feature computed on the sample $x$ (e.g., a Haar feature), $\theta$ is a decision threshold, and $p \in \{+1, -1\}$ is a polarity that sets the direction of the inequality.
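As a concrete illustration, the following Python sketch implements the stump above and combines a few stumps through an AdaBoost-style weighted vote. The one-dimensional toy feature, the exhaustive threshold search, and the number of rounds are illustrative assumptions, not the setup of any work cited here.

```python
# Minimal sketch: a Decision Stump and an AdaBoost-style weighted vote.
# The toy one-dimensional feature and all hyperparameters are
# illustrative assumptions, not taken from the cited references.
import numpy as np


def stump(f, theta, p):
    """Decision Stump: 1 where p * f(x) < p * theta, 0 otherwise."""
    return (p * f < p * theta).astype(int)


def train_stump(f, y, w):
    """Exhaustive search for the (theta, p) pair with lowest weighted error."""
    best_err, best_theta, best_p = np.inf, 0.0, 1
    for theta in np.unique(f):
        for p in (1, -1):
            err = w[stump(f, theta, p) != y].sum()
            if err < best_err:
                best_err, best_theta, best_p = err, theta, p
    return best_err, best_theta, best_p


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy binary problem: one feature value per sample, two classes.
    f = np.concatenate([rng.normal(-1.0, 1.0, 50), rng.normal(1.0, 1.0, 50)])
    y = np.concatenate([np.zeros(50, dtype=int), np.ones(50, dtype=int)])
    y_pm = 2 * y - 1                       # labels mapped to {-1, +1}

    w = np.full(len(y), 1.0 / len(y))      # uniform initial sample weights
    votes = np.zeros(len(y))
    for _ in range(10):                    # T = 10 boosting rounds
        err, theta, p = train_stump(f, y, w)
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        h_pm = 2 * stump(f, theta, p) - 1  # stump output in {-1, +1}
        w *= np.exp(-alpha * y_pm * h_pm)  # standard AdaBoost reweighting
        w /= w.sum()
        votes += alpha * h_pm              # accumulate the weighted vote

    print("training accuracy:", ((votes > 0).astype(int) == y).mean())
```

The reweighting step is the essential mechanism: samples misclassified in one round gain weight, so later stumps focus on exactly the examples the earlier ones got wrong.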