# Experiments with a New Boosting Algorithm

@inproceedings{Freund1996ExperimentsWA, title={Experiments with a New Boosting Algorithm}, author={Yoav Freund and Robert E. Schapire}, booktitle={ICML}, year={1996} }

In an earlier paper, we introduced a new "boosting" algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. [...] In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem.
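The reweighting scheme the abstract alludes to can be illustrated with a minimal sketch (this is not the paper's code; the decision-stump weak learner and all helper names are hypothetical, and the data is 1-D with labels in {-1, +1}):

```python
import math

def train_stump(xs, ys, ws):
    # Pick the threshold/sign decision stump with lowest weighted error.
    best = None
    for t in xs:
        for sign in (1, -1):
            err = sum(w for x, y, w in zip(xs, ys, ws)
                      if (sign if x <= t else -sign) != y)
            if best is None or err < best[0]:
                best = (err, t, sign)
    return best

def stump_predict(stump, x):
    _, t, sign = stump
    return sign if x <= t else -sign

def adaboost(xs, ys, rounds=10):
    n = len(ys)
    ws = [1.0 / n] * n                      # uniform initial distribution
    ensemble = []
    for _ in range(rounds):
        stump = train_stump(xs, ys, ws)
        err = max(stump[0], 1e-10)
        if err >= 0.5:
            break                           # weak learner no better than random
        alpha = 0.5 * math.log((1 - err) / err)
        preds = [stump_predict(stump, x) for x in xs]
        ws = [w * math.exp(-alpha * y * p)  # up-weight the mistakes
              for w, y, p in zip(ws, ys, preds)]
        z = sum(ws)
        ws = [w / z for w in ws]            # renormalize to a distribution
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(s, x) for a, s in ensemble)
    return 1 if score >= 0 else -1
```

Each round concentrates weight on the examples the current ensemble gets wrong, which is what lets even a "slightly better than random" base learner be driven to low training error.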

#### 8,405 Citations

A New Boosting Algorithm Using Input-Dependent Regularizer

- Computer Science
- ICML 2003
- 2003

Empirical studies on eight different UCI data sets and one text categorization data set show that WeightBoost almost always achieves considerably better classification accuracy than AdaBoost, and experiments on data with artificially controlled noise indicate that WeightBoost is more robust to noise than AdaBoost.

An efficient modified boosting method for solving classification problems

- Mathematics
- 2008

Based on the AdaBoost algorithm, a modified boosting method is proposed in this paper for solving classification problems. This method predicts the class label of an example as the weighted majority…

Improved Boosting Algorithms Using Confidence-rated Predictions

- Computer Science
- COLT' 98
- 1998

We describe several improvements to Freund and Schapire's AdaBoost boosting algorithm, particularly in a setting in which hypotheses may assign confidences to each of their predictions. We give a…

Boosting Neural Networks

- Computer Science, Medicine
- Neural Computation
- 2000

It is suggested that random resampling of the training data is not the main explanation of the success of the improvements brought by AdaBoost, and training methods based on sampling the training set and weighting the cost function are compared.

Stopping Criterion for Boosting-Based Data Reduction Techniques: from Binary to Multiclass Problem

- Mathematics, Computer Science
- J. Mach. Learn. Res.
- 2002

The aim of the present paper is to relax the class constraint, and extend the contribution to multiclass problems, showing the benefits that the boosting-derived weighting rule brings to weighted nearest neighbor classifiers.

Supervised projection approach for boosting classifiers

- Mathematics, Computer Science
- Pattern Recognit.
- 2009

A new approach to boosting for the construction of ensembles of classifiers, based on using the distribution given by the weighting scheme of boosting to construct a non-linear supervised projection of the original variables, instead of using the weights of the instances to train the next classifier.

Quadratic boosting

- Computer Science
- Pattern Recognit.
- 2008

The quadratic boosting algorithm converges under the condition that the given base learner minimizes the empirical error and is shown to compare favorably with AdaBoost on large data sets at the cost of training speed.

An Empirical Boosting Scheme for ROC-Based Genetic Programming Classifiers

- Computer Science
- EuroGP
- 2007

A geometrical interpretation of the ROC curve is proposed to attribute an error measure to every training case, and boosted Genetic Programming performance is compared with published results on ROC-based Evolution Strategies and Support Vector Machines.

Training Methods for Adaptive Boosting of Neural Networks

- Computer Science
- NIPS
- 1997

This paper uses AdaBoost to improve the performances of neural networks and compares training methods based on sampling the training set and weighting the cost function.

#### References

Showing 1-10 of 34 references

A decision-theoretic generalization of on-line learning and an application to boosting

- Computer Science
- EuroCOLT
- 1995

The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting, and the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems.
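The multiplicative weight-update rule this snippet refers to can be sketched in a toy "experts" setting (a hedged illustration under assumed names, not the paper's formulation): each expert's weight is multiplied down by a factor beta raised to the loss it incurs, so weight concentrates on the experts that perform well.

```python
import math

def hedge(losses, beta=0.5):
    """Multiplicative weight-update over experts.

    losses: list of rounds; each round is a list of per-expert losses in [0, 1].
    Returns the learner's cumulative expected loss and the final weights.
    """
    n = len(losses[0])
    w = [1.0] * n                            # uniform initial weights
    total = 0.0
    for round_losses in losses:
        z = sum(w)
        p = [wi / z for wi in w]             # current distribution over experts
        total += sum(pi * li for pi, li in zip(p, round_losses))
        w = [wi * beta ** li                 # penalize lossy experts
             for wi, li in zip(w, round_losses)]
    return total, w
```

With two experts, one always right and one always wrong, the bad expert's weight halves each round, so the learner's cumulative loss stays bounded rather than growing linearly.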

Improving Performance in Neural Networks Using a Boosting Algorithm

- Computer Science
- NIPS
- 1992

The effect of boosting is reported on four databases consisting of 12,000 digits from segmented ZIP codes from the United States Postal Service and the following from the National Institute of Standards and Technology (NIST).

A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting

- Computer Science, Mathematics
- COLT 1997
- 1997

The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting, and it is shown that the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems.

Applying the Weak Learning Framework to Understand and Improve C4.5

- Computer Science
- ICML
- 1996

This paper performs experiments suggested by the formal results for AdaBoost and C4.5 within the weak learning framework, and argues through experimental results that the theory must be understood in terms of a measure of a boosting algorithm's behavior called its advantage sequence.

On the boosting ability of top-down decision tree learning algorithms

- Mathematics, Computer Science
- STOC '96
- 1996

This work analyzes the performance of top-down algorithms for decision tree learning and proves that some popular and empirically successful heuristics that are based on first principles meet the criteria of an independently motivated theoretical model.

Boosting and Other Ensemble Methods

- Computer Science
- Neural Computation
- 1994

A surprising result is shown for the original boosting algorithm: namely, that as the training set size increases, the training error decreases until it asymptotes to the test error rate.

Bias, Variance, and Arcing Classifiers

- Computer Science
- 1996

This work explores two arcing algorithms, compares them to each other and to bagging, and tries to understand why arcing is more successful than bagging at variance reduction.

Boosting Decision Trees

- Computer Science
- NIPS
- 1995

A constructive, incremental learning system for regression problems that models data by means of locally linear experts that does not compete for data during learning and derives asymptotic results for this method.

C4.5: Programs for Machine Learning

- Computer Science
- 1992

A complete guide to the C4.5 system as implemented in C for the UNIX environment, which starts from simple core learning methods and shows how they can be elaborated and extended to deal with typical problems such as missing data and overfitting.

Boosting Performance in Neural Networks

- Computer Science
- Int. J. Pattern Recognit. Artif. Intell.
- 1993

The boosting algorithm is used to construct an ensemble of neural networks that significantly, and in some cases dramatically, improves performance (compared to a single network) on optical character recognition (OCR) problems.