AddGBoost: A gradient boosting-style algorithm based on strong learners

Moshe Sipper, Jason H. Moore

Research output: Contribution to journal › Article › peer-review

Abstract

We present AddGBoost, a gradient boosting-style algorithm in which the decision tree is replaced by a succession of (possibly) stronger learners, each optimized via a state-of-the-art hyperparameter optimizer. Through experiments over 90 regression datasets, we show that AddGBoost emerges as the top performer for between 33% (with 2 stages) and 42% (with 5 stages) of the datasets, compared with seven well-known machine-learning algorithms: KernelRidge, LassoLars, SGDRegressor, LinearSVR, DecisionTreeRegressor, HistGradientBoostingRegressor, and LGBMRegressor.
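To make the stage-wise idea concrete, below is a minimal sketch of the additive scheme the abstract describes: each stage fits a learner to the residuals left by the sum of the previous stages, and the final prediction is the sum of all stage predictions. The class name `AdditiveStagesRegressor`, the choice of `LinearSVR` as the stage learner, and the fixed learner across stages are illustrative assumptions, not the paper's exact method; in particular, this sketch omits the per-stage hyperparameter optimization that AddGBoost performs.

```python
# A hedged sketch of a gradient boosting-style loop over (possibly strong)
# learners, assuming scikit-learn estimators. Not the authors' implementation.
import numpy as np
from sklearn.base import clone
from sklearn.svm import LinearSVR


class AdditiveStagesRegressor:
    """Fit a succession of learners, each on the current residuals."""

    def __init__(self, base_learner, n_stages=3):
        self.base_learner = base_learner
        self.n_stages = n_stages

    def fit(self, X, y):
        self.stages_ = []
        residual = np.asarray(y, dtype=float)
        for _ in range(self.n_stages):
            stage = clone(self.base_learner)   # fresh copy per stage
            stage.fit(X, residual)
            residual = residual - stage.predict(X)  # what remains unexplained
            self.stages_.append(stage)
        return self

    def predict(self, X):
        # The additive model's prediction is the sum over all stages.
        return np.sum([s.predict(X) for s in self.stages_], axis=0)


# Illustrative usage on synthetic data:
if __name__ == "__main__":
    from sklearn.datasets import make_regression

    X, y = make_regression(n_samples=200, n_features=10, noise=0.1,
                           random_state=0)
    model = AdditiveStagesRegressor(LinearSVR(max_iter=10_000),
                                    n_stages=5).fit(X, y)
    print(model.predict(X[:3]))
```

Replacing the decision tree with a stronger, independently tuned learner at each stage is the design choice the paper investigates; the number of stages (2 to 5 in the reported experiments) plays the role that the number of trees plays in standard gradient boosting.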
Original language: English
Article number: 100243
Number of pages: 4
Journal: Machine Learning with Applications
Volume: 7
DOIs:
State: Published - Mar 2022
