Gradient Boosting is a generalization of boosting to arbitrary loss functions
Similar to AdaBoost, Gradient Boosting builds upon previous models by adding new models that compensate for the errors of earlier ones. However, it takes a different approach:
- Forward stagewise boosting adds, at each step, the tree that reduces the loss the most (given the current model)
- Instead of updating sample weights at each learning stage, it trains new models on the residual errors from previous stages
- This focus on residual errors allows for more precise error correction and model improvement (see the sketch after this list)
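A minimal sketch of this residual-fitting idea for regression with squared loss, where each new tree is fit to the residuals of the current model; the data, stage count, and learning rate below are illustrative assumptions, not a prescribed setup.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X.ravel()) + rng.normal(scale=0.1, size=200)

n_stages = 50          # number of boosting stages (assumed value)
learning_rate = 0.1    # shrinkage applied to each new tree (assumed value)

# Start from a constant prediction (the mean minimizes squared loss).
prediction = np.full_like(y, y.mean())
trees = []

for _ in range(n_stages):
    # Residuals are the negative gradient of squared loss w.r.t. the current prediction.
    residuals = y - prediction
    tree = DecisionTreeRegressor(max_depth=2)
    tree.fit(X, residuals)
    # Add a shrunken version of the new tree to the ensemble.
    prediction += learning_rate * tree.predict(X)
    trees.append(tree)

print("final training MSE:", np.mean((y - prediction) ** 2))
```

With squared loss the residuals coincide with the negative gradient of the loss, which is why fitting trees to residuals is a special case of the general gradient boosting procedure.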
Gradient Boosting Tools
For classification:
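A short usage sketch, assuming scikit-learn's GradientBoostingClassifier; the dataset and hyperparameter values are illustrative.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Typical knobs: number of trees, shrinkage, and tree depth (values assumed).
clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```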