
How to add Reduce Learning Rate On Plateau in XGBoost?

In TensorFlow, we can use a callback named ReduceLROnPlateau, which reduces the learning rate slightly when the model stops improving. Does anyone know how to do this in XGBoost? I want to know whether there is any way to reduce the learning rate when an XGBoost model stops learning.
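For reference, this is roughly the Keras callback being described. This is a minimal, self-contained sketch; the toy model and data are made up purely for illustration and are not part of the question.

import numpy as np
import tensorflow as tf

# Toy regression data and model, only so the example runs end to end.
X = np.random.rand(256, 4)
y = X.sum(axis=1)

model = tf.keras.Sequential([tf.keras.layers.Dense(8, activation="relu"),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss",  # metric to watch for a plateau
    factor=0.5,          # multiply the learning rate by this factor on a plateau
    patience=5,          # epochs with no improvement before reducing
    min_lr=1e-5,         # lower bound on the learning rate
)

model.fit(X, y, validation_split=0.2, epochs=50, callbacks=[reduce_lr])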

asked by Sticky

1 Answer

This article claims:

A quick Google search reveals that there has been some work done utilizing a decaying learning rate, one that starts out large and shrinks at each round. But we typically see no accuracy gains in Cross Validation and when looking at test error graphs there are minimal differences between its performance and using a regular old constant.

They wrote a Python package called BetaBoost to find an optimal sequence for the learning rate scheduler.

In principle, they seem to use a function that returns the learning rates for the LearningRateScheduler:

from scipy.stats import beta
def beta_pdf(scalar=1.5,
             a=26,
             b=1,
             scale=80,
             loc=-68,
             floor=0.01,
             n_boosting_rounds=100):
    """
    Get the learning rate from the beta PDF
    Returns
    -------
    lrs : list
        the resulting learning rates to use.
    """
    lrs = [scalar*beta.pdf(i,
                           a=a, 
                           b=b, 
                           scale=scale, 
                           loc=loc) 
           + floor for i in range(n_boosting_rounds)]
    return lrs

[...]

xgb.train(
    [...],
    callbacks=[xgb.callback.LearningRateScheduler(beta_pdf())]
)
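For completeness, here is a hedged, end-to-end sketch of how the scheduler might be wired up using the beta_pdf function above. The toy data, objective, and number of rounds are made up for illustration; only xgb.train and xgb.callback.LearningRateScheduler come from the answer. Note that the schedule needs one learning rate per boosting round, which is why n_boosting_rounds is matched to num_boost_round here.

import numpy as np
import xgboost as xgb

# Toy regression data, purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = X @ rng.normal(size=10)
dtrain = xgb.DMatrix(X, label=y)

n_rounds = 100
booster = xgb.train(
    {"objective": "reg:squarederror"},
    dtrain,
    num_boost_round=n_rounds,
    # One learning rate per boosting round, taken from the beta PDF schedule.
    callbacks=[xgb.callback.LearningRateScheduler(
        beta_pdf(n_boosting_rounds=n_rounds))],
)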
answered by Arigion