[feature proposal] Load best model after early stopping #4052
Seems to be a nice feature. Will look into this in the future. @hcho3 WDYT?
@trivialfis @PhilipMay Can you clarify what you mean? Currently, the scikit-learn interface automatically uses the best iteration when predicting:

xgboost/python-package/xgboost/sklearn.py (lines 419 to 421 at commit 1fc37e4)
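For reference, a minimal sketch of that behaviour through the scikit-learn wrapper (the dataset and parameters here are illustrative only, not from the issue):

```python
# Sketch: early stopping via the scikit-learn wrapper. With
# early_stopping_rounds set, fit() records the best iteration and
# predict() limits itself to it automatically.
import xgboost as xgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

clf = xgb.XGBClassifier(n_estimators=1000)
clf.fit(
    X_train, y_train,
    eval_set=[(X_valid, y_valid)],
    early_stopping_rounds=10,
)

# No extra arguments needed: the wrapper uses the best iteration found.
preds = clf.predict(X_valid)
```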
@hcho3 This is very interesting. So with the scikit-learn interface it should work the way I would like it to (load or use the best model). But what about the situation where you use the Booster class directly and not the scikit-learn interface? The Booster class documentation just says to see here:

xgboost/python-package/xgboost/core.py (line 1190 at commit 1fc37e4)

To me this means that it does not use the best model from early stopping.
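A sketch of the same setup through the native Booster API illustrates the difference (again with an illustrative dataset; `best_ntree_limit` is only set when early stopping actually triggers):

```python
# Sketch: native API. After early stopping, the returned Booster still
# contains ALL boosted rounds; best_iteration / best_ntree_limit are
# mere attributes, so predict() must be told to use them explicitly.
import xgboost as xgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)
dtrain = xgb.DMatrix(X_train, label=y_train)
dvalid = xgb.DMatrix(X_valid, label=y_valid)

bst = xgb.train(
    {"objective": "binary:logistic"},
    dtrain,
    num_boost_round=1000,
    evals=[(dvalid, "validation")],
    early_stopping_rounds=10,
)

preds_all = bst.predict(dvalid)  # uses every round, not just the best
preds_best = bst.predict(dvalid, ntree_limit=bst.best_ntree_limit)
```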
Yes, it appears that currently you need to use the scikit-learn interface to do what you want. (And use …)
Yes, and it would be nice to bring the scikit-learn interface and the Booster interface to a consistent state, where both return the best results from early stopping by default. Or is this just a documentation bug?
The issue is that only the scikit-learn interface saves the best number of rounds; the Booster object does not save it. As I said, it would be cleaner to simply truncate the model at the time of serializing (…)
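Until then, one possible workaround (not a built-in feature; this continues the native-API sketch above) is to record the best round yourself at save time and re-apply it after loading, since `save_model()` writes every round and does not persist `best_iteration`:

```python
# Workaround sketch: save_model() serializes all boosted rounds, and the
# best_iteration / best_ntree_limit attributes are NOT stored in the
# file, so they must be remembered out-of-band.
bst.save_model("model.bin")
best_ntree_limit = bst.best_ntree_limit  # keep this alongside the file

loaded = xgb.Booster()
loaded.load_model("model.bin")
# Re-apply the remembered limit when predicting with the reloaded model.
preds = loaded.predict(dvalid, ntree_limit=best_ntree_limit)
```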
Could you please implement a way to load the best model after early stopping? LightGBM also does this by default. See here: https://lightgbm.readthedocs.io/en/latest/Python-Intro.html#early-stopping (and the sketch after this message).
That would be a great improvement to my hyperparameter optimization workflow.
Thanks
Philip
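For comparison, a rough sketch of the LightGBM behaviour referenced above, following the Python API of the linked docs (dataset illustrative; parameter names are those of LightGBM versions from that era):

```python
# Sketch of LightGBM's early stopping per the linked docs: the trained
# booster records best_iteration, and predict() falls back to it by
# default once early stopping has fired.
import lightgbm as lgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

train_set = lgb.Dataset(X_train, label=y_train)
valid_set = lgb.Dataset(X_valid, label=y_valid, reference=train_set)

gbm = lgb.train(
    {"objective": "binary"},
    train_set,
    num_boost_round=1000,
    valid_sets=[valid_set],
    early_stopping_rounds=10,
)

# Passing num_iteration explicitly makes the intent clear, though
# predict() already defaults to the best iteration here.
preds = gbm.predict(X_valid, num_iteration=gbm.best_iteration)
```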