
Add score tree interval #5238

Open
wants to merge 14 commits into master
Conversation

@kylejn27 (Contributor) commented Jan 27, 2020

What

Adds eval_start to control the start iteration for model evaluation during model training
Adds eval_interval to control the interval that model evaluation occurs during model training

Why

Implements #5090
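The intended semantics of the two parameters can be sketched as a small predicate (a hypothetical helper for illustration only, not code from this PR's diff): evaluation is skipped before `eval_start`, then runs every `eval_interval` iterations.

```python
def should_evaluate(iteration, eval_start=0, eval_interval=1):
    """Return True when model evaluation should run at this boosting
    iteration, per the proposed eval_start / eval_interval semantics.

    Hypothetical helper illustrating the PR's behavior; the defaults
    (0 and 1) reproduce the current evaluate-every-iteration behavior.
    """
    if iteration < eval_start:
        return False
    return (iteration - eval_start) % eval_interval == 0
```

For example, with `eval_start=5` and `eval_interval=3`, evaluation would run at iterations 5, 8, 11, and so on.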

@@ -1153,7 +1172,9 @@ def _dmat_init(group, **params):
evals=evals,
evals_result=evals_result, feval=feval,
verbose_eval=verbose, xgb_model=xgb_model,
callbacks=callbacks)
callbacks=callbacks,
Member:

Thanks for submitting a PR for this! I like the idea of selecting a range of iterations for evaluation. But I really hope that we can implement it as a simple callback instead of parameters plus a callback. That might need some refactoring.

Contributor Author:

Thanks for the feedback! I think that's a good idea; I'll see what I can do.

Contributor Author:

@trivialfis Finally have some time to get back to this.

I agree with you that it would be nice for this to be implemented as only a callback. It's entirely possible that I'm missing something obvious, but I'm not sure it can be done without making significant changes to the code.

The main functionality added by this PR is the ability to skip the bst.eval_set call when the current iteration doesn't fall within the specified range (score_tree_interval).

I suppose modifying _train_internal to run bst.eval_set inside a callback might work. This new score callback could be initialized with eval_start and eval_interval, defaulting to 0 and 1 respectively. A user who wishes to change the defaults could add this callback manually with different initialization parameters. The only problem is that the state of evaluation_result_list would have to be passed around between the different callbacks, and the order in which the callbacks execute would matter (this new callback would have to run first).

Happy to chat about this further; perhaps moving where the model evaluation occurs is the best solution.
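The callback approach discussed above could look roughly like the sketch below. The `EvaluationCallback` name, the `env` fields, and the `eval_set` call are assumptions modeled on xgboost's legacy function-style callback interface, not the actual implementation in this PR.

```python
class EvaluationCallback:
    """Hypothetical sketch: run evaluation inside a callback and skip
    bst.eval_set when the current iteration is outside the configured
    range. Field names on `env` are assumptions for illustration.
    """

    def __init__(self, eval_start=0, eval_interval=1):
        self.eval_start = eval_start
        self.eval_interval = eval_interval

    def __call__(self, env):
        i = env.iteration
        # Skip evaluation outside the configured range/interval.
        if i < self.eval_start or (i - self.eval_start) % self.eval_interval:
            return
        # env.model / env.evals stand in for the booster and eval sets;
        # results are appended so later callbacks can read them, which is
        # why this callback would have to run first.
        env.evaluation_result_list.append(env.model.eval_set(env.evals, i))
```

With `eval_start=2, eval_interval=2`, a 6-iteration loop would trigger evaluation only at iterations 2 and 4.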

Member:

@kylejn27 Let me take a deeper look.

@trivialfis trivialfis self-assigned this Mar 6, 2020
@trivialfis (Member):
The new callback has landed; I will revisit this feature.

@trivialfis mentioned this pull request Feb 8, 2021
@trivialfis mentioned this pull request Apr 12, 2021