align verbosity options with documented behavior #682

Merged: 5 commits merged into main on May 9, 2023
Conversation

simonpcouch (Contributor)

Closes #677. That issue points out that, in most cases, the verbose option is what actually controls Bayesian tuning output. This PR follows up on #547 and aims to disambiguate verbose and verbose_iter so that each better aligns with its documented behavior.

Many of the changes here transition if (control$verbose) to if (isTRUE(control$verbose_iter)); the isTRUE() wrapper is needed since verbose_iter isn't an element of control_grid() output. The PR also adds an iter = FALSE argument to tune_log(), a helper used in many different places for many different purposes. Setting iter = TRUE marks a message as an update on the iterative search, which need not be an issue that triggers the catalog or log_problems().
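
As a rough sketch of that pattern (illustrative only: log_helper() below is a hypothetical stand-in for tune's internal tune_log(), whose real signature isn't reproduced here):

# A hypothetical stand-in for tune's internal tune_log() helper, used only
# to illustrate the iter flag described above.
log_helper <- function(msg, iter = FALSE) {
  prefix <- if (iter) "iter" else "log"
  message(sprintf("[%s] %s", prefix, msg))
}

control <- list(verbose = FALSE, verbose_iter = TRUE)

# Iteration updates are gated on verbose_iter rather than verbose; isTRUE()
# guards against control objects (e.g. from control_grid()) that have no
# verbose_iter element. iter = TRUE marks the message as an iterative-search
# update rather than a problem for the catalog or log_problems().
if (isTRUE(control$verbose_iter)) {
  log_helper("Current best: rmse=2.89 (@iter 0)", iter = TRUE)
}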

With this PR:

library(parsnip)
library(rsample)
library(tune)
set.seed(1)

spec <- nearest_neighbor("regression", neighbors = tune())
folds <- vfold_cv(mtcars, v = 3)
init <- tune_grid(spec, mpg ~ ., folds)

res_none <- 
  tune_bayes(
    spec, mpg ~ ., folds, iter = 2, initial = init,
    control = control_bayes()
  )

res_verbose_iter <- 
  tune_bayes(
    spec, mpg ~ ., folds, iter = 2, initial = init,
    control = control_bayes(verbose_iter = TRUE)
  )
#> Optimizing rmse using the expected improvement
#> 
#> ── Iteration 1 ─────────────────────────────────────────────────────────────────
#> 
#> i Current best:      rmse=2.89 (@iter 0)
#> i Gaussian process model
#> i Generating 7 candidates
#> i Predicted candidates
#> i neighbors=1
#> ♥ Newest results:    rmse=2.85 (+/-0.289)
#> 
#> ── Iteration 2 ─────────────────────────────────────────────────────────────────
#> 
#> i Current best:      rmse=2.85 (@iter 1)
#> i Gaussian process model
#> i Generating 6 candidates
#> i Predicted candidates
#> i neighbors=3
#> ⓧ Newest results:    rmse=2.948 (+/-0.345)

res_verbose <- 
  tune_bayes(
    spec, mpg ~ ., folds, iter = 2, initial = init,
    control = control_bayes(verbose = TRUE)
  )
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 7 candidates
#> i Predicted candidates
#> i Estimating performance
#> i Fold1: preprocessor 1/1
#> ✓ Fold1: preprocessor 1/1
#> i Fold1: preprocessor 1/1, model 1/1
#> ✓ Fold1: preprocessor 1/1, model 1/1
#> i Fold1: preprocessor 1/1, model 1/1 (extracts)
#> i Fold1: preprocessor 1/1, model 1/1 (predictions)
#> i Fold2: preprocessor 1/1
#> ✓ Fold2: preprocessor 1/1
#> i Fold2: preprocessor 1/1, model 1/1
#> ✓ Fold2: preprocessor 1/1, model 1/1
#> i Fold2: preprocessor 1/1, model 1/1 (extracts)
#> i Fold2: preprocessor 1/1, model 1/1 (predictions)
#> i Fold3: preprocessor 1/1
#> ✓ Fold3: preprocessor 1/1
#> i Fold3: preprocessor 1/1, model 1/1
#> ✓ Fold3: preprocessor 1/1, model 1/1
#> i Fold3: preprocessor 1/1, model 1/1 (extracts)
#> i Fold3: preprocessor 1/1, model 1/1 (predictions)
#> ✓ Estimating performance
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 6 candidates
#> i Predicted candidates
#> i Estimating performance
#> i Fold1: preprocessor 1/1
#> ✓ Fold1: preprocessor 1/1
#> i Fold1: preprocessor 1/1, model 1/1
#> ✓ Fold1: preprocessor 1/1, model 1/1
#> i Fold1: preprocessor 1/1, model 1/1 (extracts)
#> i Fold1: preprocessor 1/1, model 1/1 (predictions)
#> i Fold2: preprocessor 1/1
#> ✓ Fold2: preprocessor 1/1
#> i Fold2: preprocessor 1/1, model 1/1
#> ✓ Fold2: preprocessor 1/1, model 1/1
#> i Fold2: preprocessor 1/1, model 1/1 (extracts)
#> i Fold2: preprocessor 1/1, model 1/1 (predictions)
#> i Fold3: preprocessor 1/1
#> ✓ Fold3: preprocessor 1/1
#> i Fold3: preprocessor 1/1, model 1/1
#> ✓ Fold3: preprocessor 1/1, model 1/1
#> i Fold3: preprocessor 1/1, model 1/1 (extracts)
#> i Fold3: preprocessor 1/1, model 1/1 (predictions)
#> ✓ Estimating performance

res_both <- 
  tune_bayes(
    spec, mpg ~ ., folds, iter = 2, initial = init,
    control = control_bayes(verbose = TRUE, verbose_iter = TRUE)
  )
#> Optimizing rmse using the expected improvement
#> 
#> ── Iteration 1 ─────────────────────────────────────────────────────────────────
#> 
#> i Current best:      rmse=2.89 (@iter 0)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 7 candidates
#> i Predicted candidates
#> i neighbors=1
#> i Estimating performance
#> i Fold1: preprocessor 1/1
#> ✓ Fold1: preprocessor 1/1
#> i Fold1: preprocessor 1/1, model 1/1
#> ✓ Fold1: preprocessor 1/1, model 1/1
#> i Fold1: preprocessor 1/1, model 1/1 (extracts)
#> i Fold1: preprocessor 1/1, model 1/1 (predictions)
#> i Fold2: preprocessor 1/1
#> ✓ Fold2: preprocessor 1/1
#> i Fold2: preprocessor 1/1, model 1/1
#> ✓ Fold2: preprocessor 1/1, model 1/1
#> i Fold2: preprocessor 1/1, model 1/1 (extracts)
#> i Fold2: preprocessor 1/1, model 1/1 (predictions)
#> i Fold3: preprocessor 1/1
#> ✓ Fold3: preprocessor 1/1
#> i Fold3: preprocessor 1/1, model 1/1
#> ✓ Fold3: preprocessor 1/1, model 1/1
#> i Fold3: preprocessor 1/1, model 1/1 (extracts)
#> i Fold3: preprocessor 1/1, model 1/1 (predictions)
#> ✓ Estimating performance
#> ♥ Newest results:    rmse=2.85 (+/-0.289)
#> 
#> ── Iteration 2 ─────────────────────────────────────────────────────────────────
#> 
#> i Current best:      rmse=2.85 (@iter 1)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 6 candidates
#> i Predicted candidates
#> i neighbors=3
#> i Estimating performance
#> i Fold1: preprocessor 1/1
#> ✓ Fold1: preprocessor 1/1
#> i Fold1: preprocessor 1/1, model 1/1
#> ✓ Fold1: preprocessor 1/1, model 1/1
#> i Fold1: preprocessor 1/1, model 1/1 (extracts)
#> i Fold1: preprocessor 1/1, model 1/1 (predictions)
#> i Fold2: preprocessor 1/1
#> ✓ Fold2: preprocessor 1/1
#> i Fold2: preprocessor 1/1, model 1/1
#> ✓ Fold2: preprocessor 1/1, model 1/1
#> i Fold2: preprocessor 1/1, model 1/1 (extracts)
#> i Fold2: preprocessor 1/1, model 1/1 (predictions)
#> i Fold3: preprocessor 1/1
#> ✓ Fold3: preprocessor 1/1
#> i Fold3: preprocessor 1/1, model 1/1
#> ✓ Fold3: preprocessor 1/1, model 1/1
#> i Fold3: preprocessor 1/1, model 1/1 (extracts)
#> i Fold3: preprocessor 1/1, model 1/1 (predictions)
#> ✓ Estimating performance
#> ⓧ Newest results:    rmse=2.948 (+/-0.345)

finetune is unaffected! 👍

@EmilHvitfeldt (Member) left a comment:

Overall looks clean! One clarifying question

Review comment on R/logging.R (outdated, resolved)
@simonpcouch merged commit a23eb9a into main on May 9, 2023
@simonpcouch deleted the verbosity-677 branch on May 9, 2023
@github-actions (bot)

This pull request has been automatically locked. If you believe you have found a related problem, please file a new issue (with a reprex: https://reprex.tidyverse.org) and link to this issue.

github-actions bot locked and limited the conversation to collaborators on May 24, 2023
Successfully merging this pull request may close the following issue:

Feature Request: tune_bayes() verbose options (#677)