
[Review] Improve warning message when QN solver reaches max_iter #3515

Merged

Conversation

@tfeher (Contributor) commented Feb 18, 2021

closes #2546

This PR improves the warning message that is printed when the maximum number of iterations is reached while fitting a linear model.

Example:

import numpy as np
from cuml.linear_model import LogisticRegression
from sklearn.datasets import load_breast_cancer

# Unscaled features and no regularization make the QN (L-BFGS) solver
# hit max_iter before converging, which triggers the warning below.
X, y = load_breast_cancer(return_X_y=True)
y = y.astype(np.float64)
cls = LogisticRegression(penalty='none', C=1)
cls.fit(X, y)

This produces the following output, where the last line is added by this PR:

[W] [15:31:04.467478] L-BFGS: max iterations reached
[W] [15:31:04.467804] Maximum iterations reached before solver is converged. To increase model accuracy you can increase the number of iterations (max_iter) or improve the scaling of the input data.
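For readers who hit this warning, both remedies that the new message suggests are easy to apply. A minimal sketch continuing the example above (the max_iter value of 10000 and the use of sklearn's StandardScaler are illustrative choices, not part of this PR):

import numpy as np
from cuml.linear_model import LogisticRegression
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
y = y.astype(np.float64)

# Remedy 1: give the solver a larger iteration budget than the default.
cls = LogisticRegression(penalty='none', max_iter=10000)
cls.fit(X, y)

# Remedy 2: standardize the features instead; on scaled data the
# solver typically converges well before max_iter.
X_scaled = StandardScaler().fit_transform(X)
cls = LogisticRegression(penalty='none')
cls.fit(X_scaled, y)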

@tfeher added the bug (Something isn't working) and non-breaking (Non-breaking change) labels on Feb 18, 2021
@tfeher requested a review from a team as a code owner on Feb 18, 2021, 16:27
@tfeher added the improvement (Improvement / enhancement to an existing function) label and removed the bug (Something isn't working) label on Feb 18, 2021
@dantegd (Member) left a comment

lgtm

@dantegd (Member) commented Feb 18, 2021

rerun tests

@codecov-io

Codecov Report

Merging #3515 (ab6b917) into branch-0.19 (39c7262) will increase coverage by 7.58%.
The diff coverage is 71.72%.

Impacted file tree graph

@@               Coverage Diff               @@
##           branch-0.19    #3515      +/-   ##
===============================================
+ Coverage        71.77%   79.36%   +7.58%     
===============================================
  Files              212      225      +13     
  Lines            17075    17946     +871     
===============================================
+ Hits             12256    14243    +1987     
+ Misses            4819     3703    -1116     
Flag       Coverage Δ
dask       44.12% <14.67%> (?)
non-dask   71.65% <69.97%> (?)

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
...on/cuml/_thirdparty/sklearn/preprocessing/_data.py 63.61% <ø> (+0.07%) ⬆️
python/cuml/experimental/explainer/common.py 88.05% <42.85%> (-4.01%) ⬇️
...l/_thirdparty/sklearn/preprocessing/_imputation.py 62.40% <50.00%> (-0.11%) ⬇️
python/cuml/neighbors/nearest_neighbors.pyx 92.43% <50.00%> (-0.29%) ⬇️
python/cuml/neighbors/ann.pyx 61.62% <61.62%> (ø)
python/cuml/common/import_utils.py 59.43% <66.66%> (+3.43%) ⬆️
python/cuml/experimental/explainer/base.pyx 67.06% <67.06%> (ø)
python/cuml/dask/common/utils.py 43.68% <83.33%> (+16.13%) ⬆️
python/cuml/experimental/explainer/kernel_shap.pyx 97.75% <100.00%> (+0.48%) ⬆️
...n/cuml/experimental/explainer/permutation_shap.pyx 98.82% <100.00%> (+0.84%) ⬆️
... and 69 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 9845e26...ab6b917.

@dantegd (Member) commented Feb 19, 2021

@gpucibot merge

@rapids-bot rapids-bot bot merged commit d393e1e into rapidsai:branch-0.19 Feb 19, 2021
Labels
improvement: Improvement / enhancement to an existing function
non-breaking: Non-breaking change
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[BUG] Logistic regression does not return fit status
3 participants