
CodeCov would be reduced by current solution for issue #153 (#356)

Closed
alex-senger opened this issue Jun 9, 2023 · 2 comments
Labels
testing 🧪 Additional automated tests

Comments

@alex-senger
Contributor

alex-senger commented Jun 9, 2023

In issue #153, a few lines of code were commented out to keep CodeCov at 100%. These lines contain a try-catch block catching a possible LearningError and PredictionError for errors that we already cover. It may be that we didn't cover every error, and those would still be thrown, but we cannot think of such a case and consequently can't write tests for these lines.

Either we remove these lines completely, or we accept that CodeCov will no longer be at 100%.

Link to the PR: #355
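To make the coverage problem concrete, here is a minimal sketch of the kind of defensive wrapping being discussed (the names `fit_wrapped` and the bare `LearningError` class are hypothetical, not the actual Safe-DS code): the catch-all branch for an unexpected error type is unreachable if every known failure mode is already mapped, which is why no test can cover it.

```python
# Hypothetical sketch (assumed names, not the Safe-DS implementation).
class LearningError(Exception):
    """Library-level exception wrapping any failure during training."""


def fit_wrapped(model, training_set):
    try:
        model.fit(training_set)
    except ValueError as err:
        # Known failure mode: wrap it in the library's own exception type.
        raise LearningError(str(err)) from err
    except Exception as err:  # pragma: no cover  -- believed unreachable
        # Defensive catch-all: no known model raises anything else here,
        # so this branch cannot be exercised by a test.
        raise LearningError(str(err)) from err
    return model
```

A `# pragma: no cover` marker (as sketched above) is one conventional way to exclude such a branch from coverage measurement instead of deleting it.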

@github-project-automation bot moved this to Backlog in Library on Jun 9, 2023
@zzril added the testing 🧪 Additional automated tests label on Jun 11, 2023
@zzril changed the title from "CodeCov is being reduced because of issue #153" to "CodeCov would be reduced by current solution for issue #153" on Jun 11, 2023
@zzril
Contributor

zzril commented Jun 11, 2023

We could implement dummy models that always raise the required exception.
E.g. a ValueErrorOnFitClassifier that always raises a ValueError when fit (or predict) is called on it, then a ValueErrorOnPredictClassifier that only raises one when predict is called, etc.

Then use one of those dummy models whenever we can't cover some line using the normal ones.
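The idea above could be sketched roughly like this (a minimal illustration with assumed class names taken from the comment, not the eventual Safe-DS test helpers):

```python
# Hypothetical dummy models whose only job is to raise a specific
# exception, so tests can reach error-handling branches that real
# models never trigger.

class ValueErrorOnFitClassifier:
    """Raises a ValueError as soon as fit is called."""

    def fit(self, training_set):
        raise ValueError("dummy failure during fit")

    def predict(self, dataset):
        # fit never succeeds, so predict fails the same way.
        raise ValueError("dummy failure during fit")


class ValueErrorOnPredictClassifier:
    """Pretends to train successfully, but raises on predict."""

    def fit(self, training_set):
        return self  # simulate successful training

    def predict(self, dataset):
        raise ValueError("dummy failure during predict")
```

A test could then pass one of these dummies through the normal training/prediction code path and assert that the raised ValueError is wrapped into the expected LearningError or PredictionError.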

@Marsmaennchen221 closed this as not planned on Jun 16, 2023
@github-project-automation bot moved this from Backlog to ✔️ Done in Library on Jun 16, 2023
@Marsmaennchen221
Contributor

> We could implement dummy models that just always raise the required exception. E.g. a ValueErrorOnFitClassifier that will always raise a ValueError when calling fit (or predict) on it, then a ValueErrorOnPredictClassifier that only raises one when calling predict etc.
>
> Then use one of those dummy models whenever we can't cover some line using the normal ones.

This solution will be implemented in #355, so this issue is irrelevant.
