
Improve test coverage #5655

Merged
merged 10 commits into dask:main on Jan 28, 2022

Conversation

scharlottej13 (Contributor) commented Jan 12, 2022

GPUtester (Collaborator)

Can one of the admins verify this patch?

scharlottej13 (Contributor, author) commented Jan 20, 2022

Here's a comparison of all files with excluded lines, taken from my local HTML coverage report, since I didn't see an equivalent on codecov.io (but maybe there's a better way to do this!). I'm also happy to share anything else from my local coverage report that might be helpful.

| Module | excluded (main) | excluded (test-coverage) |
| --- | ---: | ---: |
| distributed/scheduler.py | 2 | 70 |
| distributed/nanny.py | 67 | 67 |
| distributed/worker.py | 0 | 28 |
| distributed/diagnostics/nvml.py | 0 | 22 |
| distributed/system_monitor.py | 0 | 15 |
| distributed/stealing.py | 0 | 7 |
| distributed/active_memory_manager.py | 5 | 5 |
| distributed/core.py | 0 | 4 |
| distributed/dashboard/scheduler.py | 0 | 2 |
| distributed/dashboard/components/nvml.py | 0 | 1 |
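For what it's worth, a comparison like this could be scripted instead of read off the HTML report: `coverage json` emits a report whose per-file entries include an `excluded_lines` list. The sketch below assumes you have run `coverage json -o <file>` once on each branch; the file names and workflow here are hypothetical, not what was actually used for the table above.

```python
"""Compare excluded-line counts between two coverage.py JSON reports.

Assumes each report was produced with `coverage json -o <path>` on its
respective branch; report paths here are hypothetical.
"""
import json


def excluded_counts(report_path):
    """Map each measured file to its number of excluded lines."""
    with open(report_path) as f:
        report = json.load(f)
    return {
        name: len(data.get("excluded_lines", []))
        for name, data in report["files"].items()
    }


def compare(main_report, branch_report):
    """Print files whose excluded-line count differs between the reports."""
    main = excluded_counts(main_report)
    branch = excluded_counts(branch_report)
    for name in sorted(set(main) | set(branch)):
        before, after = main.get(name, 0), branch.get(name, 0)
        if before != after:
            print(f"{name}: {before} -> {after}")
```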

@scharlottej13 scharlottej13 marked this pull request as ready for review January 20, 2022 17:13
@scharlottej13 scharlottej13 changed the title [WIP] Improve test coverage Improve test coverage Jan 20, 2022
fjetter (Member) commented Jan 25, 2022

I'm wondering if we should adjust our target to 90% (what we have right now) or auto (such that it has to increase). Similarly, I'm wondering whether we should enable patch status reports to give better feedback about a PR's code changes, instead of reporting only global coverage.

cc @crusaderky @jrbourbeau

I personally would also be interested in trying out github checks (config) but we can defer this to a later iteration
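For reference, a sketch of how these ideas could be expressed in `codecov.yml`, going by Codecov's documented keys; the values here are illustrative, not a decision:

```yaml
coverage:
  status:
    project:
      default:
        target: auto        # overall coverage must not drop vs. the base commit
        threshold: 0.1%     # small tolerance for measurement noise
    patch:
      default:
        target: auto        # report coverage of the PR's diff itself
github_checks:
  annotations: true         # line-by-line annotations via GitHub checks
```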

bryanwweber (Contributor)

> I'm wondering if we should adjust our target to 90% (what we have right now) or auto (such that it has to increase).

@fjetter The only disadvantage of auto that I've found in other projects is that some PRs deliberately do not address coverage for one reason or another (including that Codecov has mis-measured coverage). So we might need to be prepared to merge PRs with this check "failing". But that might be a "deal with it later" problem.

> Similarly, I'm wondering whether we should enable patch status reports to give better feedback about a PR's code changes, instead of reporting only global coverage.

In other projects I've found these status reports to be very useful, if occasionally noisy when Codecov doesn't update coverage correctly. Although that might be because the project I'm thinking of is a combined C++/Cython project.

> I personally would also be interested in trying out github checks (config) but we can defer this to a later iteration

Again from experience, I personally find these line-by-line comments to be somewhat annoying because they tend to get in the way of review comments, but that might be related to the nature of the other projects and the (lack of) coverage they have.

@@ -11,9 +11,21 @@ omit =
distributed/utils_test.py
distributed/deploy/ssh.py
distributed/_ipython_utils.py
distributed/_version.py
Collaborator


I can spot a bunch of files that don't exist anymore above this line - could you clean them up?

Contributor Author


absolutely! thanks for spotting this.

Collaborator


I think that omitting distributed/compatibility.py is unnecessary (there are no conditional code paths) and dangerous: if in the future the file contains code for obsolete/exotic configurations, I'd rather have it either tested through specific GitHub environments or explicitly and deliberately excluded with `# pragma: no cover`.

I think that distributed/utils_test.py should not be excluded either. Most of its functions are delicate enough to warrant their own unit tests (which they should already have).
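To illustrate the kind of exclusion being discussed, a minimal sketch of coverage.py's pragma on a platform-specific branch; the function and its behavior are hypothetical, not code from distributed:

```python
import sys


def console_backend() -> str:
    """Pick a console implementation (hypothetical helper for illustration)."""
    if sys.platform == "win32":  # pragma: no cover - exercised only on Windows CI
        return "wincon"
    return "readline"
```

The comment marks that single branch as deliberately untested on the current platform, which is easier to audit than omitting the whole file from measurement.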

crusaderky (Collaborator)

I don't think this PR closes the issue; specifically, it's missing:

  • We have cases with relatively long switch-like statements ending in a final exception, e.g. ValueError "unknown value"
  • Conditional imports may need to be covered with pragma statements and/or a dedicated job
  • All very obvious cases should have a pragma statement and/or exclude configuration option
  • Developer documentation is updated with a guideline

It's OK if the above points happen in a later PR, but the reference issue should remain open until then.
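For the "very obvious cases" above, coverage.py lets you exclude matching lines project-wide via regexes rather than per-line pragmas. A sketch of what such a section could look like in `.coveragerc` (patterns illustrative; note that setting `exclude_lines` replaces the default list, so the standard pragma must be re-added):

```ini
[report]
exclude_lines =
    # re-add the default pragma, since this option overrides the defaults
    pragma: no cover
    # defensive final branches of switch-like statements
    raise ValueError\("unknown
    raise NotImplementedError
    # typing-only / conditional-import code
    if TYPE_CHECKING:
```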

crusaderky (Collaborator)

Screenshot from the codecov report below, which makes me worried:
[screenshot: codecov report, 2022-01-25]

Clearly the PR has no actual impact on it. Why is this happening? Is it because a flaky test is failing before it reaches the line? Or is it highlighting a line that is randomly sometimes hit and sometimes not? In either case, if people need to start reading the coverage reports, this kind of noise is going to become a problem as bad as the flaky tests.

crusaderky (Collaborator)

> I'm wondering if we should adjust our target to 90% (what we have right now) or auto (such that it has to increase). Similarly, I'm wondering whether we should enable patch status reports to give better feedback about a PR's code changes, instead of reporting only global coverage.

Not sure about patch status reports, but inspecting the current codecov report for this PR took me an uncomfortable amount of clicking. It would be nice to have a single-page view of everything I missed. Or not - see my post above about what is clearly a high degree of noise.

scharlottej13 (Contributor, author)

> Not sure about patch status reports, but inspecting the current codecov report for this PR took me an uncomfortable amount of clicking. It would be nice to have a single-page view of everything I missed. Or not - see my post above about what is clearly a high degree of noise.

If I'm understanding you correctly, this is something I was wondering about as well (and why the PR sat as a draft for a bit). The way I checked whether a regex I added was working was by clicking through the individual HTML reports (still a lot of clicking). It looks like there's also diff-cover, but it didn't quite work as I expected when I tried it. I can look into this more, though, to see if I can get it to work.

@crusaderky crusaderky merged commit 7b9f1e9 into dask:main Jan 28, 2022
crusaderky (Collaborator)

LGTM!
As I feared, utils_test.py needs some attention - something to be discussed at the next standup.

@scharlottej13 scharlottej13 deleted the test-coverage branch January 28, 2022 15:18
gjoseph92 pushed a commit to gjoseph92/distributed that referenced this pull request Feb 1, 2022