As a dask maintainer, I want to trust the code coverage report.
Our coverage badge is a bit misleading: it shows coverage below 90%. This is partly because we do not collect coverage in a few places, and partly because a few modules exist only for debugging and/or historical reasons.
The most relevant parts (scheduler, worker, etc.) have quite good coverage. I believe the <90% badge doesn't reflect well on the project, and the wrong configuration creates a lot of noise, making it harder to see the relevant gaps (of which there are quite a few).
Things I spotted immediately which we should exclude/ignore (non-exhaustive):
- Exclude nvml calls
- Exclude UCX
- Exclude LOG_PDB branches
- Ignore pytest_resourceleaks
- Ignore _version.py (vendored)
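As a sketch, a `.coveragerc` along these lines could implement the excludes above. The module paths are assumptions and would need to be checked against the actual repository layout:

```ini
# .coveragerc -- sketch only; exact paths need verification
[run]
source = distributed
omit =
    # vendored versioneer output
    distributed/_version.py
    # debugging-only pytest plugin
    distributed/pytest_resourceleaks.py
    # GPU/UCX code paths not exercised in regular CI
    distributed/comm/ucx.py
    distributed/diagnostics/nvml.py
```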
Pragma no cover
We have cases with relatively long switch-like statements ending in a final exception, e.g. `ValueError("unknown value ...")`.
Conditional imports may need to be covered with pragma statements and/or a dedicated CI job.
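A minimal sketch of both patterns; the function, module, and variable names here are hypothetical examples, not actual distributed code:

```python
# Sketch: marking an unreachable-by-design final branch with "pragma: no cover".
def compression_mode(name):
    """Map a compression name to an internal code (hypothetical example)."""
    if name == "lz4":
        return 1
    elif name == "zstd":
        return 2
    else:  # pragma: no cover
        # Defensive final branch of a switch-like statement; exercising it
        # in tests adds little value, so it is excluded from coverage.
        raise ValueError(f"unknown value {name!r}")


# Sketch: a conditional import whose fallback branch only runs when the
# optional dependency is missing; the fallback is excluded from coverage.
try:
    import lz4  # noqa: F401
    HAS_LZ4 = True
except ImportError:  # pragma: no cover
    HAS_LZ4 = False
```

Alternatively, such fallback branches can be covered by a dedicated CI job that runs the test suite in an environment without the optional dependency installed.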
There are other modules about which we are not entirely sure. If it is not straightforward, this can be discussed in the PR review, e.g.:
- distributed/deploy/old_ssh.py
- distributed/cli/dask_ssh.py
- distributed/nanny.py
Long term goal:
We should have 100% test coverage.
There are weird edge cases where an extremely complicated test would be required to cover an unimportant code path. For these cases we want to use deliberate `pragma: no cover` instructions.
AC (acceptance criteria):
- All very obvious cases should have a pragma statement and/or an exclude configuration option
- Debatable cases should be left uncovered and handled in a follow-up ticket
- Developer documentation is updated with a guideline
Relevant documentation: https://coverage.readthedocs.io/en/coverage-4.3.3/excluding.html. I would hope we can do this without code modification, if possible; a proper `.coveragerc` should be sufficient.
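For the no-code-modification route, a sketch of regex-based exclusions in `.coveragerc`; the patterns below are assumptions to be tuned. Note that setting `exclude_lines` replaces coverage.py's default, so `pragma: no cover` has to be re-added explicitly:

```ini
[report]
exclude_lines =
    # keep the default pragma behaviour
    pragma: no cover
    # drop debugger hooks and defensive branches by pattern,
    # without touching the source
    if LOG_PDB
    raise NotImplementedError
```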