Optimization: use cached status everywhere. #410
Conversation
Codecov Report
@@ Coverage Diff @@
## next #410 +/- ##
==========================================
+ Coverage 68.51% 68.58% +0.06%
==========================================
Files 30 30
Lines 3097 3097
Branches 565 565
==========================================
+ Hits 2122 2124 +2
+ Misses 827 825 -2
Partials 148 148
Continue to review full report at Codecov.
(job,), project._get_default_directives(), names=["group2"]
job_ops = list(
    project._get_submission_operations(
        aggregates=[(job,)],
This change fixes a bug in the tests. When `_get_submission_operations` is called, it requires a sequence of aggregates, not a single aggregate. We validate this in the public method that calls `_get_submission_operations`, but this test calls the internal private method directly. As a result, the test was being fed invalid data and was passing for the wrong reason. Somewhere (perhaps in `_is_selected_aggregate`), a single job was being compared to an aggregate containing only that job, and the test passed because no job operations were generated. I only found this problem because of a mistake in the `Job.__eq__` method that was introduced in PR 442 and is being fixed in PR 455. I have expanded the test to ensure that a nonzero number of (submission/run) job operations is created, and fixed the problem.
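The test fix described above can be sketched roughly as follows. The stand-in function and job value below are hypothetical placeholders mirroring the discussion, not the actual signatures in `flow/project.py`:

```python
# Hypothetical sketch of the test bug: the private method expects a
# sequence of aggregates (each aggregate itself a tuple of jobs), not a
# single aggregate. Names are illustrative only.

def _get_submission_operations(aggregates, default_directives, names=None):
    """Stand-in for the private method: expects an iterable of aggregates."""
    for aggregate in aggregates:
        # Each element must itself be a tuple of jobs.
        assert isinstance(aggregate, tuple)
        yield ("op", aggregate)

job = "job-0000"  # placeholder for a signac job

# Buggy call from the old test: a single aggregate passed where a
# sequence of aggregates is required.
# list(_get_submission_operations((job,), {}, names=["group2"]))  # wrong

# Corrected call: wrap the single aggregate in a list.
job_ops = list(_get_submission_operations([(job,)], {}, names=["group2"]))
assert len(job_ops) > 0  # the expanded test checks operations are created
```

Because the public caller validates its input, only a test that reaches into the private method could hit this mismatch.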
I only have one comment to address. Otherwise the PR looks good, and it will be nice to reduce file I/O some more.
flow/project.py (Outdated)
@@ -2042,6 +2033,33 @@ def scheduler_jobs(self, scheduler):
        """
        yield from self._expand_bundled_jobs(scheduler.jobs())

    def _get_cached_status(self, cached_status=None):
The only place where the `cached_status` argument is currently passed is in `get_job_status`. I would be inclined to just have an `if cached_status is None` check in that function, rather than optionally passing a `cached_status` argument solely for `get_job_status`.
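The suggested simplification might look roughly like the sketch below. The class and method bodies are assumptions that mirror the thread; the actual internals of `FlowProject` differ:

```python
# Hedged sketch of the suggested refactor: instead of threading an
# optional cached_status argument through _get_cached_status, the lone
# caller (get_job_status) performs the None check itself.

class ProjectSketch:
    """Illustrative stand-in for a FlowProject-like object."""

    def __init__(self):
        # Stand-in for the project document backing the status cache.
        self._document = {"_status": {"abc123": 1}}

    def _get_cached_status(self):
        # After the refactor: no pass-through argument; just read the
        # status cache from the project document.
        return dict(self._document.get("_status", {}))

    def get_job_status(self, job_id, cached_status=None):
        # The None check lives here, in the only caller that needs it.
        if cached_status is None:
            cached_status = self._get_cached_status()
        return cached_status.get(job_id, 0)

p = ProjectSketch()
assert p.get_job_status("abc123") == 1   # falls back to the cache
assert p.get_job_status("missing") == 0  # unknown jobs default to 0
```

This keeps `_get_cached_status` a plain accessor and confines the optional-argument handling to the one method that exposes it.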
That's a good idea. I was hoping to do that, but I didn't realize how easy it would be to simplify. 😄
@b-butler Thanks for the review. I simplified the method as you suggested. Feel free to merge this once tests pass, if you're happy with it.
LGTM
Description
This PR ensures that all status-fetching operations use a cached status dictionary.
The cached status information was already being used when printing status info, but not for submission.
I refactored the existing code to reduce duplication. I changed the APIs of several internal methods to require a `cached_status` argument instead of allowing None.
Motivation and Context
The benchmarks I was running locally kept fetching `project.document["_status"]` from disk and converting it from a synced dict to a plain dict. This loading and recursive conversion is slow for large workspaces and only needs to happen once, at the beginning.
Also resolves #368.
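The effect of the caching can be sketched with a toy model. The `SyncedDoc` class below is an assumed stand-in for signac's file-backed synced dict (real reads hit the disk); only the access pattern is the point:

```python
# Illustrative sketch of the optimization: convert the synced document's
# "_status" entry to a plain dict once, then pass that plain dict to
# every status consumer, instead of re-reading it per lookup.

class SyncedDoc(dict):
    """Stand-in for a file-backed synced dict; real reads touch the disk."""
    reads = 0

    def __getitem__(self, key):
        SyncedDoc.reads += 1  # count simulated disk reads
        return super().__getitem__(key)

doc = SyncedDoc(_status={"job-1": 2, "job-2": 1})

# Before: every status lookup goes through the synced document.
for job_id in ("job-1", "job-2", "job-1"):
    _ = doc["_status"][job_id]
assert SyncedDoc.reads == 3  # one simulated disk read per lookup

# After: fetch and convert once, then use the plain dict everywhere.
SyncedDoc.reads = 0
cached_status = dict(doc["_status"])
for job_id in ("job-1", "job-2", "job-1"):
    _ = cached_status[job_id]
assert SyncedDoc.reads == 1  # a single read, regardless of lookup count
```

For large workspaces the per-lookup cost also includes recursively converting nested synced containers, which is why a single up-front conversion pays off.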
Types of Changes
The change breaks (or has the potential to break) existing functionality.