fix(scheduler): prevent duplicate jobs being queued #11826
Merged
Fixes #11712
Fixes #11807
I still need to write some tests for this, but the existing tests pass.
A job should only appear in the active part of the scheduler queue once. Currently it can be added multiple times, with the job being added again every time a reactive dependency changes.
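A minimal sketch of the intended dedup behaviour (the names `Job`, `queueJob`, and the flag values here are illustrative, not the actual code in this PR):

```typescript
// Hypothetical flag bits, mirroring the bitfield approach discussed below.
const QUEUED = 1 << 0
const ALLOW_RECURSE = 1 << 1

interface Job {
  (): void
  flags: number
}

const queue: Job[] = []

// Only enqueue a job if its QUEUED flag is not already set, so repeated
// reactive dependency changes cannot add the same job to the queue twice.
function queueJob(job: Job): void {
  if (!(job.flags & QUEUED)) {
    job.flags |= QUEUED
    queue.push(job)
  }
}
```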
The `QUEUED` flag should be set for all jobs that are currently in the queue, not just those with a particular value for `ALLOW_RECURSE`. The `ALLOW_RECURSE` flag is only relevant if a job is re-queued while that same job is currently running. Here I've implemented that by clearing the `QUEUED` flag immediately prior to running the job if `ALLOW_RECURSE` is set. Otherwise the flag is cleared after the job runs.

It isn't clear to me how errors should be handled. For the main queue, I assume the `QUEUED` flags all need to be unset in the `finally` block, as an error may have been thrown during flushing that prevented some of the flags from being unset. For the post queue there doesn't seem to be equivalent error handling, and I'm not sure if that should be added.
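Put together, the flushing behaviour described above might look roughly like this. This is a self-contained sketch under the assumptions stated (hypothetical `Job` shape and flag constants), not the actual `scheduler.ts` implementation:

```typescript
const QUEUED = 1 << 0
const ALLOW_RECURSE = 1 << 1

interface Job {
  (): void
  flags: number
}

let queue: Job[] = []

// Dedup on enqueue: a job may only sit in the queue once.
function queueJob(job: Job): void {
  if (!(job.flags & QUEUED)) {
    job.flags |= QUEUED
    queue.push(job)
  }
}

function flushJobs(): void {
  try {
    for (let i = 0; i < queue.length; i++) {
      const job = queue[i]
      if (job.flags & ALLOW_RECURSE) {
        // Clear QUEUED *before* running, so the job is allowed to
        // re-queue itself while it is executing.
        job.flags &= ~QUEUED
        job()
      } else {
        // Non-recursive jobs stay flagged until after they run.
        job()
        job.flags &= ~QUEUED
      }
    }
  } finally {
    // An error thrown mid-flush may have left some QUEUED flags set;
    // unset them all so future queueJob calls are not blocked.
    for (const job of queue) {
      job.flags &= ~QUEUED
    }
    queue = []
  }
}
```

With this shape, a plain job queued twice runs once per flush, while a job with `ALLOW_RECURSE` that re-queues itself during its own run executes again in the same flush.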