[release blocker][Autoscaler] Randomly delete Pods when scaling down the cluster #1251

Merged: kevin85421 merged 6 commits into ray-project:master from kevin85421:autoscaler-scale-down on Jul 19, 2023
Conversation
kevin85421 changed the title from "[Bug][Autoscaler] Randomly delete Pods when scaling down the cluster" to "[release blocker][Autoscaler] Randomly delete Pods when scaling down the cluster" on Jul 19, 2023
gvspraveen reviewed on Jul 19, 2023
gvspraveen reviewed on Jul 19, 2023
gvspraveen approved these changes on Jul 19, 2023
kevin85421 added a commit to kevin85421/kuberay that referenced this pull request on Jul 20, 2023:
…the cluster (ray-project#1251) Randomly delete Pods when scaling down the cluster

kevin85421 added a commit to kevin85421/kuberay that referenced this pull request on Jul 20, 2023:
…the cluster (ray-project#1251) Randomly delete Pods when scaling down the cluster
kevin85421 added a commit that referenced this pull request on Jul 20, 2023
lowang-bh pushed a commit to lowang-bh/kuberay that referenced this pull request on Sep 24, 2023:
…the cluster (ray-project#1251) Randomly delete Pods when scaling down the cluster
Why are these changes needed?

Remove the feature gate `PrioritizeWorkersToDelete`: the default value of this feature flag has been set to true for a year ([autoscaler] Flip prioritize-workers-to-delete feature flag #379).

There are two root causes for this issue:

1. If Pods in `runningPods` are also present in `workersToDelete`, they will be deleted but, without this pull request, will not be removed from `runningPods`. Hence, KubeRay will randomly delete the same number of Pods as those present in both `runningPods` and `workersToDelete`. Refer to [Bug] pod randomly deleted by operator in scale-in #1192 (comment) for more details.

2. `isPodRunningOrPendingAndNotDeleting(pod)` will always be false because `pod` here does not set `pod.Status.Phase`.

   kuberay/ray-operator/controllers/ray/raycluster_controller.go, lines 562 to 580 in e9a2698
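
To make the first root cause concrete, below is a minimal, self-contained Go sketch of the scale-down flow described above. It is not the actual `reconcilePods` implementation; the `scaleDown` function, its parameters, and the `Pod` type are simplified placeholders for illustration only.

```go
package main

import "fmt"

type Pod struct{ Name string }

// scaleDown deletes the Pods named in workersToDelete and then removes
// additional Pods until the running set matches desiredReplicas.
// Hypothetical, simplified stand-in for the controller's scale-down logic.
func scaleDown(runningPods []Pod, workersToDelete map[string]bool, desiredReplicas int) []string {
	var deleted []string
	var remaining []Pod
	for _, p := range runningPods {
		if workersToDelete[p.Name] {
			// Deletion explicitly requested by the autoscaler.
			deleted = append(deleted, p.Name)
			continue
		}
		// The behavior this PR fixes: Pods that were just deleted must also be
		// dropped from the running set. If they are kept, the loop below still
		// counts them and deletes the same number of extra Pods.
		remaining = append(remaining, p)
	}
	for len(remaining) > desiredReplicas {
		// "Random" deletion to reach the desired replica count.
		deleted = append(deleted, remaining[0].Name)
		remaining = remaining[1:]
	}
	return deleted
}

func main() {
	pods := []Pod{{"pod1"}, {"pod2"}, {"pod3"}, {"pod4"}}
	// With the filtering in place, only pod3 and pod4 are deleted.
	fmt.Println(scaleDown(pods, map[string]bool{"pod3": true, "pod4": true}, 2))
}
```

The second root cause is independent of this sketch: a Pod object built in the controller without `pod.Status.Phase` set has the zero value (an empty string) for its phase, so any check that requires the phase to be Running or Pending returns false.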
Why couldn't TestReconcile_RemoveWorkersToDelete_OK detect this double deletion issue earlier?

`workersToDelete` in the test contains pod1 and pod2. When the test calls `reconcilePods`, these two Pods are deleted by this line. They are still in `runningPods`, so KubeRay then randomly deletes the first two Pods in the PodList in this line. For the fake Kubernetes client, the first two Pods in the PodList are always pod1 and pod2. Hence, the test does not fail.

In (DO NOT MERGE) #1251 Control Group #1252, I changed `workersToDelete` to pod3 and pod4. Both pod3 and pod4 are deleted, and pod1 and pod2 are also deleted by the random Pod deletion because they are the first two Pods in the PodList. (GitHub Actions) The sketch below illustrates this determinism.
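
The following is a hypothetical, self-contained sketch of that determinism; it is not the actual `TestReconcile_RemoveWorkersToDelete_OK` or fake-client code. `buggyScaleDown` is a made-up name that mimics the pre-fix behavior: explicitly deleted Pods are never removed from the running set, and the extra deletions always start from the front of the PodList.

```go
package main

import "fmt"

// buggyScaleDown mimics the pre-fix behavior described above: Pods named in
// workersToDelete are deleted, but they are never removed from the running
// set, so the controller deletes len(podList)-desiredReplicas more Pods,
// always starting from the front of the PodList.
func buggyScaleDown(podList []string, workersToDelete map[string]bool, desiredReplicas int) map[string]bool {
	deleted := map[string]bool{}
	for _, p := range podList {
		if workersToDelete[p] {
			deleted[p] = true // explicit deletion
		}
	}
	for i := 0; i < len(podList)-desiredReplicas; i++ {
		deleted[podList[i]] = true // "random" deletion, deterministic here
	}
	return deleted
}

func main() {
	pods := []string{"pod1", "pod2", "pod3", "pod4"}

	// Original test: workersToDelete = {pod1, pod2}. The extra deletions hit
	// the same two Pods, so exactly pod1 and pod2 end up deleted and the test
	// cannot observe the bug.
	fmt.Println(buggyScaleDown(pods, map[string]bool{"pod1": true, "pod2": true}, 2))

	// Control group (#1252): workersToDelete = {pod3, pod4}. Now all four Pods
	// are deleted and the over-deletion becomes visible.
	fmt.Println(buggyScaleDown(pods, map[string]bool{"pod3": true, "pod4": true}, 2))
}
```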
Related issue number

Closes #1192
Checks
Reproduce