[core] Fix potential deadlock in ray.cancel #29763
Conversation
Do we have other
I know it's hard but is there a way to create a test for it?
@@ -159,6 +159,11 @@ OPTIMIZED = __OPTIMIZE__

logger = logging.getLogger(__name__)

# The currently executing task, if any. These are used to synchronize task
Also, any thoughts about how we can improve the thread model so that it's obvious the code has no deadlock?
Probably the best would be if we could figure out a way to enforce that any Python callbacks don't hold any C++ locks. But I'm not sure how to do that with thread annotations.
I skimmed the current callbacks and nothing stood out, but hard to be certain :/
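For concreteness, the hazard under discussion is a lock-ordering inversion between the GIL and the CoreWorker lock. A minimal sketch using two ordinary Python locks as stand-ins (illustrative only, not Ray's actual code):

```python
import threading

gil = threading.Lock()               # stands in for the GIL
core_worker_lock = threading.Lock()  # stands in for the global CoreWorker lock

def main_thread():
    # Main thread: holds the "GIL" while running task code, then calls a
    # CoreWorker method without releasing it first.
    with gil:
        with core_worker_lock:   # blocks if the cancel thread already holds it
            pass

def cancel_thread():
    # Background cancel thread: holds the CoreWorker lock while invoking a
    # Python kill callback, which needs the "GIL".
    with core_worker_lock:
        with gil:                # blocks if the main thread already holds it
            pass

# If both threads reach their inner acquire at the same time, each holds the
# lock the other is waiting for, and neither can proceed.
```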
Not sure, unfortunately... test_cancel has historically been a bit flaky and this could be why.
@stephanie-wang I think I've been hitting a similar issue in this PR, where task cancellation seems to be causing this task transport check to fail:
See the Ray Core
It seems like the fact that this PR perturbs the worker startup time is making this flaky test more flaky... 😬
Why are these changes needed?
ray.cancel is executed by a background worker thread that invokes a Python callback to kill the current task. To avoid killing the wrong task, the worker holds the global CoreWorker lock during this process. However, this can cause deadlock because the worker also needs to acquire the GIL to call the kill callback. The GIL is usually held by the main thread, which is executing the normal task code. If the main thread does not release the GIL before calling CoreWorker methods, the two threads can deadlock.
To avoid the deadlock, we can either always release the GIL before calling any CoreWorker method, or we can change task cancellation to not hold the CoreWorker lock. I chose the latter for the fix; while it is generally good to release the GIL before calling CoreWorker methods, it's hard to guarantee that we always do, so modifying task cancellation seems safer.
This PR changes task cancellation to guard against the race condition in the Python code instead of relying on the CoreWorker lock. It does this by setting the current task ID during task execution, then holding a lock during task cancellation. This is guaranteed not to deadlock because we always acquire the GIL before the current task ID lock.
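To make the intended locking discipline concrete, here is a minimal sketch of the pattern; names like execute_task, kill_current_task, and the interrupt mechanism are illustrative assumptions, not the actual _raylet.pyx code:

```python
import threading
import _thread

# Hypothetical module-level state recording the currently executing task.
current_task_id = None
current_task_id_lock = threading.Lock()

def execute_task(task_id, run):
    """Runs on the main thread. The GIL is already held here, so the
    task-ID lock is always the second lock acquired."""
    global current_task_id
    with current_task_id_lock:
        current_task_id = task_id
    try:
        run()
    finally:
        with current_task_id_lock:
            current_task_id = None

def kill_current_task(task_id):
    """Cancel callback invoked from the background thread. By the time any
    Python code runs here, the GIL is held, so the lock order (GIL first,
    task-ID lock second) matches the executor's and cannot invert."""
    with current_task_id_lock:
        if current_task_id == task_id:
            # Stand-in for however the running task is actually interrupted.
            _thread.interrupt_main()
```

Because both sides take the GIL before the current task ID lock, there is a single global lock order and no deadlock is possible.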
Related issue number
Closes #29739.
Checks
- I've signed off every commit (git commit -s) in this PR.
- I've run scripts/format.sh to lint the changes in this PR.