threads: avoid deadlock from recursive lock acquire #38487
Conversation
Force-pushed from 7b9c0c0 to 8864d52.
@@ -61,10 +61,12 @@ Base.assert_havelock(l::SpinLock) = islocked(l) ? nothing : Base.concurrency_violation()
function lock(l::SpinLock)
    while true
        if _get(l) == 0
            GC.enable_finalizers(false)
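For context, a hedged sketch of roughly what the patched method looks like around this hunk; the `_xchg!` helper name and the exact spin/safepoint calls are assumptions about the internal implementation in base/locks-mt.jl, not a verbatim copy of the merged code:

```julia
# Sketch only: `_xchg!` is assumed to be the internal atomic test-and-set helper.
function lock(l::SpinLock)
    while true
        if _get(l) == 0
            # Disable finalizers *before* the acquisition attempt, so a finalizer
            # can never run while this thread holds (or is in the middle of taking) l.
            GC.enable_finalizers(false)
            p = _xchg!(l, 1)
            p == 0 && return           # acquired the lock; finalizers stay disabled
            GC.enable_finalizers(true) # lost the race; allow finalizers again and spin
        end
        ccall(:jl_cpu_pause, Cvoid, ())
        ccall(:jl_gc_safepoint, Cvoid, ())
    end
end
```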
Why can't this be done just before returning?
The entire point of this bug fix is that it must be done before lock acquisition
For comparison, jl_mutex_lock only disables finalizers after jl_mutex_wait returns. I thought that would correspond to disabling them just after if p == 0 here.
jl_mutex_lock can get away with doing potentially suspect things because it can ensure they are indistinguishable to the user
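To make the ordering concern concrete, a simplified sketch of the matching unlock path (again an assumption about the internal `_get`/`_set!` helpers, not the literal merged code): the lock word is released first, and only then are finalizers re-enabled, so any finalizer deferred during the critical section runs outside it.

```julia
# Sketch only: `_get`/`_set!` stand in for the internal atomic load/store helpers.
function unlock(l::SpinLock)
    _get(l) == 0 && error("unlock count must match lock count")
    _set!(l, 0)                 # release the lock word first
    GC.enable_finalizers(true)  # then re-enable finalizers; deferred ones may run now
    return
end
```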
Force-pushed from 8864d52 to 112101a.
Works fine with CUDA.jl (which had its own version of this).
The failure on Linux 32-bit is related.
Force-pushed from 112101a to 85ca118.
Okay, I changed the implementation, as adding an error check for that revealed a number of violators. Should be happy now?
Force-pushed from 85ca118 to a4f8ea4.
Okay, I changed the implementation, as that was not sufficient. Should be happy now? Unrelated, I also reduced the size of jl_task_t here by 2 pointers, to be consistent in the implementation.
Linux 64-bit is still not happy.
Okay, I changed the implementation, as I missed a line of code, and added some assertions for it. Should be happy now? Will squash now, as CI is green.
Force-pushed from fb9cfe9 to 64b71a5.
Finalizers can't safely acquire many essential locks (such as the iolock, to clean up libuv objects) if they are run inside another lock. Therefore, inhibit all finalizers on the thread until all locks are released (previously, this was only true for our internal locks).
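As an illustration of the hazard described above (a hypothetical example, not code from this PR): a finalizer that needs a lock the current thread already holds would recursively acquire it if finalizers could run inside the critical section; with finalizers inhibited while any lock is held, the finalizer is deferred until the last unlock. The `REGISTRY_LOCK`, `Handle`, and `make_handle` names are made up for the example.

```julia
using Base.Threads: SpinLock

const REGISTRY_LOCK = SpinLock()   # hypothetical lock guarding some registry

mutable struct Handle
    id::Int
end

function make_handle(id)
    h = Handle(id)
    # Hypothetical cleanup: the finalizer needs REGISTRY_LOCK to unregister `h`.
    finalizer(h) do obj
        lock(REGISTRY_LOCK)
        try
            # ... remove `obj` from the registry ...
        finally
            unlock(REGISTRY_LOCK)
        end
    end
    return h
end

make_handle(1)               # no live reference kept, so the Handle is collectable

lock(REGISTRY_LOCK)
try
    GC.gc()                  # the Handle finalizer is not run here: it is deferred
finally
    unlock(REGISTRY_LOCK)    # once the last lock is released, deferred finalizers may run
end
```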
Force-pushed from 64b71a5 to 72bebdf.
Finalizers can't safely acquire many essential locks (such as the iolock, to clean up libuv objects) if they are run inside another lock. Therefore, inhibit all finalizers on the thread until all locks are released (previously, this was only true for our internal locks). (cherry-picked from 59aedd1)
…ia#38487) (JuliaLang/julia#38753) Finalizers can't safely acquire many essential locks (such as the iolock, to clean up libuv objects) if they are run inside another lock. Therefore, inhibit all finalizers on the thread until all locks are released (previously, this was only true for our internal locks). (cherry-picked from ec8466b)