gh-87135: Hang non-main threads that attempt to acquire the GIL during finalization #28525
This code looks quite complicated just to acquire the GIL.
Why not add a variant of PyGILState_Ensure() which tries to acquire the GIL, but returns a special value if Python is exiting? It would be simpler to use than having to add two new function calls.
Code compatible with old Python versions:
It sounds like your suggestion is that the same PyGILState_STATE enum be used by both the existing PyGILState_Ensure function and the new PyGILState_TryAcquireFinalizeBlockAndGIL function, and that the existing PyGILState_Release function provide either its existing behavior or the behavior of PyGILState_ReleaseGILAndFinalizeBlock, depending on the PyGILState_STATE value.

That is feasible, if we modify the definition of PyGILState_STATE as follows.

Existing definition:
New definition:
The reason we need 3 new states rather than just 1 is that PyGILState_Release would need to know whether to release a finalize block or not.

But it seems like it may be confusing to essentially overload the meaning of PyGILState_Release in this way, just to simplify code that needs to use finalize blocks (which is rare) and that needs to be compatible with Python < 3.12. After a few years, code will likely no longer support Python < 3.12, but this more confusing API will remain. Still, I can certainly make this change if that is desired.
@Yhg1s - any thoughts on this PyGILState_STATE API design question?
I'm not sure I understand the point about PyGILState_Release() being confusing. I think the main question is whether there's a reasonable use-case for the finalize block outside of acquiring the GIL. For the use-case of making sure it's safe to acquire the GIL (and preventing finalization from starting while the GILState is held), I think the PyGILState API makes more sense. At that point it isn't exposed as a separate lock but as a new error condition (and a new safety guarantee when calling TryEnsure).
Framing it as "PyGILState_TryEnsure() is like PyGILState_Ensure() except it also prevents finalization from starting while the GILState is held" makes sense to me. I don't think most users of the C API care if their threads are blocked forever when finalizing, so they can just keep using PyGILState_Ensure. I don't think that's more confusing.
AFAICT, this API is interpreter-specific, rather than tied to a thread or the global runtime. So Py_AcquireFinalizeBlock() might be more consistent with similar existing API (e.g. Py_NewInterpreterFromConfig(), Py_EndInterpreter()).

That said, the proposed function relies on knowing the interpreter associated with the current thread (e.g. via PyInterpreterState_Get()). I'd say we're trending away from that approach generally, and, ideally, we would not introduce new C-API that relies on that implicit knowledge. Instead, it may make more sense to add a variant: PyInterpreterState_AcquireFinalizeBlock().

The caller would explicitly provide the interpreter that should be blocked:
Given what I've said about "finalize block", consider alternate names:
- PyInterpreterState_PreventFini() (and PyInterpreterState_AllowFini())
- Py_PreventInterpreterFini() (and Py_AllowInterpreterFini())
- PyInterpreterState_BlockFini() (and PyInterpreterState_ReleaseFini())
While it is fairly straightforward to make these finalize blocks interpreter-specific, it is not clear to me, with my limited understanding of sub-interpreters, whether that is actually useful.

It isn't clear to me how Py_FinalizeEx interacts with multiple interpreters. It only seems to finalize the current interpreter.

The documentation for Py_EndInterpreter states that the interpreter must "have no other threads". In fact it only does this check after calling the AtExit functions for the interpreter, so it seems it would be sufficient to ensure that all other thread states are destroyed before the AtExit functions finish. But there is also the question of what happens if we try to create a thread state while Py_EndInterpreter is still in progress. Py_EndInterpreter doesn't seem to check for other thread states while holding the HEAD_LOCK, but that is not an issue as long as the check does not fail.

In general, given the "no other threads" constraint for Py_EndInterpreter, it seems that if other non-Python-created or daemon threads hold references to the PyInterpreterState, then some external synchronization mechanism will be needed to ensure that they don't attempt to access the PyInterpreterState once the "no other threads" check completes.

As an example, suppose we have a C extension that provides a Python API that allows Python callbacks to be passed in, and then later calls those Python functions on its own non-Python-created thread pool. If this extension is to support sub-interpreters, then either during multi-phase module initialization, or when it receives the Python callback, it must record the PyInterpreterState associated with the callback. Then, in order to invoke the callback on a thread from its thread pool, it must obtain a PyThreadState for the (thread, interpreter) combination, creating one if one does not already exist. To ensure the PyInterpreterState pointers that it holds remain valid, it would need to register an AtExit function for the interpreter that ensures the PyInterpreterState won't be used. This AtExit function would likely need to essentially implement its own version of the "finalize block" mechanism introduced here.

Given the need for external synchronization of threads when calling Py_EndInterpreter, it seems to me that the finalize block mechanism defined by this PR is only useful for the main interpreter.
Before I dig in to responding: one key point to consider is that Python users (especially extension authors, via the C-API) only interact directly with the global runtime via a few API functions. In nearly every case they are instead interacting with the Python thread (state) or interpreter associated with the current OS thread.
Another key point is that, as of 3.12, each interpreter has its own GIL.
Finally, it is certainly possible that I've misunderstood either the problem you're trying to solve or the way you're trying to solve it or both. I'm completely willing to learn and adjust. Then again, I might be completely right too!
(Sorry for the length of this post. I genuinely want to understand and to be sure we're taking the right approach. I appreciate the work you've done and your willingness to converse.)
Now, on to responses:
Py_FinalizeEx() finalizes the main interpreter and the global runtime. At the point it is called, no other interpreters should exist. Py_EndInterpreter() finalizes any other interpreter, almost entirely in the same way we finalize the main interpreter.

Please clarify. The only thing I see is: "All thread states associated with this interpreter are destroyed."
The behavior should be the same for all interpreters, whether via Py_EndInterpreter() or the main interpreter via Py_FinalizeEx():

Caveats:
- Py_FinalizeEx() does a few extra things at various places, but that should not relate to interpreter lifecycle.
- I do see that it calls _PyThreadState_DeleteExcept() right after step (5), which Py_EndInterpreter() does not do. However, that's unexpected and should probably be resolved.
- There are a few things we don't do in either that we probably should, probably before or right after step (5), e.g. disable the import system, disallow new threads (thread states).
- Also, step (3) only applies to threads created by the threading module. We might want to extend that to all other thread states (i.e. created via PyThreadState_New() or PyGILState_Ensure()).
Looking at Py_EndInterpreter():
- wait_for_thread_shutdown() right before _PyAtExit_Call()
- _PyAtExit_Call()
- finalize_interp_clear()

So I'm not sure what you mean specifically.
FWIW, Py_FinalizeEx() is exactly the same, except currently it does that last part a little earlier with _PyThreadState_DeleteExcept().

Only daemon threads (and, for now, threads (states) created via the C-API) would still be running at that point, and only until step (5) above.
So are we talking about both of the following?
Just to be clear, here are the ways thread states get created:
- Py_Initialize*()
- Py_NewInterpreter*()
- PyThreadState_New()
- PyGILState_Ensure()
- _thread.start_new_thread() (via threading.Thread.start())

At the moment, it's mostly only with that last one that we are careful during runtime/interp finalization.
It occurs to me that this PR is mostly about addressing that: dealing with other thread states in the same way we currently do threads created via the threading module. Does that sound right?
Yeah, we should probably be more deliberate about disallowing that sort of thing during finalization.
That's what the proposed change in the PR is, AFAICS. The API you're adding must be specific to each interpreter, not to the global runtime. The resources that the proposed change protects are per-interpreter resources, not global ones. So I would not expect there to be any additional API or synchronization mechanism other than what you've already proposed (except applied to each interpreter instead of just the main interpreter). Otherwise users of multiple interpreters will still be subject to the problem you're trying to solve.
That's literally what PyGILState_Ensure() is for and does. 😄

Why wouldn't we just exclusively use the mechanism you've proposed here? Why would each interpreter have to have an additional duplicate? Again, the resources we're trying to protect here are specific to each interpreter, not to the global runtime, no?

Hmm, I didn't catch what external synchronization of threads you are talking about. Sorry if I missed it or misunderstood. Please restate what you mean specifically. Thanks!
I don't have any experience with implementing extensions that work with multiple interpreters, but I'm trying to think how that would be done safely.
Let's say this extension lets the user schedule a Python callback to be invoked at a specific time, on a global thread pool not created by Python.
With a single interpreter, the extension may either keep a cached PyThreadState per thread in the pool for the main interpreter, or create it on demand. I haven't checked exactly what happens when trying to create a new PyThreadState while Py_FinalizeEx is running, but I think there is no problem there. The core problem is that the extension could attempt to dispatch a callback just as Py_FinalizeEx is running. Using the existing finalize block mechanism in this PR, the extension can ensure that finalization does not start while a callback is being dispatched, in order to ensure threads in the thread pool won't hang trying to acquire the GIL.

With multiple interpreters, we need a separate PyThreadState per thread in the pool per interpreter, and for each callback that has been scheduled, we also need to store the associated PyInterpreterState*. However, we also need a way to know that the interpreter is exiting, and cancel any scheduled callbacks, so that we don't attempt to use a dangling PyInterpreterState pointer. If PyThreadState objects have been cached for the thread pool threads, we would also need to destroy those PyThreadState objects, to avoid violating the constraint of Py_EndInterpreter. This cancellation mechanism is what I mean by an external synchronization mechanism. Given this external synchronization mechanism, I don't think such an extension would need to use the "finalize block" mechanism. We can't use PyGILState_Ensure because that does not take a PyInterpreterState*, and even if it did, we would need a way to ensure that our PyInterpreterState* is not dangling.
I think I get it now. The "finalize block" API, as proposed, relies on the global runtime state, which is guaranteed to exist in the process, whereas a given interpreter state pointer may have already been freed.
That said, do we continue to have the guarantees we might need relative to the global runtime state? At a certain point we will have freed some of the state the proposed API would need, no? I suppose if we can rely on some final already-finalized flag then we'd be okay.
As to interpreters, even if the target one has been finalized already, we can still know that. Interpreters may be looked up by ID, rather than referenced by pointer. It's an O(n) operation, of course, but I'm not sure that would be a huge obstacle. Likewise the pointer can be checked against the list of alive interpreters to check for validity. Would we need more than that?
I think it is safe to assume that there may be numerous race conditions still remaining in the finalization logic. In particular, I'd assume that calling Py_Initialize again after Py_FinalizeEx could definitely result in a lot of problems. I had put that in the category of things not to be done in production code. At least for the single-interpreter case, with a single call to Py_Initialize, I think such bugs could likely be fixed without further API changes, though.

I am unclear on how sub-interpreters should be handled. Checking if the PyInterpreterState* is valid by checking if it is in the list of alive interpreters could fail if the interpreter is freed and then another allocated again at the same address, unless something is done to prevent that. In addition, if a given C/C++ extension only finds out afterwards that an interpreter has been destroyed, it is too late for it to free any PyObjects it holds, so it would likely end up leaking memory. Therefore an atexit callback would seem to be more appropriate.

For the single-interpreter case we don't care about leaking memory because the program is about to exit anyway.
Since there are some API questions to resolve here, one option may be to split out the change that makes threads hang rather than terminate, which can go in right away and which I expect will be sufficient for almost all single-interpreter use cases. The finalize block API, or other API changes to safely support multi-threading without leaks when interpreters shut down, could then be added later without blocking the fix for the common case.
Agreed @jbms, can you make a PR splitting that part out?
Done, see #105805
FWIW, this should probably go in Include/cpython/pythread.h.
Also, why the leading underscore? If it's not meant for public use then put it in Include/internal/pycore_pythread.h. Otherwise either drop the leading underscore or add the "PyUnstable_" prefix. (See https://devguide.python.org/developer-workflow/c-api/#c-api, AKA PEP 689.)
Done in #105805