
Py::drop_ref #3870

Closed
juntyr opened this issue Feb 19, 2024 · 2 comments · Fixed by #3871

Comments

juntyr (Contributor) commented Feb 19, 2024

The documentation for Py currently states that

If a Py<T> is dropped while its thread happens to be holding the GIL then the Python reference count will be decreased immediately. Otherwise, the reference count will be decreased the next time the GIL is reacquired.

(https://docs.rs/pyo3/latest/pyo3/prelude/struct.Py.html#a-note-on-python-reference-counts)

However, Py's drop impl seems to unconditionally call gil::register_decref to defer the decref even when the GIL is currently held (https://docs.rs/pyo3/latest/src/pyo3/instance.rs.html#1010-1014). This may be needed for performance but makes explicit memory management difficult.

EDIT (thanks @adamreichold): Py's drop impl calls gil::register_decref which checks if the GIL is held to execute the decref immediately or defer it, which can be expensive.

In parallel to the explicit Py::clone_ref, I propose adding the following Py::drop_ref method that can be used to explicitly drop a Py if we already hold the GIL:

impl<T> Py<T> {
    pub fn drop_ref(self, py: Python) {
        let _py = py;

        // Safety: we hold the GIL and forget `self` to not trigger a double free
        unsafe { pyo3::ffi::Py_DECREF(self.into_ptr()) };
    }
}
adamreichold (Member) commented Feb 19, 2024

However, Py's drop impl seems to unconditionally call gil::register_decref to defer the decref even when the GIL is currently held (https://docs.rs/pyo3/latest/src/pyo3/instance.rs.html#1010-1014). This may be needed for performance but makes explicit memory management difficult.

Note that gil::register_decref does check whether the GIL is held and, if so, applies the change immediately, cf.

pyo3/src/gil.rs

Lines 454 to 460 in 0bb9cab

pub unsafe fn register_decref(obj: NonNull<ffi::PyObject>) {
    if gil_is_acquired() {
        ffi::Py_DECREF(obj.as_ptr())
    } else {
        POOL.register_decref(obj);
    }
}

i.e. it does not unconditionally queue up the change in the global REFERENCE_POOL to be applied only on the next acquisition of the GIL, so managing memory via Py's impl of Drop should already be deterministic.

The drop_ref method you propose would then be a micro-optimization, analogous to clone_ref, that mainly avoids the call to gil_is_acquired and hence the thread-local storage access it implies.

Would you open a pull request proposing that addition?

juntyr (Contributor, Author) commented Feb 19, 2024

Note that gil::register_decref does check if the GIL is held and if so, immediately applies the change […]

Whoops, my mistake - thanks for the correction :)

I'll still submit a PR adding the micro-optimisation method.
