
first draft of Binomial.icdf issue #6612 - PyData 2024 Hackathon #7362

Draft · wants to merge 1 commit into base: main
4 changes: 4 additions & 0 deletions pymc/distributions/discrete.py
@@ -173,6 +173,10 @@ def logcdf(value, n, p):
msg="n >= 0, 0 <= p <= 1",
)

def icdf(value, n, p):
cdf_vals = Binomial.logcdf(pt.arange(0, n), n, p)
return pt.argmax(cdf_vals > pt.log(value))
Comment on lines +177 to +178
Member


This is clever but wildly inefficient for large n?

Member


Also would probably need to pass axis=-1 to argmax
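To illustrate the point with plain NumPy (a sketch, not the PyTensor code): without an explicit axis, `argmax` operates on the flattened array, so a batched `value` would get a single flat index instead of one index per batch row.

```python
import numpy as np

# Two batch rows of (log-)CDF values; made-up numbers for illustration.
log_cdf = np.log(np.array([[0.1, 0.4, 0.8, 1.0],
                           [0.3, 0.6, 0.9, 1.0]]))
log_q = np.log(0.5)

flat = np.argmax(log_cdf > log_q)               # index into the flattened array
per_row = np.argmax(log_cdf > log_q, axis=-1)   # one index per batch row

print(flat)     # 2
print(per_row)  # [2 1]
```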

Author


You are totally right, it is slow for large n, since it is O(n). I did flag this point at the hackathon and would agree that for that reason this is an ad-hoc implementation, but I was encouraged to submit it anyway. ;) If you feel it is too slow for production, I'm totally fine if you reject this PR.

I would note, though, that for very large n one is usually better off using a normal approximation instead anyway.

And on the upside: this trick can be used to quickly get an implementation of the icdf for any discrete distribution, so one idea we had at the event was to use it as a first draft and make it faster later if needed. One might be able to get the complexity down to O(log(n)) if one replaces this with some sort of bisection.

Having a fast icdf just for Binomial might require quite a bit more effort, e.g. some numerical root finder on the CDF (which for Binomial is the regularized incomplete beta function). So I think the options are:

  1. take this idea as a draft but reject it for production use (maybe keep it in mind for unit tests)
  2. take this idea and accept that it is slow for now
  3. try to tweak it down to O(log(n))
  4. try to lift and shift a more tailored approach based on the beta function (will probably require a bit more effort)

Of course, I'm open to ideas if you see another option. ;-)
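For concreteness, option 3 could look something like the following SciPy-based sketch (illustrative only; the function name is made up, and `scipy.stats.binom.cdf` stands in for the PyTensor `logcdf` used in the PR):

```python
from scipy import stats

def binomial_icdf_bisect(q, n, p):
    """Hypothetical O(log n) sketch: binary search for the smallest k
    with CDF(k) >= q over the support {0, ..., n}."""
    lo, hi = 0, n
    while lo < hi:
        mid = (lo + hi) // 2
        if stats.binom.cdf(mid, n, p) >= q:
            hi = mid        # CDF already reaches q at mid; search the left half
        else:
            lo = mid + 1    # CDF still below q; search the right half
    return lo
```

By construction this agrees with `scipy.stats.binom.ppf`, which is likewise defined as the smallest k whose CDF reaches q.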

Member


Thanks for the explanation!

I think we should try 3/4, and we can use this approach to obtain an independent reference for testing?

Perhaps the binary search wouldn't be too hard to implement with a ScalarLoop (so it is trivial to vectorize as an Elemwise operation)?

https://github.com/pymc-devs/pytensor/blob/main/pytensor/scalar/loop.py

The requirement is that the CDF expression be composed only of Scalar (or Elemwise in the batched version) operations.

The code may be a bit daunting but here are some cases where we use it: https://github.com/pymc-devs/pytensor/blob/efa845a3484915e4e15a928918fa97d081886d50/pytensor/scalar/math.py#L870

Or tests that may be more readable: https://github.com/pymc-devs/pytensor/blob/main/tests/scalar/test_loop.py
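As a rough illustration of what such a loop body might compute (plain Python, not the actual ScalarLoop API; the function name and signature are made up): since a ScalarLoop runs a fixed number of steps, the binary search can use a static iteration count of ceil(log2(n + 1)), which is enough to narrow the n + 1 candidates down to one.

```python
import math

def icdf_binary_search_steps(q, n, cdf):
    """Hypothetical loop body one might port to a ScalarLoop: binary
    search with a data-independent step count, so the loop structure
    is static and trivially vectorizable."""
    lo, hi = 0, n
    for _ in range(math.ceil(math.log2(n + 1))):
        mid = (lo + hi) // 2
        ge = cdf(mid) >= q                          # has the CDF reached q yet?
        lo, hi = (lo, mid) if ge else (mid + 1, hi)
    return lo  # smallest k with cdf(k) >= q
```

For example, with the toy CDF `lambda k: (k + 1) / 10` on support {0, ..., 9}, `icdf_binary_search_steps(0.35, 9, ...)` returns 3, the smallest k with CDF at least 0.35.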



class BetaBinomial(Discrete):
R"""