
Add API for getting a bool with chance of exactly 1-in-10 or 2-in-3 #491

Merged (1 commit) on Jun 9, 2018

Conversation

@sicking (Contributor) commented Jun 3, 2018

The gen_weighted_bool function is being deprecated in 0.5 since it's not very flexible and can only generate a bool with a 1-in-N probability.

However, the gen_bool replacement has the problem that it can't generate bools with exact ratios. For example, rng.gen_bool(0.1) won't return true exactly 10% of the time, since an f64 can't express the value 0.1 exactly. Additionally, the implementation of Bernoulli introduces further bias through the multiplication with u64::MAX and the rounding to u64.

Another way to look at it: since the implementation of Bernoulli generates a u64 without rejecting any values, it's impossible to generate an unbiased 1/10 distribution.

The same applies to rng.gen_bool(2.0 / 3.0) or any other ratio.

This PR adds an rng.gen_ratio(numerator, denominator) function, which takes two numbers and generates a boolean that is true exactly numerator/denominator of the time, or at least as exactly as our Uniform implementation is for the given type. So rng.gen_ratio(2u8, 3u8) will be unbiased, whereas rng.gen_ratio(2f32, 3f32) will suffer some rounding error.

This PR also adds Bernoulli::new_ratio which creates a Distribution<bool> with the same property.

I'm not a particularly big fan of the names gen_ratio and new_ratio. Happy to take better suggestions.
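
A minimal usage sketch of the proposed API (the exact signature is still discussed below; gen_ratio here is the method this PR proposes):

use rand::Rng;

fn main() {
    let mut rng = rand::thread_rng();
    // True 2-in-3 of the time, with no floating-point rounding involved,
    // unlike gen_bool(2.0 / 3.0).
    if rng.gen_ratio(2u8, 3u8) {
        println!("hit");
    }
}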

@sicking (Contributor Author) commented Jun 3, 2018

One thing that really confused me in this patch is why the existing code below compiles:

#[derive(Clone, Copy, Debug)]
pub struct Uniform<X: SampleUniform> {
    inner: X::Sampler,
}

pub trait SampleUniform: Sized {
    type Sampler: UniformSampler<X = Self>;
}

pub trait UniformSampler: Sized {
    ...
}

How can Uniform derive Clone, Copy, and Debug without any requirement that X::Sampler implement those traits?

As soon as I did

#[derive(Clone, Copy, Debug)]
pub struct Bernoulli<T: SampleUniform> {
    /// Probability of success, relative to the maximal integer.
    distribution: Uniform<T>,
    limit: T,
}

rustc started complaining that Uniform<T> might not implement the needed traits, even though it doesn't complain about the same thing for the X::Sampler field inside Uniform.

@dhardy (Member) commented Jun 3, 2018

However, the gen_bool replacement has the problem that it can't generate bools with exact ratios.

Bias due to FP rounding is actually very small, something like 1 in 2^53 (so the effect of the bias is only visible with more than 2^53 samples).

Additionally, the implementation of Bernoulli introduces further bias through the multiplication with u64::MAX and the rounding to u64.

This isn't bias; it's supposed to round like that. It just ended up being the easiest way to get the constant we wanted.


#[derive(..)] is a weird beast; it will implement those traits only where generic type parameters allow it. For example, UniformInt supports Copy etc. The compiler might be complaining that the traits don't get implemented at an actual use site, not because of derive?
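
For illustration, a sketch of roughly what the derive expands to here, assuming rustc's (somewhat surprising) behavior of moving bounds for associated-type fields into a where clause instead of checking them at the definition:

impl<X: SampleUniform + Clone> Clone for Uniform<X>
    where X::Sampler: Clone
{
    // The field's type is the projection X::Sampler, so the bound on it
    // lands in the where clause and is only checked where Uniform<X> is
    // actually used, not when the impl is defined.
    fn clone(&self) -> Self {
        Uniform { inner: self.inner.clone() }
    }
}

Under that assumption, a field of type Uniform<T> gets no such bound, since it is neither a type parameter nor a projection, which would be consistent with the E0204 error shown below.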

@sicking (Contributor Author) commented Jun 3, 2018

Additionally, the implementation of Bernoulli introduces further bias through the multiplication with u64::MAX and the rounding to u64.

This isn't bias; it's supposed to round like that. It just ended up being the easiest way to get the constant we wanted.

Sorry, I don't think I expressed that very clearly. What I mean is that Bernoulli ultimately generates a value in the u64 range, and then maps values under some N to true and other values to false. This will inherently have some bias, since the number of values a u64 can express isn't divisible by 10. So even without rounding issues in floats, we'll always end up with a biased result.
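
To make the pigeonhole argument concrete, a toy version with a hypothetical 4-bit generator:

// A 4-bit generator yields 16 values; 16 is not divisible by 10, so six
// buckets receive two values and four receive only one. The same pigeonhole
// argument applies to 2^64, which is not divisible by 10 either.
fn bucket_counts() -> [u32; 10] {
    let mut counts = [0u32; 10];
    for v in 0u32..16 {
        counts[(v % 10) as usize] += 1;
    }
    counts // [2, 2, 2, 2, 2, 2, 1, 1, 1, 1]
}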

Isn't this why Uniform calculates a rejection zone? If we didn't care about small biases, couldn't we optimize Uniform<u8> up through Uniform<u32>?

#[derive(..)] is a weird beast; it will implement those traits only where generic type parameters allow it. For example, UniformInt supports Copy etc.

Interesting! I'll see if I can take advantage of that.

@sicking (Contributor Author) commented Jun 3, 2018

Doing

pub struct Bernoulli<T: SampleUniform + PartialOrd> {
    /// Probability of success, relative to the maximal integer.
    distribution: Uniform<T>,
    limit: T,
}

just gets me:

error[E0204]: the trait `Copy` may not be implemented for this type
  --> src/distributions/bernoulli.rs:35:17
   |
35 | #[derive(Clone, Copy, Debug)]
   |                 ^^^^
...
38 |     distribution: Uniform<T>,
   |     ------------------------ this field does not implement `Copy`

@sicking (Contributor Author) commented Jun 3, 2018

Ah, but it does work if I do:

#[derive(Clone, Copy, Debug)]
pub struct Bernoulli<T: SampleUniform + PartialOrd> {
    /// Probability of success, relative to the maximal integer.
    distribution: T::Sampler,
    limit: T,
}

No idea why though?

Happy to do that if that's preferred?

@dhardy (Member) commented Jun 3, 2018

Integer sampling can have huge biases without a rejection zone when the range being sampled is close (within an order of magnitude) to the sample-space range. Granted, both integer and FP arithmetic are binary (thus cannot represent fractions perfectly when the denominator is not a power of 2), but the biases are typically small and (especially with 64 bits) hard to observe.

This isn't to say we shouldn't add exact fractional sampling, but it does significantly reduce the motivation.

@sicking (Contributor Author) commented Jun 3, 2018

I guess I don't understand why we then bother with rejection zones for the types u8, u16 and u32, given that most RngCore implementations generate a u64 as fast as a u32? Skipping them would save both the zone calculation and the branch that checks whether we're in the zone.

@dhardy (Member) commented Jun 3, 2018

This is true, but I guess you misread the code: the u8 and u16 implementations both sample from u32 numbers.

@sicking (Contributor Author) commented Jun 4, 2018

I feel like we're talking past each other. What I'm suggesting, if we don't care about small biases, is something like this for u8/i8/u16/i16:

fn sample_single<R: Rng>(low: u8, high: u8, rng: &mut R) -> u8 {
    let range = (high - low) as u32;
    let v: u32 = rng.gen();
    (v % range) as u8 + low
}

and for u32/i32:

fn sample_single<R: Rng>(low: u32, high: u32, rng: &mut R) -> u32 {
    let range = (high - low) as u64;
    let v: u64 = rng.gen();
    (v % range) as u32 + low
}

Potentially the last line could be written as v.wmul(range).0 as $ty + low, which would save a couple more cycles, but I haven't analyzed the math for that approach enough to fully understand how it biases.
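
For reference, a sketch of that widening-multiply variant, written with plain casts (wmul as referenced above is rand's internal helper):

// Widening multiply: take the high 32 bits of the 64-bit product. The total
// bias is the same as for `v % range`, but there is no division, and the
// over-represented outputs are spread across the range rather than clustered
// at its low end.
fn mul_shift(v: u32, range: u32) -> u32 {
    ((v as u64 * range as u64) >> 32) as u32
}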

Either way, this is notably faster than both the current sample_single and sample implementations.

Based on my reading of the code, I thought the whole point of the zone calculation, and of the Uniform distribution caching it for integers, was to avoid even minimal biases. Possibly except for u64 and u128, where the biases indeed could get big without further tricks.

Is the purpose something else?

@dhardy (Member) commented Jun 4, 2018

You're right @sicking; this comes down to whether we care about small biases or not. To be honest I never thought seriously about sampling small integer ranges like that, which is also part of the reason.

In your example sampling u8 via u32, without going through the maths, I think the maximum possible bias is just under 1 in 2^(32-8), i.e. approximately one observable error per 16 million experiments. So how much bias is acceptable?

One thing to bear in mind is the limits of current computers: approximately 2^32 cycles per second and about 2^25 seconds per year, i.e. 2^57 cycles per core per year. This is far larger than 2^24, though admittedly also larger than 2^53, so I guess you are correct that we should care about that bias.

As for the approach, I'm wondering why you chose to make Bernoulli support both FP and fractional probabilities? Perhaps it does make sense, but why didn't you just add something like this?

fn gen_ratio(&mut self, num: u64, denom: u64) -> bool {
    assert!(denom >= 1);
    num > self.gen_range(0, denom)
}
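
(For the record, this sketch is exact: gen_range(0, denom) yields each of the denom values with equal probability, and exactly num of them satisfy the comparison, so the result is true with probability num/denom.)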

@sicking (Contributor Author) commented Jun 4, 2018

One thing to bear in mind is the limits of current computers.

I totally agree that the biases here are really small. I don't have a strong opinion about how large a bias we should worry about; I just think we should be consistent. Since we seemed to care about the bias elsewhere, I was surprised that we wouldn't for bool generation as well.

But I also think there's a place for supporting slightly biased, perf-optimized sampling, as @pitdicker suggests here.

I was planning on exploring that elsewhere. It's somewhat tricky, since simply adding new Distribution implementations doesn't, for example, enable shuffling and other sequence operations without worrying about small biases. I don't know what a good API for this would look like, or what's worth supporting in this space.

As for the approach, I'm wondering why you chose to make Bernoulli support both FP and fractional probabilities? Perhaps it does make sense, but why didn't you just add something like this?

The code that you've posted is exactly what's in the PR, minus the assert, since gen_range already asserts that.

I added Bernoulli mainly for completeness. I.e., since we support bool generation as a Distribution, it seems sensible to support unbiased bool generation as a Distribution too.

@sicking (Contributor Author) commented Jun 4, 2018

Oh, I guess your gen_ratio explicitly uses u64, rather than anything that implements SampleUniform. I guess I don't have a strong opinion. Consistency with gen_range was my thinking.

Supporting any SampleUniform doesn't require the Bernoulli changes. Those are entirely orthogonal.

@sicking (Contributor Author) commented Jun 4, 2018

Also, to be clear, the idea of this API wasn't just to fix bias issues. It was also to have a nice-to-use API for cases where you want specifically a bool with an N-in-M chance. I think it's common to think of an event's chance as "2 out of 3" rather than as p = 0.6667, especially when dealing with discrete events like dice rolls or card draws. Possibly that's why we ended up with a discrete API like gen_weighted_bool before we got gen_bool.

So having an API that reflects that made sense to me.

But the ability to remove bias is certainly also a goal.

@sicking (Contributor Author) commented Jun 4, 2018

In your example sampling u8 via u32, without going through the maths, I think the maximum possible bias is just under 1 in 2^(32-8), i.e. approximately one observable error per 16 million experiments.

Actually, the bias is just under 1 in (2^32 - 2^8). I.e. about one in 4 billion.

@pitdicker (Contributor) commented:

I like the idea of an extra new_ratio constructor for the Bernoulli distribution! Seems like a convenient method for several use cases.

But I think the idea that gen_bool etc. are biased is wrong, and @dhardy already commented on that. While any value/ratio that cannot be represented as an f64 will have some theoretical bias, it is not detectable in practice; that would require an impractically large number of runs. We should especially be careful not to suggest in the documentation that gen_bool or Bernoulli::new are better avoided, or at least make sure not to confuse users.

Can an implementation of Bernoulli::new_ratio use exactly the same sampling as the current Bernoulli::new, i.e. just calculate a 'cutoff point' as a u64 value? I don't think it makes sense to support integer sizes > 32 bits, since practical ratios will be small, and I don't see smaller integer sizes as having much value either. So there is no need to make it generic.

    pub fn new_ratio(numerator: u32, denominator: u32) -> Bernoulli;
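
(For context, cutoff-point sampling works roughly like the sketch below; the precise comparison Bernoulli uses is an assumption here.)

    use rand::Rng;

    // Sketch: precompute p_int ~= p * 2^64 once; each sample is then a
    // single u64 draw and one comparison.
    pub struct Bernoulli { p_int: u64 }

    impl Bernoulli {
        pub fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> bool {
            rng.gen::<u64>() < self.p_int
        }
    }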

But I also think there's a place for supporting slightly biased, perf-optimized sampling, as @pitdicker suggests here.

For small integers, and without using the modulus method, sampling from ranges with a tiny bias but better performance is for many uses a good trade-off. It still seems useful to me, but it needs some careful documentation that details when it can be used, and when it should not be.

@sicking (Contributor Author) commented Jun 4, 2018

I agree that detecting a bias in gen_bool is basically impossible. But isn't that equally true if we use a naive implementation for Uniform<u8/u16/u32>?

Can an implementation of Bernoulli::new_ratio use exactly the same sampling as the current Bernoulli::new?

Sure, simply doing

    pub fn new_ratio(numerator: u32, denominator: u32) -> Bernoulli {
        Self::new(numerator as f64 / denominator as f64)
    }

would work.

@sicking (Contributor Author) commented Jun 4, 2018

Pushed a patch which is more in line with what it sounds like people are thinking.

I left Bernoulli implemented as on master, and just added a simple

    pub fn new_ratio(numerator: u32, denominator: u32) -> Bernoulli {
        Self::new(numerator as f64 / denominator as f64)
    }

I also changed the signature of the Rng method to simply gen_ratio(&mut self, numerator: u32, denominator: u32) -> bool.

As for implementing Rng::gen_ratio, there are basically three ways that I can think of. Implementations with perf numbers below:

  • Using self.gen_range(0, denominator) < numerator:
    test misc_gen_ratio_const            ... bench:       6,193 ns/iter (+/- 318)
    test misc_gen_ratio_var              ... bench:       8,966 ns/iter (+/- 181)
    
  • Using (numerator as f64 / denominator as f64) > self.gen():
    test misc_gen_ratio_const            ... bench:       4,276 ns/iter (+/- 260)
    test misc_gen_ratio_var              ... bench:       4,572 ns/iter (+/- 36)
    
  • Using self.sample(distributions::Bernoulli::new_ratio(numerator, denominator)):
    test misc_gen_ratio_const            ... bench:       4,308 ns/iter (+/- 138)
    test misc_gen_ratio_var              ... bench:       6,430 ns/iter (+/- 155)
    

So it seems pretty clear that we should go with the middle approach.

I also looked at implementing Rng::gen_bool(p) using p > self.gen(). In some of my test runs that made misc_gen_ratio_const go faster, in some it made it go slower.

However, it always made misc_gen_ratio_var go faster. And misc_gen_ratio_var is arguably the more common use case, since if you have a constant p and care about perf, you should sample from a Bernoulli instead.

For either method, though, I'm happy to go with whatever implementation approach people prefer.

@dhardy (Member) commented Jun 5, 2018

I don't see much point in adding a method that just does (numerator as f64 / denominator as f64) > self.gen() (especially the Bernoulli constructor), but ...

I opened #494 to discuss this type of compromise, but I don't really have an answer.

@sicking (Contributor Author) commented Jun 5, 2018

I see it both as an ease-of-use thing and as something that gives us the flexibility to change the implementation if a better strategy becomes available. For example, if we decide that a slightly biased gen_range() is OK, then that might well be the faster way to implement it.

I also think it's helpful in cases where developers don't realize that the implementation is as simple as it is, or where it's not obvious what the fastest implementation strategy is.

You could similarly argue that gen_bool is not needed given that the implementation is just p > self.gen(). And the standard library similarly has both Option::is_none() and Option::is_some(), even though one just calls the other.

But adding convenience functions definitely needs to be weighed against the cost of a larger API. I wouldn't think that cost is worth it if I didn't think that getting a bool with an M-in-N chance is a very common operation.

The whole reason I got interested in contributing to rand was that when I started coding Rust and needed random numbers, the rand 0.3 API felt clunky enough that I just opted for writing my own RNG suited to my needs. The API now is certainly a lot better than it was in 0.3, and I'm sure I was affected by being a Rust beginner and not yet familiar with idiomatic Rust.

However, I still feel like we can make the API easier to use. I know that doesn't mean adding every conceivable API and hoping it gets used (in fact, my first PR removed API). But I do think it will involve adding convenient APIs for the most common operations; gen_ratio and gen_below fall into that category for me.

@dhardy (Member) commented Jun 6, 2018

We already had some long discussions on gen_bool: #411, #347, #320, #308. I don't think we should change it again here (though #494 may give us a new take on it). Your version actually reduces precision, and @vks argues that we should use the same implementation as Bernoulli.

I am tentatively in favour of the other changes (adding Rng::gen_ratio and Bernoulli::new_ratio), though:

  • Do we want only u32 type?
  • Is Bernoulli::from_ratio better?
  • Do we actually want the additional Bernoulli constructor anyway? The range-based implementation you wrote previously was very nice, but I'm not sure we'd want to add it because it makes the Bernoulli type larger for f64-based uses too; i.e. if we wanted to have a high-accuracy version it might be better to use a separate type (e.g. BernoulliRatio). The simpler alternative of only adding Rng::gen_ratio may be the better option for now.

@sicking (Contributor Author) commented Jun 6, 2018

Do we want only u32 type?

I don't have a strong opinion. I originally used a generic implementation, but changed it based on @pitdicker's comment. In practice I would expect the most common use case will be to call with constants, like let rolled_a_six = rng.gen_ratio(1, 6).

But I'm happy to go either way.

Is Bernoulli::from_ratio better?

Yes!

Unless there's a strong convention to use new_x as the name for constructors in Rust? But we already have SeedableRng::from_seed and SeedableRng::from_rng, so yeah, Bernoulli::from_ratio seems much nicer.

Do we actually want the additional Bernoulli constructor anyway?

I think the added usability is worth it. But it's obviously a judgement call.

I don't have a strong opinion about the implementation strategy unless we decide that we care about even minimal biases. I don't know that the size matters much, given how good Rust is at keeping things on the stack or inline with other objects. But the new approach is certainly both simpler and faster, just not as exact.

@dhardy (Member) left a comment

Well, I think we could add this, but a few changes are needed to the PR as mentioned.

Also please clean up the commits: rebase and squash (or drop) a few commits. We don't need to implement and then immediately remove some functionality again. (You may want to preserve your other implementation of Bernoulli::new_ratio in a separate branch, though I don't think it's likely we'll use it.)

///
/// # Panics
///
/// If `denominator` == 0 or `numerator` > `denominator`.
Member:

Quotes around the whole comparison please (denominator == 0).

src/lib.rs Outdated
let d = distributions::Bernoulli::new(p);
self.sample(d)
fn gen_ratio(&mut self, numerator: u32, denominator: u32) -> bool {
self.gen_bool(numerator as f64 / denominator as f64)
Member:

Unless there's a good reason for this, I think we should just wrap Bernoulli::from_ratio instead.

src/lib.rs Outdated
@@ -533,16 +535,34 @@ pub trait Rng: RngCore {
/// let mut rng = thread_rng();
/// println!("{}", rng.gen_bool(1.0 / 3.0));
/// ```
#[inline]
fn gen_bool(&mut self, p: f64) -> bool {
assert!(0.0 <= p && p <= 1.0);
Member:

As mentioned, please don't change this function in this PR. If you really think it needs changing make a new PR/issue, because it's a significant discussion point on its own.

@dhardy dhardy added F-new-int Functionality: new, within Rand T-distributions labels Jun 7, 2018
#[inline]
pub fn new_ratio(numerator: u32, denominator: u32) -> Bernoulli {
Self::new(numerator as f64 / denominator as f64)
}
Contributor:

If you change this a bit it can improve performance:

    pub fn new_ratio(numerator: u32, denominator: u32) -> Bernoulli {
        assert!(numerator <= denominator);
        if numerator == denominator {
            return Bernoulli { p_int: ::core::u64::MAX }
        }
        const SCALE: f64 = 2.0 * (1u64 << 63) as f64;
        let p_int = ((numerator as f64 / denominator as f64) * SCALE) as u64;
        Bernoulli { p_int }
    }
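
(The numerator == denominator special case is presumably needed because p = 1.0 would make the float-to-u64 cast overflow: 1.0 * SCALE is 2^64, which is out of range for a u64, and out-of-range float-to-integer casts were undefined behavior in Rust at the time.)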

Before (with Rng::gen_ratio changed to use Bernoulli::new_ratio):

test misc_gen_ratio_const            ... bench:       3,961 ns/iter (+/- 2)
test misc_gen_ratio_var              ... bench:       8,383 ns/iter (+/- 31)

After:

test misc_gen_ratio_const            ... bench:       3,980 ns/iter (+/- 9)
test misc_gen_ratio_var              ... bench:       7,351 ns/iter (+/- 84)

It would be nice if there was a method to calculate the 'cutoff integer' more accurately than converting to floats and back, but I can't think of anything that is not 15× slower.

benches/misc.rs Outdated
b.iter(|| {
let mut accum = true;
for i in 2..(::RAND_BENCH_N as u32 + 2) {
accum ^= rng.gen_ratio(1, i);
Contributor:

Good benchmark 👍. In theory the multiplication by the numerator can be optimized out. What do you think of

        for i in 1..(::RAND_BENCH_N as u32 + 1) {
            accum ^= rng.gen_ratio(i, i + 1);

Contributor Author:

Looks good!

@@ -401,7 +401,7 @@ pub trait Rng: RngCore {
/// ```
///
/// [`Uniform`]: distributions/uniform/struct.Uniform.html
fn gen_range<T: PartialOrd + SampleUniform>(&mut self, low: T, high: T) -> T {
fn gen_range<T: SampleUniform>(&mut self, low: T, high: T) -> T {
Contributor:

👍

src/lib.rs Outdated
fn gen_weighted_bool(&mut self, n: u32) -> bool {
// Short-circuit after `n <= 1` to avoid panic in `gen_range`
n <= 1 || self.gen_range(0, n) == 0
// Short-circuit after `n <= 1` to avoid panic in `gen_ratio`
Contributor:

This comment is not correct with the change. But maybe it is best to not change this deprecated method (also it will be removed soon #499).

Contributor Author:

I don't see what's incorrect about the comment, but I'll leave this function unchanged.

@pitdicker (Contributor) commented:

Do we want only u32 type?

A u64 type allows many ratios as input that we can't really satisfy exactly, and I also don't see many uses for large ratios. Especially as there is also Bernoulli::new, and other alternatives such as rng.gen_range(0, denominator) < numerator. And what advantage would smaller types give that would make it worth the extra complexity?

For the rest I agree with the remaining comments of @dhardy 😄.

@pitdicker (Contributor) commented:

@dhardy I think it is nice to merge this PR first, and I'll rebase #500.

One more idea. Not really sure about it.

    #[inline]
    pub fn new_ratio(numerator: u32, denominator: u32) -> Bernoulli {
        #[inline]
        fn div_rem(num: u64, divisor: u64) -> (u64, u64) {
            (num / divisor, num % divisor)
        }

        assert!(numerator <= denominator);
        if numerator == denominator {
            return Bernoulli { p_int: ::core::u64::MAX }
        }

        let num = numerator as u64;
        let denom = denominator as u64;
        // Because we can't do `div = 2^64 / denom`, we do
        // `div = (2^64 - denom) / denom + 1`.
        let (div, rem) = div_rem(0u64.wrapping_sub(denom), denom);
        let p_int = (div + 1) * num + rem * num / denom;
        Bernoulli { p_int }
    }

This makes misc_gen_ratio_var about 2.3× slower, but should be accurate for 63 bits (or 64 bits with an extra addition). It doesn't influence misc_gen_ratio_const.
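
To check the math: since 2^64 - denom = denom * div + rem, we have 2^64 = denom * (div + 1) + rem, so (div + 1) * num + rem * num / denom is exactly floor(2^64 * num / denom). And since num < denom < 2^32 here (equality is handled by the early return), none of the intermediate products can overflow a u64.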

@sicking It seems to me from_ratio would be used mostly with constants. Although it doesn't bring much benefit, for constants we get the extra accuracy essentially for free. Also there are easy alternatives if misc_gen_ratio_var turns out to be a bottleneck. Do you think it is worth it?

@sicking (Contributor Author) commented Jun 8, 2018

I'll have to check the math on that code in the morning. But I assume you're correct about the precision :).

I don't think the added precision is worth making gen_ratio 2.3× slower, even if it's just in the var case. I think it's useful in a number of scenarios with variable arguments, for example in IteratorRandom::choose here.

But that's pretty easily fixable by using the ctor you are proposing, while making Rng::gen_ratio use one of the other implementation strategies suggested above.

@dhardy (Member) left a comment

Okay, will read again but after the doc change I think we can finally merge this!

@@ -523,8 +523,6 @@ pub trait Rng: RngCore {

/// Return a bool with a probability `p` of being true.
///
/// This is a wrapper around [`distributions::Bernoulli`].
Member:

Please at least mention this in the doc; same with gen_ratio below.

Contributor Author:

It's a comment, so I'll add this back if really desired. However, I don't think it's a good idea to promise a particular implementation strategy; that just seems likely to complicate future arguments about whether changing the implementation is a breaking change or not.

If we want a pointer to Bernoulli, maybe do what we do for gen_range and say: See also the [Bernoulli] distribution type, which may be faster if sampling with the same probability repeatedly.

Member:

Sure, just a pointer to Bernoulli is sufficient. You're right that we don't need to specify this constraint in the doc, though I don't see it as a big deal.

@dhardy dhardy merged commit 7ae36ee into rust-random:master Jun 9, 2018
@sicking sicking deleted the gen_ratio branch June 9, 2018 18:39
@dhardy dhardy mentioned this pull request Jun 20, 2018
@dhardy dhardy added this to the 0.6 release milestone Aug 23, 2018