Passing priors to sampler #94
Comments
I'm fine with both options. Though one remark: why would we generate the samples from the initial distribution in the sampler (and not outside, as done in Alex's current implementation)? This way, we would just have to pass the samples to the sampler (generated by creating a SciPy instantiation of the prior distributions for sampling), and potentially the bounds. But I guess in all subsequent steps of the adaptive sampler this initial pdf is not used anymore?
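The "generate outside, pass samples in" option could look roughly like the sketch below. The prior choices, bounds, and the sampler call are illustrative assumptions, not the project's actual API:

```python
import numpy as np
from scipy.stats import norm, uniform

# Hypothetical sketch: instantiate SciPy versions of the priors outside the
# sampler, draw the initial samples there, and hand the sampler only the
# samples (and, if needed, the bounds).
rng = np.random.default_rng(0)
priors = {
    "a": norm(loc=0.0, scale=1.0),      # assumed prior for parameter "a"
    "b": uniform(loc=0.0, scale=2.0),   # assumed prior for parameter "b"
}
initial_samples = np.column_stack(
    [p.rvs(size=20, random_state=rng) for p in priors.values()]
)
bounds = [(-5.0, 5.0), (0.0, 2.0)]  # assumed parameter bounds

# sampler = AdaptiveSampler(samples=initial_samples, bounds=bounds)  # hypothetical
```

This keeps the sampler agnostic of the prior objects: it only ever sees an array of points and a list of bounds.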
If the samplers don't use the pdf information, then passing the samples is definitely a better option. @JanKoune can probably answer that better. From what I am seeing in
For LHS, this should usually consider the pdf. Only in the case of a uniform distribution is this identical to using the bounds; e.g. for normal distributions with independent variables, you would decompose the sample space into equally probable subintervals (by using the inverse cdf) and then sample within each interval (or take its center). Thus, if the pdf is not used, this is not a pure LHS of the prior but an LHS of a uniform distribution over the bounds. Both options are fine: using the pdf concentrates samples near the mean, whereas uniform sampling gives a better (equally spaced) covering of the domain.
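The pdf-aware variant described above can be sketched with SciPy's QMC module: draw a Latin hypercube design on the unit hypercube, then push each coordinate through the inverse cdf (`ppf`) of the corresponding prior marginal. The two normal priors are assumptions for illustration:

```python
import numpy as np
from scipy.stats import norm, qmc

# LHS that respects the prior pdf: stratify on [0, 1]^d, then map each
# column through the inverse CDF of that parameter's (independent) prior.
sampler = qmc.LatinHypercube(d=2, seed=0)
u = sampler.random(n=10)  # one point per equally probable stratum, per dim

priors = [norm(loc=0.0, scale=1.0), norm(loc=5.0, scale=2.0)]  # assumed priors
samples = np.column_stack([p.ppf(u[:, i]) for i, p in enumerate(priors)])
```

With uniform priors, `ppf` is just an affine map onto the bounds, which recovers the bounds-only LHS that Harlow currently does.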
Indeed, none of the samplers in Harlow use the distributions of the parameters; they only need the bounds. The Harlow implementation of LHS currently assumes uniform distributions for the parameters, as it was meant to be used for creating a global surrogate. For our purposes it would be fine to simply pass the initial samples to the sampler.
I open this issue to discuss the best way of passing the priors to the sampler, to avoid having to pass the whole inverse problem. From what I understand, the prior is defined in the parameter initialization, as is the domain. The prior itself is an object of class `PriorBase`, whose derived classes implement a method to sample it. Right now, all the priors are defined to be SciPy priors, and the `generate_samples` method is implemented in the class `Prior(PriorBase)` for SciPy. This `generate_function` just calls the underlying distribution, which is of class `ProbabilityDistribution` and implements a `__call__` that evaluates the distribution. I see two options:

For our purposes I cannot find many differences between the two approaches. I would probably favour the second one for consuming less memory, unless there are problems I am not seeing right now.