
Fix for pymc-devs#3346. `draw_values` had a statement saying that if the name of a variable was in the supplied `point`, then the value for that variable would be taken from the `point` dictionary. This was too permissive, and differed from what is done in `_draw_value`: there, the value can be taken from the `point` dictionary only if the variable has the `model` attribute, i.e. it is an RV instance rather than a Deterministic.
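The guard the commit adds can be illustrated in isolation. This is a hypothetical, simplified sketch (the function and class names below are invented for illustration, not PyMC3 API): a value is read from `point` only when the parameter is a model RV, i.e. it carries a `model` attribute, while a Deterministic falls through so its value is recomputed.

```python
# Simplified sketch of the tightened lookup in draw_values.
# `draw_value_from_point`, `RV`, and `Deterministic` here are toy stand-ins,
# not the real PyMC3 classes.

def draw_value_from_point(param, point):
    """Return point[param.name] only for model RVs; otherwise None,
    signalling the caller to actually compute/draw the value."""
    name = getattr(param, 'name', None)
    # The fix: also require a `model` attribute, so Deterministics
    # are never short-circuited with a stale value from `point`.
    if name is not None and hasattr(param, 'model') and name in point:
        return point[name]
    return None


class RV:
    """Toy stand-in for a PyMC3 random variable (has a `model`)."""
    def __init__(self, name, model):
        self.name = name
        self.model = model


class Deterministic:
    """Toy stand-in for a Deterministic (no `model` attribute)."""
    def __init__(self, name):
        self.name = name


point = {'x': 1.5, 'p': 0.3}
assert draw_value_from_point(RV('x', model=object()), point) == 1.5
assert draw_value_from_point(Deterministic('p'), point) is None
```

Recomputing the Deterministic instead of reusing `point['p']` is what makes posterior predictive sampling work when a shared input has changed shape since `pm.sample()`.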
lucianopaz committed Jan 24, 2019
1 parent f245f11 commit 8f70acc
Showing 3 changed files with 29 additions and 2 deletions.
1 change: 1 addition & 0 deletions RELEASE-NOTES.md
@@ -11,6 +11,7 @@
 - Added the `broadcast_distribution_samples` function that helps broadcasting arrays of drawn samples, taking into account the requested `size` and the inferred distribution shape. This sometimes is needed by distributions that call several `rvs` separately within their `random` method, such as the `ZeroInflatedPoisson` (Fix issue #3310).
 - The `Wald`, `Kumaraswamy`, `LogNormal`, `Pareto`, `Cauchy`, `HalfCauchy`, `Weibull` and `ExGaussian` distributions `random` method used a hidden `_random` function that was written with scalars in mind. This could potentially lead to artificial correlations between random draws. Added shape guards and broadcasting of the distribution samples to prevent this (Similar to issue #3310).
 - Added a fix to allow the imputation of single missing values of observed data, which previously would fail (Fix issue #3122).
+- Fix for #3346. The `draw_values` function was too permissive with what could be grabbed from inside `point`, which led to an error when sampling posterior predictives of variables that depended on shared variables that had changed their shape after `pm.sample()` had been called.
 
 ### Deprecations
 
4 changes: 2 additions & 2 deletions pymc3/distributions/distribution.py
@@ -309,7 +309,7 @@ def draw_values(params, point=None, size=None):
             # param was drawn in related contexts
             v = drawn[(p, size)]
             evaluated[i] = v
-        elif name is not None and name in point:
+        elif name is not None and hasattr(p, 'model') and name in point:
             # param.name is in point
             v = point[name]
             evaluated[i] = drawn[(p, size)] = v
@@ -487,7 +487,7 @@ def _draw_value(param, point=None, givens=None, size=None):

             dist_tmp.shape = distshape
             try:
-                dist_tmp.random(point=point, size=size)
+                return dist_tmp.random(point=point, size=size)
             except (ValueError, TypeError):
                 # reset shape to account for shape changes
                 # with theano.shared inputs
26 changes: 26 additions & 0 deletions pymc3/tests/test_sampling.py
@@ -302,6 +302,32 @@ def test_model_not_drawable_prior(self):
         samples = pm.sample_posterior_predictive(trace, 50)
         assert samples['foo'].shape == (50, 200)
 
+    def test_model_shared_variable(self):
+        x = np.random.randn(100)
+        y = x > 0
+        x_shared = theano.shared(x)
+        y_shared = theano.shared(y)
+        with pm.Model() as model:
+            coeff = pm.Normal('x', mu=0, sd=1)
+            logistic = pm.Deterministic('p', pm.math.sigmoid(coeff * x_shared))
+
+            obs = pm.Bernoulli('obs', p=logistic, observed=y_shared)
+            trace = pm.sample(100)
+
+        x_shared.set_value([-1, 0, 1.])
+        y_shared.set_value([0, 0, 0])
+
+        samples = 100
+        with model:
+            post_pred = pm.sample_posterior_predictive(trace,
+                                                       samples=samples,
+                                                       vars=[logistic, obs])
+
+        expected_p = np.array([logistic.eval({coeff: val})
+                               for val in trace['x'][:samples]])
+        assert post_pred['obs'].shape == (samples, 3)
+        assert np.allclose(post_pred['p'], expected_p)
 
 
 class TestSamplePPCW(SeededTest):
     def test_sample_posterior_predictive_w(self):
