
Closes #3051 - logp numpy array input fixed #3836

Merged: 12 commits on May 5, 2020
6 changes: 3 additions & 3 deletions in pymc3/distributions/multivariate.py
@@ -27,7 +27,7 @@
 from theano.tensor.slinalg import Cholesky
 import pymc3 as pm

-from pymc3.theanof import floatX
+from pymc3.theanof import floatX, intX
 from . import transforms
 from pymc3.util import get_variable_name
 from .distribution import (Continuous, Discrete, draw_values, generate_samples,
@@ -327,7 +327,7 @@ def logp(self, value):
         TensorVariable
         """
         quaddist, logdet, ok = self._quaddist(value)
-        k = value.shape[-1].astype(theano.config.floatX)
+        k = intX(value.shape[-1]).astype(theano.config.floatX)
Member: If you want k to become floatX, you can use floatX(…) directly.

Contributor Author: Since value.shape[-1] gives out a plain Python int, I was using intX to convert it to an array form so that the error is fixed, but I get your point; we can directly do k = floatX(value.shape[-1]).
Also, do you mean adding a test for this bug? I'll do that right away.
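
For context, a minimal numpy-only sketch of the failure mode from #3051 (made-up shapes; the intX/floatX behaviour is approximated here with plain np.asarray calls):

```python
import numpy as np

value = np.zeros((10, 3))   # a numpy array passed to logp, as in #3051

k = value.shape[-1]         # a plain Python int (3), not a tensor/array
# k.astype("float64")       # AttributeError: 'int' object has no attribute 'astype'

# Wrapping the int first, which is what intX(...)/floatX(...) effectively do,
# restores the .astype path and avoids the error:
k = np.asarray(value.shape[-1]).astype("float64")
print(k)                    # 3.0
```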

         norm = - 0.5 * k * pm.floatX(np.log(2 * np.pi))
         return bound(norm - 0.5 * quaddist - logdet, ok)
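
As a sanity check of this normalisation (a sketch with made-up values, assuming _quaddist returns the squared Mahalanobis distance and half of log|Σ| as logdet, which is what the expression above implies):

```python
import numpy as np
from scipy import stats

mu = np.zeros(3)
cov = 2.0 * np.eye(3)
x = np.ones(3)

k = float(x.shape[-1])
quaddist = (x - mu) @ np.linalg.solve(cov, x - mu)  # squared Mahalanobis distance
logdet = 0.5 * np.log(np.linalg.det(cov))           # log |Sigma|^(1/2)
logp = -0.5 * k * np.log(2 * np.pi) - 0.5 * quaddist - logdet

# Agrees with scipy's reference implementation
assert np.isclose(logp, stats.multivariate_normal(mu, cov).logpdf(x))
```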

@@ -441,7 +441,7 @@ def logp(self, value):
         TensorVariable
         """
         quaddist, logdet, ok = self._quaddist(value)
-        k = value.shape[-1].astype(theano.config.floatX)
+        k = intX(value.shape[-1]).astype(theano.config.floatX)

         norm = (gammaln((self.nu + k) / 2.)
                 - gammaln(self.nu / 2.)
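
The hunk is truncated here. For reference, the remaining terms follow the standard multivariate Student-t density; a numpy/scipy sketch with made-up values (scipy >= 1.6 provides multivariate_t as a reference):

```python
import numpy as np
from scipy import stats
from scipy.special import gammaln

nu, k = 5.0, 3
mu, cov, x = np.zeros(k), np.eye(k), np.ones(k)

quaddist = (x - mu) @ np.linalg.solve(cov, x - mu)   # squared Mahalanobis distance
logdet = 0.5 * np.log(np.linalg.det(cov))            # log |Sigma|^(1/2)
norm = (gammaln((nu + k) / 2.0)
        - gammaln(nu / 2.0)
        - 0.5 * k * np.log(nu * np.pi))              # normalisation, as in the hunk above
inner = -(nu + k) / 2.0 * np.log1p(quaddist / nu)
logp = norm + inner - logdet

# Agrees with scipy's reference implementation
assert np.isclose(logp, stats.multivariate_t(mu, cov, df=nu).logpdf(x))
```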