Make HMM learning work with variable length time series #99
Comments
See also #50
Just commenting here to note that this request (or variants of it) has come up multiple times in the past few weeks. A simple change would be to make the low-level inference code allow missing data, and then update the model-based code when time allows. The HMM inference code is simple enough: you can indicate missing data by passing zeros to the corresponding rows of the log likelihoods.
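To see why zero rows work, here is a minimal log-space forward algorithm (an illustrative sketch, not dynamax's actual `hmm_filter`): a row of zeros means every state has likelihood one at that step, so the step contributes nothing and is effectively marginalized out.

```python
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp

def hmm_marginal_logprob(log_pi, log_A, log_lls):
    """Forward algorithm in log space over a (T, K) matrix of
    per-step, per-state log likelihoods. A row of zeros means
    likelihood 1 in every state, so that step is marginalized out
    and the result is the log prob of the observed steps only."""
    def step(log_alpha, ll_t):
        # predict: log(alpha @ A), then condition on this step's emission
        log_alpha = logsumexp(log_alpha[:, None] + log_A, axis=0) + ll_t
        return log_alpha, None
    log_alpha, _ = jax.lax.scan(step, log_pi + log_lls[0], log_lls[1:])
    return logsumexp(log_alpha)
```

As a sanity check, a sequence where every step is "missing" (all-zero log likelihoods) has marginal probability exactly one.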
If we pass the valid length of each sequence, we can `lax.scan` only over that prefix.
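One caveat: `lax.scan` requires a static trip count, so "scanning only over the prefix" in practice usually means scanning the full padded sequence and masking out steps at or beyond the valid length. A toy sketch of the masking pattern (a hypothetical helper, not dynamax code):

```python
import jax
import jax.numpy as jnp

def masked_sum(xs, length):
    """Sum the first `length` entries of a padded array with lax.scan.
    The scan always runs over the whole padded sequence; steps at or
    beyond `length` are masked so they contribute nothing."""
    def step(carry, inp):
        t, x = inp
        return carry + jnp.where(t < length, x, 0.0), None
    total, _ = jax.lax.scan(step, 0.0, (jnp.arange(xs.shape[0]), xs))
    return total
```

The same mask-inside-the-scan pattern works for the forward filter: the carry is simply left unchanged on padded steps.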
I actually have a fork of dynamax that handles missingness for the EKF (as well as allowing time-varying transitions and emissions): KeAWang@a991219. Though it's not for the HMM, I'm happy to open a PR for it.
Sure, that would be great, Alex!
Hey, just seeing this library and thread, and wondering how close this is to being ready. Our group works a lot with sparse annotations, and it would be great to be able to build HMM models that can work with missing data (both on the training side and the filtering side).
I don't think the current `hmm_fit_sgd` function is using the length of the time series as we hoped. At least with the default loss function, the length is just scaling the loss. Really, we need to change `marginal_log_prob` to only compute the log probability of observations up to the specified length.

Following up on our Slack conversation, I see two ways of doing that:

1. Change `_conditional_logliks` to put zeros wherever the emission is nan. That way the `hmm_filter` will still compute the marginal log prob of just the observed data. I think this trick should also leave the `hmm_smoother` computations unchanged. A cool added benefit is that it would allow us to interpolate over chunks of missing data. It would look like this:
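The original snippet did not survive here, but a minimal sketch of the idea, assuming scalar Gaussian emissions (the function name and signature are illustrative, not dynamax's actual internals), might look like:

```python
import jax.numpy as jnp
from jax.scipy.stats import norm

def conditional_logliks(emissions, means, scales):
    """Log p(y_t | z_t = k) for each time step t and state k, with nan
    emissions contributing zero log likelihood (marginalized out)."""
    # (T, 1) against (K,) broadcasts to a (T, K) log-likelihood matrix
    lls = norm.logpdf(emissions[:, None], means, scales)
    # Zero out rows where the emission is missing, so hmm_filter
    # computes the marginal log prob of just the observed data.
    return jnp.where(jnp.isnan(emissions)[:, None], 0.0, lls)
```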
I tested this out, and the only problem is that we can't take gradients back through this function with respect to the model parameters. They nan out because one of the paths through the `where` is nan. See jax-ml/jax#1052.

There's a somewhat clunky fix: find the nans first, replace them with a default value for the emissions, compute the log likelihoods, and then zero out the entries that were originally nan. That would look something like this:
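A hedged sketch of that fix, again assuming scalar Gaussian emissions (illustrative names, not dynamax's actual code) — this is the standard "double where" workaround from the linked JAX issue:

```python
import jax.numpy as jnp
from jax.scipy.stats import norm

def conditional_logliks_safe(emissions, means, scales):
    """Nan-safe conditional log likelihoods: gradients with respect
    to means/scales stay finite even when emissions contain nans."""
    mask = jnp.isnan(emissions)
    # 1. Replace missing emissions with a harmless default value...
    filled = jnp.where(mask, 0.0, emissions)
    # 2. ...compute log likelihoods on the filled-in data...
    lls = norm.logpdf(filled[:, None], means, scales)
    # 3. ...then zero out entries that were originally nan, so missing
    #    steps contribute nothing to the marginal log probability.
    return jnp.where(mask[:, None], 0.0, lls)
```

Because `norm.logpdf` never sees a nan input, both branches of the final `where` have finite gradients.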
It's not the prettiest, but it works.
2. Pass the valid sequence lengths to inference functions like `hmm_filter`. Then those functions would need to use a while loop to dynamically stop the message passing once the length has been reached. (I tried implementing this by calling filter on a dynamic slice of the data, but JAX barfed on that...) This approach is totally doable, but it would lead to lots of extra logic in the inference code.

I'm working on a demo of approach 1 right now. Will keep you posted!
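For reference, approach 2 might look roughly like this (an illustrative sketch, not dynamax code; note that `lax.while_loop` is not reverse-mode differentiable in JAX, which is one source of the extra complexity):

```python
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp

def filter_to_length(log_pi, log_A, log_lls, length):
    """Run the forward recursion only up to `length` using
    lax.while_loop, instead of scanning the whole padded array.
    Caveat: lax.while_loop does not support reverse-mode autodiff,
    so this can't be trained through directly with jax.grad."""
    def cond(state):
        t, _ = state
        return t < length
    def body(state):
        t, log_alpha = state
        log_alpha = logsumexp(log_alpha[:, None] + log_A, axis=0) + log_lls[t]
        return t + 1, log_alpha
    _, log_alpha = jax.lax.while_loop(cond, body, (1, log_pi + log_lls[0]))
    return logsumexp(log_alpha)
```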