This takes a Turing model and performs quadratic approximation, which means finding the maximum of the log-likelihood (or the minimum of the NLL) and taking the Hessian to find the variance and covariance of the parameters. The goal is to make it just work™, which is why I put some tricks in there, like sampling from the prior to find a starting point and chaining optimizers to make sure it finds a good optimum. I tried to put some checks in there; still, there are probably many ways to make this break. On the simple model in the file it takes ~5 ms on my computer, so I didn't take much time to make it any faster.

Since I consider this very experimental, there is still a list of things that need to be done:

- Importing the file
- Add dependencies
- Tests, and testing this with more than one or two models
- Docstrings
- Formatting? For example, in the existing code I have found some 2 spaces, 4 spaces and tabs. I think it would be good to unify this.
- Add more features like calling `precis`
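To make the idea concrete, here is a minimal pure-Julia sketch of what quadratic approximation does, without Turing or Optim; the data, the known σ, and the single-parameter setup are made up purely for illustration:

```julia
# Minimal sketch of quadratic approximation: minimize the NLL, then use the
# (finite-difference) Hessian at the mode to approximate the variance.
# Toy setup: Normal likelihood with known σ and a single unknown μ.
data = [1.8, 2.4, 2.1, 1.9, 2.6]
σ = 2.0

# Negative log-likelihood up to an additive constant
nll(μ) = sum((x - μ)^2 for x in data) / (2σ^2)

# For this particular likelihood the mode is just the sample mean,
# so no numerical optimizer is needed here
μ̂ = sum(data) / length(data)

# Second derivative of the NLL at the mode, by central finite differences
h = 1e-4
d2 = (nll(μ̂ + h) - 2nll(μ̂) + nll(μ̂ - h)) / h^2

# The inverse Hessian approximates the variance of μ (here: σ^2 / n)
var_μ = 1 / d2
```

With more than one parameter the same idea applies with the full Hessian matrix, whose inverse gives the covariance matrix of the parameters.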
@karajan9 I definitely like this approach. Do you think (know if) it is possible to get hold of the mu & sigma in the prior and simply use those as start values for quap? E.g.:
It would be nice if we could have just a single Turing model definition, instead of having the Turing model version augmented with the return statement just for quap. I think I remember Richard mentioning somewhere in the book that quap might sometimes need some coaching.

Rob
Yes, this would be nicer. Right now, with the model in the file (I guess that's m4.1 or something?), it won't work correctly (I think, haven't tested). It's pretty easy to get the distributions needed:

which means that

should work pretty well. I'm a little wary because of the hardcoded … If I change the model to

then …

In spite of those few open questions, this seems to me like a good idea and a way better solution than what I cooked up.
Hmm, not sure if I didn't like your initial approach better than adding the … So in m4.1t, your solution would be:
which can then be used as in your example:
while the script m4.1t.jl works as before. I actually don't mind at all if we would ask our …
I'm not sure if we aren't talking past each other. The … So basically, there are three possible versions given this model:
With
all 3 optimizations result in (approximately)
and NUTS
It seems like this PR might need a rework soon anyway 😄
Yeah, after TuringLang/Turing.jl#1230 goes through you should be able to just do:

```julia
using Turing
using Optim

@model function something()
    # some kind of model
end

model = something()

# To account for the prior probability
optimize(model, MAP())

# If you just want maximum likelihood
optimize(model, MLE())
```
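If it helps to see how the two objectives differ: MLE optimizes the likelihood alone, while MAP also adds the log-prior. A hedged, self-contained sketch of that difference, without Turing; the conjugate-normal setup and all numbers are made up for illustration:

```julia
# Sketch: MAP vs. MLE for a Normal likelihood with known σ
# and a Normal(μ0, τ) prior on μ. Both objectives are quadratic
# in μ, so the optima have closed forms and no optimizer is needed.
data = [1.8, 2.4, 2.1, 1.9, 2.6]
σ = 2.0           # known likelihood scale
μ0, τ = 0.0, 1.0  # prior mean and scale (made up)

n = length(data)
x̄ = sum(data) / n

# MLE: mode of the likelihood alone, i.e. the sample mean
μ_mle = x̄

# MAP: the prior term pulls the estimate toward μ0,
# weighted by the two precisions n/σ^2 and 1/τ^2
μ_map = (n * x̄ / σ^2 + μ0 / τ^2) / (n / σ^2 + 1 / τ^2)
```

With this fairly tight prior the MAP estimate lands between μ0 and the MLE; as n grows, the likelihood dominates and the two converge.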
Cool!
@karajan9 Is your decision to hold off further work on quap-turing for now until #1230 is released? If so, I hope the effort you put in was still worthwhile! I certainly learned some stuff again, and it was good to refresh my Turing memory a bit!
I think so, yes. It sounds like it's going to be merged relatively soon, and in the meantime I hope the current version will do. |