One rarely knows ahead of time how long one should run one's chains. So it would be really nice if one could start a chain running, perhaps on a cluster node, with no fixed end time. In this ideal world, one could then load the progress-so-far into another process and use pymc's usual tools to examine the chain.
This appears to be almost possible: you can write a loop that runs the chain for a modest number of steps, calls commit() on the database, and then starts a new chain. The new chain begins where the old one left off, so the burn-in does not have to be repeated. It is not clear to me, from either the documentation or the code, whether the various tuning processes retain their state across this interruption; if they do, this is almost a solution. I say "almost" because it is also not clear whether .commit() is supposed to produce a consistent database (in my tests with the txt backend it sometimes does not).
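The chunked loop described above can be sketched with a toy sampler in plain Python (this is not PyMC's API; `metropolis_chunk` and the list standing in for the database commit are illustrative). The point is that passing the final state of one chunk into the next means the chain simply continues, with no fresh burn-in:

```python
import math
import random

def metropolis_chunk(logp, x0, n_steps, scale=0.5, rng=None):
    """Run n_steps of random-walk Metropolis starting from x0.

    Returns the samples plus the final state, so a later call can
    resume exactly where this one stopped.
    """
    rng = rng or random.Random()
    x, lp = x0, logp(x0)
    samples = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, scale)
        lp_prop = logp(prop)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples, x

# Standard normal target; run the chain in chunks, "committing"
# each chunk (here: appending to a list) before continuing.
logp = lambda x: -0.5 * x * x
state = 10.0          # deliberately bad start, so burn-in matters
trace = []
for chunk in range(5):
    samples, state = metropolis_chunk(logp, state, 200,
                                      rng=random.Random(chunk))
    trace.extend(samples)   # stand-in for db.commit()
```

Whether PyMC's step methods carry their tuning state (here, the analogue of `scale`) across such a restart is exactly the open question in the paragraph above.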
Alternatively, if the database reliably saves the step method's state, one could close the database then reopen it and start a new chain.
But wouldn't it be nice if the sampling process had a checkpointing function? Something like isample's pause function, that saved the database in a consistent state I could copy elsewhere and use.
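A checkpoint along these lines could be sketched as follows, independent of any PyMC API (`checkpoint` and `restore` are hypothetical names; the write-to-a-temp-file-then-rename step is what keeps the on-disk copy consistent, so it can be copied elsewhere at any moment):

```python
import os
import pickle
import tempfile

def checkpoint(path, trace, step_state):
    """Atomically snapshot the trace and the step method's tuning
    state: write to a temp file in the same directory, then rename.
    A reader never sees a half-written file."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump({"trace": trace, "step_state": step_state}, f)
    os.replace(tmp, path)   # atomic on POSIX and Windows

def restore(path):
    """Load a snapshot written by checkpoint()."""
    with open(path, "rb") as f:
        snap = pickle.load(f)
    return snap["trace"], snap["step_state"]
```

Saving the step method's state alongside the trace is the piece that would let a new process resume sampling, not just inspect the chain.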
We have had requests similar to this before (e.g. #143), though chains without a pre-defined sampling length is a new one. I think it is clear that PyMC 3 ought to be able to continue sampling from the point where the last chain left off, and be able to save and restore the state of the sampler.
One idea that we might consider is to monitor the Gelman-Rubin statistic at regular intervals during sampling (of multiple chains) and provide some notification when all stochastics are within some distance of Rhat=1. From this it would be easy to implement something that continues to sample until convergence criteria are met.
Note also that the Hamiltonian MC step methods in PyMC 3 should help deal with convergence issues in a lot of large problems. I encourage you to try them out as we get closer to a beta release.