
Chain ranges differ #30

Closed
itsdfish opened this issue Jul 10, 2019 · 6 comments

Comments

@itsdfish
Collaborator

Strange. This might be related to this issue with CmdStan. I'll try to look into this further by the weekend.

My best guess is that I made an error in excluding the adaptation samples, or that the chain object has the wrong range because something changed in Turing or CmdStan.

Here is the error from @xukai92:

I also tried increasing the number of simulations. The density plot looks better now :)
MCMCBenchmarkGaussian.zip

@itsdfish Also, I attempted to increase the number of iterations, and hit this error:

ERROR: LoadError: On worker 4:
ArgumentError: chain ranges differ
cat3 at /Users/kai/.julia/packages/MCMCChains/loNyJ/src/chains.jl:683
#cat#39 at /Users/kai/.julia/packages/MCMCChains/loNyJ/src/chains.jl:625 [inlined]
#cat at ./none:0 [inlined]
chainscat at /Users/kai/.julia/packages/MCMCChains/loNyJ/src/chains.jl:695
_mapreduce at ./reduce.jl:313
_mapreduce_dim at ./reducedim.jl:308
#mapreduce#548 at ./reducedim.jl:304 [inlined]
mapreduce at ./reducedim.jl:304 [inlined]
#reduce#549 at ./reducedim.jl:348 [inlined]
reduce at ./reducedim.jl:348
#cross_samplerRhat!#39 at /Users/kai/projects/MCMCBenchmarks.jl/src/MCMCBenchmarks.jl:108
#cross_samplerRhat! at ./none:0
#benchmark!#36 at /Users/kai/projects/MCMCBenchmarks.jl/src/MCMCBenchmarks.jl:94
#benchmark! at ./none:0
#benchmark#41 at /Users/kai/projects/MCMCBenchmarks.jl/src/MCMCBenchmarks.jl:138
#benchmark at ./none:0 [inlined]
pfun at /Users/kai/projects/MCMCBenchmarks.jl/src/MCMCBenchmarks.jl:152 [inlined]
#43 at /Users/kai/projects/MCMCBenchmarks.jl/src/MCMCBenchmarks.jl:154
#112 at /Users/osx/buildbot/slave/package_osx64/build/usr/share/julia/stdlib/v1.1/Distributed/src/process_messages.jl:269
run_work_thunk at /Users/osx/buildbot/slave/package_osx64/build/usr/share/julia/stdlib/v1.1/Distributed/src/process_messages.jl:56
macro expansion at /Users/osx/buildbot/slave/package_osx64/build/usr/share/julia/stdlib/v1.1/Distributed/src/process_messages.jl:269 [inlined]
#111 at ./task.jl:259
Stacktrace:
 [1] (::getfield(Base, Symbol("##696#698")))(::Task) at ./asyncmap.jl:178
 [2] foreach(::getfield(Base, Symbol("##696#698")), ::Array{Any,1}) at ./abstractarray.jl:1866
 [3] maptwice(::Function, ::Channel{Any}, ::Array{Any,1}, ::Array{Int64,1}) at ./asyncmap.jl:178
 [4] #async_usemap#681 at ./asyncmap.jl:154 [inlined]
 [5] #async_usemap at ./none:0 [inlined]
 [6] #asyncmap#680 at ./asyncmap.jl:81 [inlined]
 [7] #asyncmap at ./none:0 [inlined]
 [8] #pmap#215(::Bool, ::Int64, ::Nothing, ::Array{Any,1}, ::Nothing, ::Function, ::Function, ::WorkerPool, ::Array{Int64,1}) at /Users/osx/buildbot/slave/package_osx64/build/usr/share/julia/stdlib/v1.1/Distributed/src/pmap.jl:126
 [9] pmap(::Function, ::WorkerPool, ::Array{Int64,1}) at /Users/osx/buildbot/slave/package_osx64/build/usr/share/julia/stdlib/v1.1/Distributed/src/pmap.jl:101
 [10] #pmap#225(::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::Function, ::Function, ::Array{Int64,1}) at /Users/osx/buildbot/slave/package_osx64/build/usr/share/julia/stdlib/v1.1/Distributed/src/pmap.jl:156
 [11] pmap at /Users/osx/buildbot/slave/package_osx64/build/usr/share/julia/stdlib/v1.1/Distributed/src/pmap.jl:156 [inlined]
 [12] #pbenchmark#42(::Base.Iterators.Pairs{Symbol,Any,NTuple{4,Symbol},NamedTuple{(:Nsamples, :Nadapt, :delta, :Nd),Tuple{Int64,Int64,Float64,Array{Int64,1}}}}, ::Function, ::Tuple{CmdStanNUTS{Stanmodel},AHMCNUTS{typeof(AHMCGaussian),WARNING: both AdvancedHMC and DynamicHMC export "NUTS"; uses of it in module MCMCBenchmarks must be qualified
Turing.Inference.NUTS{Turing.Core.ForwardDiffAD{40},(),DiagEuclideanMetric}}}, ::Function, ::Int64) at /Users/kai/projects/MCMCBenchmarks.jl/src/MCMCBenchmarks.jl:154
 [13] (::getfield(MCMCBenchmarks, Symbol("#kw##pbenchmark")))(::NamedTuple{(:Nsamples, :Nadapt, :delta, :Nd),Tuple{Int64,Int64,Float64,Array{Int64,1}}}, ::typeof(pbenchmark), ::Tuple{CmdStanNUTS{Stanmodel},AHMCNUTS{typeof(AHMCGaussian),Turing.Inference.NUTS{Turing.Core.ForwardDiffAD{40},(),DiagEuclideanMetric}}}, ::Function, ::Int64) at ./none:0
 [14] top-level scope at none:0
 [15] include at ./boot.jl:326 [inlined]
 [16] include_relative(::Module, ::String) at ./loading.jl:1038
 [17] include(::Module, ::String) at ./sysimg.jl:29
 [18] exec_options(::Base.JLOptions) at ./client.jl:267
 [19] _start() at ./client.jl:436
in expression starting at /Users/kai/projects/MCMCBenchmarks.jl/Examples/Gaussian/Gaussian_Example.jl:53

Any idea where it comes from?
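For readers hitting the same error: it is thrown by MCMCChains when `chainscat` is asked to concatenate chains whose iteration ranges differ. A minimal sketch of how the mismatch can arise (chain sizes and parameter names here are illustrative, not taken from the benchmark):

```julia
using MCMCChains

# Two chains with different iteration ranges, e.g. because one sampler
# kept 1000 post-warmup draws while another kept 2000 (as happens when
# only one of num_samples / num_warmup is updated in a configuration).
chn1 = Chains(randn(1000, 2, 1), [:mu, :sigma])
chn2 = Chains(randn(2000, 2, 1), [:mu, :sigma])

# chainscat concatenates along the chain dimension and requires the
# iteration ranges to match, so this throws
# `ArgumentError: chain ranges differ`.
chainscat(chn1, chn2)
```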

@goedman
Member

goedman commented Jul 10, 2019

Chris, earlier there was some mention of a change in Turing dropping the warmup samples. I don't think that has happened yet. Could that be the culprit?

@itsdfish
Collaborator Author

Hey Rob, I can confirm that Turing v0.6.18, the newest version, automatically excludes warmup samples. Kai added a fix to reflect that change. So my best guess is that I made an error in the logic, which only became noticeable once the number of iterations was changed.
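A sketch of why this change can silently break existing trimming logic (values and variable names are illustrative):

```julia
using MCMCChains

n_adapt, n_samples = 1000, 2000

# Before Turing v0.6.18: sample() returned warmup + post-warmup draws,
# so benchmarking code trimmed the warmup manually.
chn_old = Chains(randn(n_adapt + n_samples, 1, 1), [:mu])
post    = chn_old[(n_adapt + 1):end, :, :]   # correct: 2000 draws remain

# After v0.6.18: warmup draws are already excluded, so applying the
# same trim drops real samples and leaves a chain whose range no
# longer matches chains produced by the other samplers.
chn_new   = Chains(randn(n_samples, 1, 1), [:mu])
too_short = chn_new[(n_adapt + 1):end, :, :] # only 1000 draws remain
```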

@goedman
Member

goedman commented Jul 10, 2019

Ah, good to know; in that case I also need to update the TuringModels.jl repo.

Today I wanted to set up the docs framework for MCMCBenchmarks.

@itsdfish
Collaborator Author

The problem was that I was updating only one of the two fields, num_samples and num_warmup, in the CmdStan configuration object, which caused the chain range mismatch.
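A sketch of the fix, assuming the CmdStan.jl API of the time (the `Stanmodel`/`Sample` constructor signature and field names are best-effort recollection, and the Stan program is elided):

```julia
using CmdStan

Nsamples, Nadapt = 2000, 1000

# Both settings must move together: updating only num_samples leaves
# the number of post-warmup draws out of sync with the other samplers,
# which later surfaces as `chain ranges differ` when concatenating.
stanmodel = Stanmodel(
    Sample(num_samples = Nsamples, num_warmup = Nadapt);
    name  = "Gaussian",
    model = "...",   # Stan program elided
)
```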

In the process of debugging, I ran a new benchmark and saw that Turing is closing the gap on AdvancedHMC:

summary_time.pdf

See the old benchmark for comparison.

It looks like Turing still has more memory allocations:

summary_allocations.pdf

@goedman
Member

goedman commented Jul 11, 2019

Very nice improvement! Should we drop the DynamicNUTS option? I don't think it will ever be mainstream, and if it is not maintained by the Turing team it will be hard to keep it working reliably.

@itsdfish
Collaborator Author

I agree. Dropping DynamicNUTS is a good idea. I'll make an issue and remove it over the weekend.
