
ERROR: MethodError: no method matching length(::Nothing) #768

Closed
wocaishiniliu opened this issue Sep 29, 2022 · 2 comments

Comments

@wocaishiniliu

I am a total newbie with this package...
I tried to run the code in this link: https://sebastiancallh.github.io/post/neural-ode-weather-forecast/

Of course, I changed some code to fit the new version of Julia.

Question: has sciml_train been banned?

In this part, I am not sure whether the lower_bounds and upper_bounds parameters are wrong.

    res = DiffEqFlux.sciml_train(
        loss, θ, opt;
        lower_bounds = [-3.0, -3.0], upper_bounds = [6.8, 6.8],
        maxiters = maxiters,
        kwargs...
    )

Here is my code after the changes:

using DataFrames, CSV
delhi_train = CSV.read("E:\\julia_scipts\\DailyDelhiClimateTrain.csv",DataFrame)
delhi_test = CSV.read("E:\\julia_scipts\\DailyDelhiClimateTest.csv",DataFrame)
delhi = vcat(delhi_train, delhi_test)

using Statistics, Dates
using Base.Iterators: take, cycle

delhi[:,:year] = Float64.(Dates.year.(delhi[:,:date]))
delhi[:,:month] = Float64.(Dates.month.(delhi[:,:date]))
df_mean = combine(groupby(delhi, [:year, :month]),[:meantemp, :humidity, :wind_speed, :meanpressure] .=> mean)
rename!(df_mean, [:year, :month, :meantemp,
                :humidity, :wind_speed, :meanpressure])

df_mean[!,:date] .= df_mean[:,:year] .+ df_mean[:,:month] ./ 12;

features = [:meantemp, :humidity, :wind_speed, :meanpressure]

t = df_mean[:, :date] |>
      t -> t .- minimum(t) |>
      t -> reshape(t, 1, :)

y = df_mean[:, features] |>
      y -> Matrix(y)' |>
      y -> (y .- mean(y, dims = 2)) ./ std(y, dims = 2)

T = 20
train_dates = df_mean[1:T, :date]
test_dates = df_mean[T+1:end, :date]
train_t, test_t = t[1:T], t[T:end]
train_y, test_y = y[:,1:T], y[:,T:end];



using Lux, Flux, DiffEqFlux, DifferentialEquations, Optimization, OptimizationOptimJL, Random, Plots

function neural_ode(t, data_dim; saveat = t)
    f = Lux.Chain(Lux.Dense(data_dim, 64, swish),
                  Lux.Dense(64, 32, swish),
                  Lux.Dense(32, data_dim))

    node = NeuralODE(f, (minimum(t), maximum(t)), Tsit5(),
                     saveat = saveat, abstol = 1e-9,
                     reltol = 1e-9)
end

function train_one_round(node, θ, y, opt, maxiters,
                         y0 = y[:, 1]; kwargs...)
    predict(θ) = Array(node(y0, θ))
    loss(θ) = begin
        ŷ = predict(θ)
        Flux.mse(ŷ, y)
    end

    θ = θ === nothing ? node.p : θ
    res = DiffEqFlux.sciml_train(
        loss, θ, opt;
        lower_bounds = [-3.0, -3.0], upper_bounds = [6.8, 6.8],
        maxiters = maxiters,
        kwargs...
    )
    return res.minimizer
end
function train(θ = nothing, maxiters = 150, lr = 1e-2)
    log_results(θs, losses) =
        (θ, loss) -> begin
            push!(θs, copy(θ))
            push!(losses, loss)
            false
        end

    θs, losses = [], []
    num_obs = 4:4:length(train_t)
    for k in num_obs
        node = neural_ode(train_t[1:k], size(y, 1))
        θ = train_one_round(
            node, θ, train_y[:, 1:k],
            ADAMW(lr), maxiters;
            cb = log_results(θs, losses)
        )
    end
    θs, losses
end

Random.seed!(1)
θs, losses = train();

Here are the error messages:

┌ Warning: sciml_train is being deprecated in favor of direct usage of Optimization.jl. Please consult the Optimization.jl documentation for more details. Optimization.jl's PolyOpt solver is the polyalgorithm of sciml_train
└ @ DiffEqFlux C:\Users\a li.julia\packages\DiffEqFlux\Em1Aj\src\train.jl:6
ERROR: MethodError: no method matching length(::Nothing)
Closest candidates are:
length(::Union{Base.KeySet, Base.ValueIterator}) at abstractdict.jl:58
length(::CUDA.CUSPARSE.CuSparseMatrix) at C:\Users\a li.julia\packages\CUDA\DfvRa\lib\cusparse\array.jl:223
length(::Union{NNlib.BatchedAdjoint{T, S}, NNlib.BatchedTranspose{T, S}} where {T, S}) at C:\Users\a li.julia\packages\NNlib\0QnJJ\src\batched\batchedadjtrans.jl:58
...
Stacktrace:
[1] sciml_train(::var"#loss#53"{Matrix{Float64}, var"#predict#52"{NeuralODE{Lux.Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{Lux.Dense{true, typeof(swish), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Lux.Dense{true, typeof(swish), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Lux.Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}}, Nothing, Nothing, Tuple{Float64, Float64}, Tuple{Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}}, Base.Pairs{Symbol, Any, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:saveat, :abstol, :reltol), Tuple{Vector{Float64}, Float64, Float64}}}}, Vector{Float64}}}, ::Nothing, ::Flux.Optimise.Optimiser, ::Nothing; lower_bounds::Vector{Float64}, upper_bounds::Vector{Float64}, cb::Function, callback::Function, maxiters::Int64, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ DiffEqFlux C:\Users\a li.julia\packages\DiffEqFlux\Em1Aj\src\train.jl:9
[2] train_one_round(node::NeuralODE{Lux.Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{Lux.Dense{true, typeof(swish), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Lux.Dense{true, typeof(swish), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Lux.Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}}, Nothing, Nothing, Tuple{Float64, Float64}, Tuple{Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}}, Base.Pairs{Symbol, Any, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:saveat, :abstol, :reltol), Tuple{Vector{Float64}, Float64, Float64}}}}, θ::Nothing, y::Matrix{Float64}, opt::Flux.Optimise.Optimiser, maxiters::Int64, y0::Vector{Float64}; kwargs::Base.Pairs{Symbol, var"#28#30"{Vector{Any}, Vector{Any}}, Tuple{Symbol}, NamedTuple{(:cb,), Tuple{var"#28#30"{Vector{Any}, Vector{Any}}}}})
@ Main .\REPL[59]:10
[3] train(θ::Nothing, maxiters::Int64, lr::Float64)
@ Main .\REPL[38]:13
[4] train()
@ Main .\REPL[38]:2
[5] top-level scope
@ REPL[60]:1
[6] top-level scope
@ C:\Users\a li.julia\packages\CUDA\DfvRa\src\initialization.jl:52

@ChrisRackauckas
Member

It hasn't been "banned"; it's just deprecated, because that functionality became a fully documented package of its own, Optimization.jl:

┌ Warning: sciml_train is being deprecated in favor of direct usage of Optimization.jl. Please consult the Optimization.jl documentation for more details. Optimization.jl's PolyOpt solver is the polyalgorithm of sciml_train
└ @ DiffEqFlux C:\Users\a li.julia\packages\DiffEqFlux\Em1Aj\src\train.jl:6

Now as for your issue:

ERROR: MethodError: no method matching length(::Nothing)

It's saying θ is nothing. With Lux, node.p is nothing. You should do p, st = Lux.setup(rng, f). This is explained in the tutorials:

https://diffeqflux.sciml.ai/dev/examples/neural_ode/
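A minimal sketch of the pattern described above, assuming a recent DiffEqFlux/Optimization.jl stack. The toy data, the ComponentArrays flattening, and the Adam optimizer choice are illustrative assumptions, not taken from the issue:

```julia
using Lux, DiffEqFlux, DifferentialEquations
using Optimization, OptimizationOptimisers, ComponentArrays, Random

# Toy stand-ins for the training data in the issue (4 features, 20 time points).
t  = range(0.0, 1.0; length = 20)
y  = randn(4, 20)
y0 = y[:, 1]

f = Lux.Chain(Lux.Dense(4, 64, swish),
              Lux.Dense(64, 32, swish),
              Lux.Dense(32, 4))

# The key fix: with Lux, parameters live outside the model, so node.p is
# `nothing` — that is what triggered `length(::Nothing)` inside sciml_train.
# Lux.setup produces the actual parameters (and layer state) explicitly.
rng = Random.default_rng()
p, st = Lux.setup(rng, f)
p = ComponentArray(p)            # flatten to a vector the optimizer can use

node = NeuralODE(f, (minimum(t), maximum(t)), Tsit5(); saveat = t)

# A Lux-backed NeuralODE call returns (solution, state); keep the solution.
predict(θ) = Array(first(node(y0, θ, st)))
loss(θ, _) = sum(abs2, predict(θ) .- y)

# sciml_train replacement: set up an Optimization.jl problem explicitly.
optf = OptimizationFunction(loss, Optimization.AutoZygote())
prob = OptimizationProblem(optf, p)
res  = solve(prob, OptimizationOptimisers.Adam(1e-2); maxiters = 150)
```

Passing `nothing` as θ into the old `train_one_round` then falls back to `node.p`, which explains the original stack trace.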

@wocaishiniliu
Author

Great, that helps a lot.
