Markov chain leads to memory saturation #568
Keep in mind that this will create 365 Gurobi models. If each of these has thousands of variables and constraints, then yes... it can use a lot of RAM. But I wouldn't say it's normal: I've solved bigger problems on a smaller machine. How big is each subproblem? (SDDP.jl prints some summary statistics at the start of each run.) It also looks like the error occurs when you are trying to visualize the value function after training? One approach could be to use
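Since the subproblems here use Gurobi, one pattern worth trying (a sketch, not from this thread; it assumes Gurobi.jl's documented shared-environment idiom, with an illustrative toy model) is to create a single `Gurobi.Env` once and reuse it, so that each of the 365 models stops allocating its own licensed environment:

```julia
# Sketch only: the model below is a placeholder; the point is the shared Env.
using SDDP, Gurobi

const GRB_ENV = Gurobi.Env()  # one licensed environment, created once

model = SDDP.LinearPolicyGraph(
    stages = 365,
    sense = :Min,
    lower_bound = 0.0,
    # Reuse GRB_ENV instead of letting every model create its own environment.
    optimizer = () -> Gurobi.Optimizer(GRB_ENV),
) do sp, t
    @variable(sp, x >= 0, SDDP.State, initial_value = 0.0)
    @stageobjective(sp, x.out)
end
```

The same `() -> Gurobi.Optimizer(GRB_ENV)` closure can be passed anywhere a JuMP optimizer constructor is expected.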
Hi Oscar,
Actually, I use ValueFunction to extract the "water values" of my problem, and those are very valuable to me. I call it inside SDDP.simulate. I think the problem is indeed linked to that simulation step: the training phase works well, but the simulation phase saturates the RAM after more than 10 runs. I just ran my code without calling ValueFunction and it worked perfectly. Thanks a lot for your help!

PS: here is the block of code that creates Error 10001, inside the
```julia
for t in 1:N_red  # looping through the stages
    for sim in 1:Nbsimus  # looping through the simulations
        markov_2 = Int(sims[sim][t][:markov])
        V = SDDP.ValueFunction(model, node = (t, markov_2))  # node identification
```

This is creating a new Gurobi model for every iteration. One quick fix would be something like:

```julia
value_functions = Dict(
    node => SDDP.ValueFunction(model; node = node) for node in keys(model.nodes)
)
op_cost_H2 = Array{Float64}(undef, N_red, Nbsimus)
for t in 1:N_red  # looping through the stages
    for sim in 1:Nbsimus  # looping through the simulations
        markov_2 = Int(sims[sim][t][:markov])
        V = value_functions[(t, markov_2)]  # look up the cached node
        levels = Dict(
            "volume_PHS" => sims[sim][t][:volume_PHS].out,
            "volume_H2" => sims[sim][t][:volume_H2].out,
            "volume_BAT" => sims[sim][t][:volume_BAT].out,
        )
        c, p = SDDP.evaluate(V, levels)
        op_cost_H2[t, sim] = -p[Symbol("volume_H2")]  # water values
    end
end
op_cost_H2_df = DataFrame(op_cost_H2, :auto)
```
Thank you, I'll try it as soon as I can.
Even if it works, it'll be much slower than necessary.
Yep. There are quite a few differences behind the scenes.
This should point you in the right direction:

```julia
using SDDP
import DataFrames
import HiGHS

model = SDDP.LinearPolicyGraph(
    stages = 3,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variable(sp, 0 <= x <= 200, SDDP.State, initial_value = 200)
    @variable(sp, g_t >= 0)
    @variable(sp, g_h >= 0)
    @variable(sp, s >= 0)
    @constraint(sp, balance, x.out == x.in - g_h - s + 0.0)
    @constraint(sp, demand_constraint, g_h + g_t == 150)
    SDDP.parameterize(ω -> set_normalized_rhs(balance, ω), sp, [0, 50, 100])
    @stageobjective(sp, 50 * t * g_t)
end

SDDP.train(model; iteration_limit = 10)

simulations = SDDP.simulate(
    model,
    1,
    [:x];
    custom_recorders = Dict{Symbol,Function}(
        :x_price_out => sp -> reduced_cost(sp[:x].out),
    ),
)

results_simulate = DataFrames.DataFrame(vec([
    (
        simulation = i,
        node = data[:node_index],
        x = data[:x].out,
        x_price = data[:x_price_out],
    )
    for (i, sim) in enumerate(simulations) for data in sim
]))

V = Dict(node => SDDP.ValueFunction(model; node = node) for node in keys(model.nodes))
results = Any[]
for (i, sim) in enumerate(simulations), data in sim
    cost, price = SDDP.evaluate(V[data[:node_index]], Dict("x" => data[:x].out))
    push!(
        results,
        (
            simulation = i,
            node = data[:node_index],
            x = data[:x].out,
            x_price = price[:x],
        ),
    )
end
results_value_function = DataFrames.DataFrame(results)
```

Which method to use will depend on what price you want to compute. The two methods differ by the cheapest marginal cost of thermal ($50) because the

```julia
julia> results_simulate
3×4 DataFrame
 Row │ simulation  node   x        x_price
     │ Int64       Int64  Float64  Float64
─────┼──────────────────────────────────────
   1 │          1      1    200.0  -33.3333
   2 │          1      2    150.0    0.0
   3 │          1      3     -0.0    0.0

julia> results_value_function
3×4 DataFrame
 Row │ simulation  node   x        x_price
     │ Int64       Int64  Float64  Float64
─────┼──────────────────────────────────────
   1 │          1      1    200.0  -83.3333
   2 │          1      2    150.0    0.0
   3 │          1      3     -0.0    0.0
```
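The gap is visible in the node-1 rows above; a quick sanity check on the printed numbers (plain Julia, with the two prices copied from the tables):

```julia
# Prices for x at node 1, copied from the two result tables above.
x_price_simulate = -33.3333   # reduced_cost recorded inside SDDP.simulate
x_price_valuefun = -83.3333   # SDDP.evaluate on the cached ValueFunction
# The two methods differ by the cheapest thermal marginal cost:
# the stage objective is 50 * t * g_t, so $50 at t = 1.
@assert isapprox(x_price_valuefun - x_price_simulate, -50.0; atol = 1e-3)
```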
Is there anything actionable to do here? If not, I will close.
Yes, sorry, it all looks good to me.
No problem. Let me know if you have other questions.
Hi,
I'm struggling with my code. I had a model with 365 nodes that I used for a power dispatch problem on an electricity system. I wanted to add a Markov variable accounting for a supply disruption in gas (i.e. state 1 when imports are available, state 2 when they are not).
Since I know my laptop is not a supercomputer (32GB RAM), I decided to reduce the problem size by representing the system with only 52 temporal nodes, each of them with 2 possible states for the Markov variable, ending up with 104 nodes in total.
Still, my machine seems unable to simulate more than 10 scenarios, while it could simulate more than 500 with the 365-node version of my problem. Here Gurobi sends me "Error 10001"...
Is this normal? Is the only solution to find a computer with more RAM?
Thanks a lot for your help,
Ange
PS: here is the error message I get
```
ERROR: Gurobi Error 10001:
Stacktrace:
  [1] _check_ret
    @ C:\Users\Ange.blanchard\.julia\packages\Gurobi\FliRK\src\MOI_wrapper\MOI_wrapper.jl:393 [inlined]
  [2] Gurobi.Env(; output_flag::Int64, memory_limit::Nothing, started::Bool)
    @ Gurobi C:\Users\Ange.blanchard\.julia\packages\Gurobi\FliRK\src\MOI_wrapper\MOI_wrapper.jl:103
  [3] Env
    @ C:\Users\Ange.blanchard\.julia\packages\Gurobi\FliRK\src\MOI_wrapper\MOI_wrapper.jl:95 [inlined]
  [4] Gurobi.Optimizer(env::Nothing; enable_interrupts::Bool)
    @ Gurobi C:\Users\Ange.blanchard\.julia\packages\Gurobi\FliRK\src\MOI_wrapper\MOI_wrapper.jl:324
  [5] Optimizer (repeats 2 times)
    @ C:\Users\Ange.blanchard\.julia\packages\Gurobi\FliRK\src\MOI_wrapper\MOI_wrapper.jl:318 [inlined]
  [6] _instantiate_and_check(optimizer_constructor::Any)
    @ MathOptInterface C:\Users\Ange.blanchard\.julia\packages\MathOptInterface\cl3eR\src\instantiate.jl:94
  [7] _instantiate_and_check(optimizer_constructor::MathOptInterface.OptimizerWithAttributes)
    @ MathOptInterface C:\Users\Ange.blanchard\.julia\packages\MathOptInterface\cl3eR\src\instantiate.jl:117
  [8] instantiate(optimizer_constructor::Any; with_bridge_type::Type{Float64})
    @ MathOptInterface C:\Users\Ange.blanchard\.julia\packages\MathOptInterface\cl3eR\src\instantiate.jl:149
  [9] set_optimizer(model::Model, optimizer_constructor::Any; add_bridges::Bool)
    @ JuMP C:\Users\Ange.blanchard\.julia\packages\JuMP\yYfHy\src\optimizer_interface.jl:405
 [10] set_optimizer
    @ C:\Users\Ange.blanchard\.julia\packages\JuMP\yYfHy\src\optimizer_interface.jl:398 [inlined]
 [11] SDDP.ValueFunction(node::SDDP.Node{Tuple{Int64, Int64}})
    @ SDDP C:\Users\Ange.blanchard\.julia\packages\SDDP\OoTfR\src\visualization\value_functions.jl:113
 [12] #ValueFunction#248
    @ C:\Users\Ange.blanchard\.julia\packages\SDDP\OoTfR\src\visualization\value_functions.jl:105 [inlined]
 [13] top-level scope
    @ c:\Users\Ange.blanchard\Nextcloud\H2 SDDP\GITHUB_H2_SDDP\CODE\TRAINING_markov.jl:87
```