Implement multi-cut #185

Merged · 9 commits · Mar 21, 2019
14 changes: 14 additions & 0 deletions docs/src/tutorial/15_performance.md
@@ -36,3 +36,17 @@ speed-ups.
pay to change these. In particular, forcing solvers to use the dual simplex
algorithm (e.g., [`Method=1` in Gurobi](https://www.gurobi.com/documentation/8.1/refman/method.html)
) is usually a performance win.
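
For instance, a minimal sketch of what that looks like here (assuming Gurobi.jl; the `Method` parameter comes from the linked Gurobi documentation, and the `OutputFlag = 0` pattern mirrors the powder example further down this PR):

```julia
using SDDP, Gurobi

# A toy two-stage model: the only point of interest is the optimizer line,
# which forces Gurobi to use the dual simplex algorithm (Method = 1).
model = SDDP.LinearPolicyGraph(
    stages = 2,
    bellman_function = SDDP.BellmanFunction(lower_bound = 0.0),
    optimizer = with_optimizer(Gurobi.Optimizer, Method = 1, OutputFlag = 0)
) do sp, t
    @variable(sp, x >= 0, SDDP.State, initial_value = 1.0)
    @constraint(sp, x.out <= x.in)
    @stageobjective(sp, t * x.out)
end
```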

## Average cut vs multi-cut

There are two competing ways that cuts can be created in SDDP: _average_ cut and
_multi_-cut. These can be specified as follows:
```julia
SDDP.train(model; cut_type = SDDP.AVERAGE_CUT)
SDDP.train(model; cut_type = SDDP.MULTI_CUT)
```

The performance of each method is problem-dependent. We recommend that you try
both to see which one performs better. In general, the _average_ cut method
works better when the number of realizations of the stagewise-independent
random variable is large, because _multi_-cut adds a separate cut for every
realization at each iteration and the subproblems grow correspondingly quickly.
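
To make that recommendation concrete, here is a sketch of how one might compare the two (the toy hydro-thermal model is invented for illustration; the keywords are the ones used throughout this PR):

```julia
using SDDP, GLPK

# Build a fresh copy of a small hydro-thermal model for a given cut type.
function build_model(cut_type)
    return SDDP.PolicyGraph(SDDP.LinearGraph(3),
        bellman_function = SDDP.BellmanFunction(
            lower_bound = 0.0, cut_type = cut_type),
        optimizer = with_optimizer(GLPK.Optimizer)
    ) do sp, t
        @variable(sp, 0 <= volume <= 10, SDDP.State, initial_value = 5.0)
        @variables(sp, begin
            thermal >= 0
            hydro >= 0
            inflow
        end)
        @constraints(sp, begin
            volume.out == volume.in - hydro + inflow
            hydro + thermal == 6.0  # demand in each stage
        end)
        @stageobjective(sp, 10.0 * thermal)
        # Three equally likely stagewise-independent inflows.
        SDDP.parameterize(sp, [0.0, 2.0, 4.0]) do ω
            JuMP.fix(inflow, ω)
        end
    end
end

for cut_type in (SDDP.AVERAGE_CUT, SDDP.MULTI_CUT)
    model = build_model(cut_type)
    SDDP.train(model; iteration_limit = 20, print_level = 0)
    println(cut_type, ": bound = ", SDDP.calculate_bound(model))
end
```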
1 change: 1 addition & 0 deletions docs/src/upgrading_guide.md
@@ -5,6 +5,7 @@ v1.0.

Some of the highlights of the new release include

- Support for "multi-cut"s
- Support for stagewise-independent noise in the constraint matrix
- A way to simulate out-of-sample realizations of the stagewise-independent
noise terms
2 changes: 1 addition & 1 deletion examples/FAST_hydro_thermal.jl
@@ -12,7 +12,7 @@ using SDDP, GLPK, Test

function fast_hydro_thermal()
model = SDDP.PolicyGraph(SDDP.LinearGraph(2),
bellman_function = SDDP.AverageCut(lower_bound = 0.0),
bellman_function = SDDP.BellmanFunction(lower_bound = 0.0),
optimizer = with_optimizer(GLPK.Optimizer)
) do sp, t
@variable(sp, 0 <= x <= 8, SDDP.State, initial_value = 0.0)
8 changes: 5 additions & 3 deletions examples/FAST_production_management.jl
@@ -10,14 +10,15 @@

using SDDP, GLPK, Test

function fast_production_management()
function fast_production_management(; cut_type)
DEMAND = [2, 10]
H = 3
N = 2
C = [0.2, 0.7]
S = 2 .+ [0.33, 0.54]
model = SDDP.PolicyGraph(SDDP.LinearGraph(H),
bellman_function = SDDP.AverageCut(lower_bound = -50.0),
bellman_function = SDDP.BellmanFunction(
lower_bound = -50.0, cut_type = cut_type),
optimizer = with_optimizer(GLPK.Optimizer)
) do sp, t
@variable(sp, x[1:N] >= 0, SDDP.State, initial_value = 0.0)
@@ -38,4 +39,5 @@ function fast_production_management()
@test SDDP.calculate_bound(model) ≈ -23.96 atol = 1e-2
end

fast_production_management()
fast_production_management(cut_type = SDDP.AVERAGE_CUT)
fast_production_management(cut_type = SDDP.MULTI_CUT)
2 changes: 1 addition & 1 deletion examples/FAST_quickstart.jl
@@ -12,7 +12,7 @@ using SDDP, GLPK, Test

function fast_quickstart()
model = SDDP.PolicyGraph(SDDP.LinearGraph(2),
bellman_function = SDDP.AverageCut(lower_bound = -5),
bellman_function = SDDP.BellmanFunction(lower_bound = -5),
optimizer = with_optimizer(GLPK.Optimizer)
) do sp, t
@variable(sp, x >= 0, SDDP.State, initial_value = 0.0)
2 changes: 1 addition & 1 deletion examples/StochDynamicProgramming.jl_multistock.jl
@@ -23,7 +23,7 @@ using SDDP, GLPK, Test
function test_multistock_example()
model = SDDP.PolicyGraph(SDDP.LinearGraph(5),
optimizer = with_optimizer(GLPK.Optimizer),
bellman_function = SDDP.AverageCut(lower_bound = -5)
bellman_function = SDDP.BellmanFunction(lower_bound = -5)
) do subproblem, stage
@variable(subproblem,
0 <= stock[i=1:3] <= 1, SDDP.State, initial_value = 0.5)
2 changes: 1 addition & 1 deletion examples/StochDynamicProgramming.jl_stock.jl
@@ -22,7 +22,7 @@ using SDDP, GLPK, Test

function stock_example()
model = SDDP.PolicyGraph(SDDP.LinearGraph(5),
bellman_function = SDDP.AverageCut(lower_bound = -2),
bellman_function = SDDP.BellmanFunction(lower_bound = -2),
optimizer = with_optimizer(GLPK.Optimizer)
) do sp, stage

2 changes: 1 addition & 1 deletion examples/StructDualDynProg.jl_prob5.2_2stages.jl
@@ -12,7 +12,7 @@ using SDDP, GLPK, Test

function test_prob52_2stages()
model = SDDP.PolicyGraph(SDDP.LinearGraph(2),
bellman_function = SDDP.AverageCut(lower_bound = 0.0),
bellman_function = SDDP.BellmanFunction(lower_bound = 0.0),
optimizer = with_optimizer(GLPK.Optimizer)
) do subproblem, stage
# ========== Problem data ==========
2 changes: 1 addition & 1 deletion examples/StructDualDynProg.jl_prob5.2_3stages.jl
@@ -12,7 +12,7 @@ using SDDP, GLPK, Test

function test_prob52_3stages()
model = SDDP.PolicyGraph(SDDP.LinearGraph(3),
bellman_function = SDDP.AverageCut(lower_bound = 0.0),
bellman_function = SDDP.BellmanFunction(lower_bound = 0.0),
optimizer = with_optimizer(GLPK.Optimizer)
) do sp, t
n = 4
2 changes: 1 addition & 1 deletion examples/agriculture_mccardle_farm.jl
@@ -55,7 +55,7 @@ function test_mccardle_farm_model()
])

model = SDDP.PolicyGraph(graph,
bellman_function = SDDP.AverageCut(lower_bound = 0.0),
bellman_function = SDDP.BellmanFunction(lower_bound = 0.0),
optimizer = with_optimizer(GLPK.Optimizer)
) do subproblem, index
stage, weather = index
2 changes: 1 addition & 1 deletion examples/asset_management_simple.jl
@@ -21,7 +21,7 @@ function asset_management_simple()
[0.5 0.5; 0.5 0.5],
[0.5 0.5; 0.5 0.5]
]),
bellman_function = SDDP.AverageCut(lower_bound = -1_000.0),
bellman_function = SDDP.BellmanFunction(lower_bound = -1_000.0),
optimizer = with_optimizer(GLPK.Optimizer)
) do subproblem, index
(stage, markov_state) = index
64 changes: 64 additions & 0 deletions examples/asset_management_stagewise.jl
@@ -0,0 +1,64 @@
# Copyright 2018, Oscar Dowson
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
#############################################################################

#==
Modified version of the Asset Management problem taken from

J. R. Birge, F. Louveaux, Introduction to Stochastic Programming,
Springer Series in Operations Research and Financial Engineering,
Springer New York, New York, NY, 2011
==#

using SDDP, GLPK, Test

function asset_management_stagewise(; cut_type)
ws = [1.25, 1.06]
wb = [1.14, 1.12]
Phi = [-1, 5]
Psi = [0.02, 0.0]

model = SDDP.MarkovianPolicyGraph(
sense = :Max,
transition_matrices = Array{Float64, 2}[
[1.0]', [0.5 0.5], [0.5 0.5; 0.5 0.5], [0.5 0.5; 0.5 0.5]],
bellman_function = SDDP.BellmanFunction(
upper_bound = 1000.0, cut_type = cut_type),
optimizer = with_optimizer(GLPK.Optimizer)) do subproblem, node
t, i = node
@variable(subproblem, xs >= 0, SDDP.State, initial_value=0)
@variable(subproblem, xb >= 0, SDDP.State, initial_value=0)
if t == 1
@constraint(subproblem, xs.out + xb.out == 55 + xs.in + xb.in)
@stageobjective(subproblem, 0)
elseif t == 2 || t == 3
@variable(subproblem, phi)
@constraint(subproblem, ws[i] * xs.in + wb[i] * xb.in + phi == xs.out + xb.out)
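# `phi` is the stagewise-independent shock: SDDP.parameterize draws ω = 1
# (prob. 0.6) or ω = 2 (prob. 0.4), fixes `phi` to Phi[ω] via JuMP.fix, and
# installs the matching stage objective.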
SDDP.parameterize(subproblem, [1, 2], [0.6, 0.4]) do ω
JuMP.fix(phi, Phi[ω])
@stageobjective(subproblem, Psi[ω] * xs.out)
end
else
@variable(subproblem, u >= 0)
@variable(subproblem, v >= 0)
@constraint(subproblem, ws[i] * xs.in + wb[i] * xb.in + u - v == 80)
@stageobjective(subproblem, -4u + v)
end
end
SDDP.train(model;
iteration_limit = 100,
print_level = 0,
risk_measure = (node) -> begin
if node[1] != 3
SDDP.Expectation()
else
SDDP.EAVaR(lambda = 0.5, beta=0.5)
end
end)
@test SDDP.calculate_bound(model) ≈ 1.278 atol=1e-3
end

asset_management_stagewise(cut_type = SDDP.AVERAGE_CUT)
asset_management_stagewise(cut_type = SDDP.MULTI_CUT)
2 changes: 1 addition & 1 deletion examples/biobjective_hydro.jl
@@ -7,7 +7,7 @@ using SDDP, GLPK, Statistics, Test

function biobjective_hydro()
model = SDDP.PolicyGraph(SDDP.LinearGraph(3),
bellman_function = SDDP.AverageCut(lower_bound = 0.0),
bellman_function = SDDP.BellmanFunction(lower_bound = 0.0),
optimizer = with_optimizer(GLPK.Optimizer)
) do subproblem, stage
@variable(subproblem, 0 <= v <= 200, SDDP.State, initial_value = 50)
2 changes: 1 addition & 1 deletion examples/complicated_hydro.jl
@@ -10,7 +10,7 @@ const POWER_KNOTS = [55.0, 65.0, 70.0]
model = SDDP.LinearPolicyGraph(
stages = T,
sense = :Min,
bellman_function = SDDP.AverageCut(
bellman_function = SDDP.BellmanFunction(
lower_bound = 0,
deletion_minimum = 1_000_000
),
2 changes: 1 addition & 1 deletion examples/infinite_horizon/ball_on_beam.jl
@@ -13,7 +13,7 @@ function infinite_ball_on_beam()
[(:root_node => :time_step, 1.0), (:time_step => :time_step, 0.999)]
)
model = SDDP.PolicyGraph(graph,
bellman_function = SDDP.AverageCut(lower_bound = 0.0),
bellman_function = SDDP.BellmanFunction(lower_bound = 0.0),
optimizer = with_optimizer(Ipopt.Optimizer, print_level = 0),
direct_mode = false,
sense = :Min
2 changes: 1 addition & 1 deletion examples/infinite_horizon/powder.jl
@@ -44,7 +44,7 @@ function infinite_powder(; discount_factor = 0.75, stocking_rate::Float64 = NaN,

model = SDDP.PolicyGraph(graph,
sense = :Max,
bellman_function = SDDP.AverageCut(upper_bound = 1e5),
bellman_function = SDDP.BellmanFunction(upper_bound = 1e5),
optimizer = with_optimizer(Gurobi.Optimizer, OutputFlag = 0)
) do subproblem, index
# Unpack the node index.
8 changes: 5 additions & 3 deletions examples/infinite_horizon_hydro_thermal.jl
@@ -5,7 +5,7 @@

using SDDP, GLPK, Test, Statistics

function infinite_hydro_thermal()
function infinite_hydro_thermal(; cut_type)
Ω = [
(inflow = 0.0, demand = 7.5),
(inflow = 5.0, demand = 5),
@@ -17,7 +17,8 @@ function infinite_hydro_thermal()
[(:root_node => :week, 1.0), (:week => :week, 0.9)]
)
model = SDDP.PolicyGraph(graph,
bellman_function = SDDP.AverageCut(lower_bound = 0),
bellman_function = SDDP.BellmanFunction(
lower_bound = 0, cut_type = cut_type),
optimizer = with_optimizer(GLPK.Optimizer)
) do subproblem, node
@variable(subproblem,
@@ -55,4 +56,5 @@ function infinite_hydro_thermal()
@test sample_mean - sample_ci <= 119.167 <= sample_mean + sample_ci
end

infinite_hydro_thermal()
infinite_hydro_thermal(cut_type = SDDP.AVERAGE_CUT)
infinite_hydro_thermal(cut_type = SDDP.MULTI_CUT)
2 changes: 1 addition & 1 deletion examples/infinite_horizon_trivial.jl
@@ -12,7 +12,7 @@ function infinite_trivial()
[(:root_node => :week, 1.0), (:week => :week, 0.9)]
)
model = SDDP.PolicyGraph(graph,
bellman_function = SDDP.AverageCut(lower_bound = 0),
bellman_function = SDDP.BellmanFunction(lower_bound = 0),
optimizer = with_optimizer(GLPK.Optimizer)
) do subproblem, node
@variable(subproblem, state, SDDP.State, initial_value = 0)
4 changes: 2 additions & 2 deletions examples/inventory_management.jl
@@ -12,7 +12,7 @@ function infinite_lin_HD()
[(:root_node => :week, 1.0), (:week => :week, 0.95)]
)
model = SDDP.PolicyGraph(graph,
bellman_function = SDDP.AverageCut(lower_bound = 0),
bellman_function = SDDP.BellmanFunction(lower_bound = 0),
optimizer = with_optimizer(GLPK.Optimizer)
) do subproblem, node

@@ -63,7 +63,7 @@ function infinite_lin_DH()
]
)
model = SDDP.PolicyGraph(graph,
bellman_function = SDDP.AverageCut(lower_bound = 0),
bellman_function = SDDP.BellmanFunction(lower_bound = 0),
optimizer = with_optimizer(GLPK.Optimizer)
) do subproblem, node
@variable(subproblem, -10 <= state <= 10, SDDP.State, initial_value = 0)
27 changes: 13 additions & 14 deletions examples/objective_state_newsvendor.jl
@@ -31,11 +31,11 @@ function joint_distribution(; kwargs...)
return distribution[:]
end

function newsvendor_example()
function newsvendor_example(; cut_type)
model = SDDP.PolicyGraph(
SDDP.LinearGraph(3),
sense = :Max,
bellman_function = SDDP.AverageCut(upper_bound = 50.0),
upper_bound = 50.0,
optimizer = with_optimizer(GLPK.Optimizer)
) do subproblem, stage
@variables(subproblem, begin
@@ -57,17 +57,16 @@ function newsvendor_example()
@stageobjective(subproblem, price * u)
end
end
return model
SDDP.train(model,
iteration_limit = 100, print_level = 0, time_limit = 20.0,
cut_type = cut_type)
@test SDDP.calculate_bound(model) ≈ 4.04 atol=0.05
results = SDDP.simulate(model, 500)
objectives = [
sum(s[:stage_objective] for s in simulation) for simulation in results
]
@test round(Statistics.mean(objectives); digits = 2) ≈ 4.04 atol=0.1
end

model = newsvendor_example()
SDDP.train(model, iteration_limit = 100, print_level = 0)

results = SDDP.simulate(model, 500)
objectives = [
sum(s[:stage_objective] for s in simulation) for simulation in results
]
sample_mean = round(Statistics.mean(objectives); digits = 2)
sample_ci = round(1.96 * Statistics.std(objectives) / sqrt(500); digits = 2)
println("Confidence_interval = $(sample_mean) ± $(sample_ci)")
@test SDDP.calculate_bound(model) ≈ sample_mean atol = sample_ci
newsvendor_example(cut_type = SDDP.AVERAGE_CUT)
newsvendor_example(cut_type = SDDP.MULTI_CUT)
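
A closing editorial note: across the hunks above, `cut_type` is accepted in two places, the `SDDP.BellmanFunction` constructor (most of the examples) and the `SDDP.train` call (the docs hunk and this newsvendor example). A sketch of both routes, assuming they are interchangeable (the PR itself does not say which takes precedence if they disagree):

```julia
using SDDP, GLPK

# Option 1: pick the cut type when the model is built.
model = SDDP.PolicyGraph(SDDP.LinearGraph(2),
    bellman_function = SDDP.BellmanFunction(
        lower_bound = 0.0, cut_type = SDDP.MULTI_CUT),
    optimizer = with_optimizer(GLPK.Optimizer)
) do sp, t
    @variable(sp, x >= 0, SDDP.State, initial_value = 1.0)
    @constraint(sp, x.out <= x.in)
    @stageobjective(sp, t * x.out)
end

# Option 2: pick (or switch) the cut type when training.
SDDP.train(model; iteration_limit = 5, print_level = 0,
    cut_type = SDDP.AVERAGE_CUT)
```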