
How to debug unbounded models #656

Closed
hghayoom opened this issue Aug 24, 2023 · 11 comments

@hghayoom

hghayoom commented Aug 24, 2023

Hi Oscar.
I am trying to model a problem in which I need access to variables from previous stages. I made a simpler example to show the issue. I can buy materials from different sources (Buy[f = 1:NFact, i = 1:NSource, j = 1:NPaths]), but they can be delivered at different times, so I defined @variable(sp, pipeline[1:5], SDDP.State, initial_value = 0), where each Buy[c,1,r] can be delivered at a different time. I tried to follow the documentation, and the numerical stability report says the model is stable. I can get results in the version that ignores the delivery delay, but with it, I tried different things and couldn't find a solution. I even set all of the pipeline[i].out equal to simplify the model, but I still couldn't get results.
I would appreciate it if you could help me with this.

using SDDP, HiGHS, Test, Pkg, Plots, Plots.PlotMeasures, Gurobi

Horizon = 53
NFact=1
Nmat=1
NSource=7
NPaths=1
NDest=1

BUYCOST=24000
Sel_price=46000
Gamma=75/1000

model = SDDP.PolicyGraph(
    SDDP.LinearGraph(Horizon),
    bellman_function = SDDP.BellmanFunction(
        lower_bound = -50.0,
        cut_type = SDDP.MULTI_CUT,
        #cut_type=SDDP.SINGLE_CUT
    ),
    optimizer = Gurobi.Optimizer,
    
) do sp, t
    @variable(sp, 0<=Epsilon[i = 1:NDest] , SDDP.State, initial_value = 0.0)
    @variable(sp, pipeline[1:5], SDDP.State, initial_value = 0)
    @variables(sp, begin
            Production[i = 1:NFact]>=0
            pen_p[i = 1:NFact] >= 0
            pen_n[i = 1:NFact]>= 0
            pen1_p[i = 1:NFact] >= 0
            pen1_n[i = 1:NFact]>= 0

            Sell[f=1:NFact,i = 1:NDest, j = 1:NPaths] >=0# sell to Customer c in path r Sell_F1[di,ri]
            Buy[f=1:NFact,i = 1:NSource, j = 1:NPaths] >=0# Buy from mat m road r 
    end)
    @variable(sp, Demandt, SDDP.State, initial_value = 4552)

        # Add the random variable as a control variable
        @variable(sp, ε)

        Ω = [-312,188,688,1188]
        P = [9/24,12/24,2/24,1/24]
        SDDP.parameterize(sp, Ω, P) do ω
            return JuMP.fix(ε, ω)
        end
    @constraint(sp, Demandt.out == Demandt.in+ε )
    @constraint(sp, Epsilon[1].out==Demandt.in-sum(Sell[c,1,j] for c in 1:NDest for j in 1:NPaths)+Epsilon[1].in)

    @constraints(sp, begin
    [c = 1:NFact],Production[c]==sum(Sell[c,i,j] for i in 1:NDest for j in 1:NPaths)+pen_p[c] - pen_n[c]
    
    [c = 1:NFact],pipeline[5].in==Gamma*Production[c]+pen1_p[c] - pen1_n[c]
    end)
    @constraints(sp,begin
    pipeline[2].out == sum(Buy[c,1,r] for c in 1:NFact for r in 1:NPaths) #1China.Shanghai
    pipeline[3].out == sum(Buy[c,2,r] for c in 1:NFact for r in 1:NPaths) #2Indonesia.tanjungpriok
    pipeline[4].out == sum(Buy[c,3,r] for c in 1:NFact for r in 1:NPaths) #3Japan.Tokyo
    pipeline[2].out == sum(Buy[c,4,r] for c in 1:NFact for r in 1:NPaths) #4Australia.Fremantle
    pipeline[3].out == sum(Buy[c,5,r] for c in 1:NFact for r in 1:NPaths) #5Russia.Murmansk.Panama Canal
    pipeline[4].out == sum(Buy[c,6,r] for c in 1:NFact for r in 1:NPaths) #6canada	Vancouver
    pipeline[4].out == sum(Buy[c,7,r] for c in 1:NFact for r in 1:NPaths) #7China Arctic

       end)
    @constraint(sp, [i=2:5], pipeline[i].out == pipeline[i-1].in)
    @stageobjective(sp,
                sum(1e6*pen_p[i] for i in 1:NFact)
                +sum(1e6*pen_n[i] for i in 1:NFact)
                +sum(1e6*pen1_p[i] for i in 1:NFact)
                +sum(1e6*pen1_n[i] for i in 1:NFact)
                +1e6*Epsilon[1].out

                +sum(Buy[c,m,r]*BUYCOST for c in 1:NFact for r in 1:NPaths for m in 1:NSource)/1000
                -sum(Sell[f,m,r]*Sel_price for f in 1:NFact for r in 1:NPaths for m in 1:NDest)/1000

        )

end
SDDP.numerical_stability_report(model)
SDDP.train(model, iteration_limit = 5, print_level = 2, log_frequency = 5)
simulations = SDDP.simulate(
    # The trained model to simulate.
    model,
    # The number of replications.
    20,
    # A list of names to record the values of.
    sampling_scheme = SDDP.InSampleMonteCarlo(
            terminate_on_cycle = false,
            terminate_on_dummy_leaf = true,
    ),)

@odow
Owner

odow commented Aug 24, 2023

Let me walk you through my thought process for how I debug your model:

  1. I run the code to check I can reproduce the bug, and I do this after each step.
  2. I tidy the formatting. "Clean" code is much easier to reason about. I put all the constraints together. I put all the variables together. I delete all unnecessary lines. I look for simplifications, like replacing sum(1e6*pen_p[i] for i in 1:NFact) with 1e6 * sum(pen_p). The fewer characters there are, the less room for bugs.
  3. I remove all specialized options. Where possible, just stick with the defaults. If it doesn't work for the defaults, it likely won't work if you try multi-cut etc.
  4. The termination status is INFEASIBLE_OR_UNBOUNDED. To check whether the model is infeasible or unbounded, we can try solving without an objective (see the sketch after this list). If the model is unbounded, it should now return a feasible solution.
  5. After commenting out the objective, the model runs for a few iterations. Therefore, we know the problem is unbounded. That must mean that a variable in the objective can grow without limit.
  6. We're minimizing, so variables with a positive coefficient want to be small, and variables with a negative coefficient want to be big. Buy, pen_p, pen_n, pen1_p, pen1_n, and Epsilon[1].out all have positive coefficients, but they have a lower bound of 0, so they can't be the problem.
  7. Sell has a negative coefficient and no upper bound. So it must be the problem.

All of this means that your model has a formulation error.
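A minimal sketch of step 4, assuming you edit the subproblem body directly:

# Inside the `do sp, t` block, temporarily replace the real objective with a
# constant, so each subproblem is solved for feasibility only:
# @stageobjective(sp, ...)  # the original objective, commented out
@stageobjective(sp, 0)

If SDDP.train now makes progress, the subproblems are feasible, so the original model must be unbounded: some variable that appears in the objective is missing a bound or a linking constraint.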

@odow changed the title from "issue for accessing variables from a previous stage" to "How to debug unbounded models" on Aug 24, 2023
@odow
Owner

odow commented Aug 24, 2023

Also, where did you and @bolbolim find the suggestion to use SDDP.BellmanFunction?

Just do instead:

model = SDDP.LinearPolicyGraph(
    stages = Horizon,
    sense = :Min,
    lower_bound = -50.0,
    optimizer = Gurobi.Optimizer,    
) do sp, t

You can specify MULTI_CUT using the cut_type keyword argument in train: https://odow.github.io/SDDP.jl/stable/apireference/#SDDP.train
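For example, something like this (a sketch; see the linked documentation for the exact keyword arguments):

SDDP.train(
    model;
    iteration_limit = 5,
    cut_type = SDDP.MULTI_CUT,
)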

@hghayoom
Author

Hi Oscar,
Thanks for spending the time to debug it. You are honestly amazing.
The reason for the untidiness is that I started modeling the problem from a small example, and I didn't know how big it would get, so at each step I added another constraint and variable. But you are correct: I will definitely make a tidy version. It helps a lot.

@hghayoom
Author

Also, where did you and @bolbolim find the suggestion to use SDDP.BellmanFunction?


That is interesting. I think I got it from one of your examples, but I searched all of your examples and couldn't find it. I used it once, and after I saw that it worked, I just copied it into the next examples. I will change it for the next runs. Thanks for catching that.

@hghayoom
Author

Thanks for teaching me how to debug. I think I found the problem: I was setting a single variable equal to several different values, rather than summing them, in this constraint:

@constraints(sp,begin
    pipeline[2].out == sum(Buy[c,1,r] for c in 1:NFact for r in 1:NPaths) #1China.Shanghai
    pipeline[3].out == sum(Buy[c,2,r] for c in 1:NFact for r in 1:NPaths) #2Indonesia.tanjungpriok
    pipeline[4].out == sum(Buy[c,3,r] for c in 1:NFact for r in 1:NPaths) #3Japan.Tokyo
    pipeline[2].out == sum(Buy[c,4,r] for c in 1:NFact for r in 1:NPaths) #4Australia.Fremantle
    pipeline[3].out == sum(Buy[c,5,r] for c in 1:NFact for r in 1:NPaths) #5Russia.Murmansk.Panama Canal
    pipeline[4].out == sum(Buy[c,6,r] for c in 1:NFact for r in 1:NPaths) #6canada	Vancouver
    pipeline[4].out == sum(Buy[c,7,r] for c in 1:NFact for r in 1:NPaths) #7China Arctic

       end)

For example, I used

 pipeline[4].out

three times and set it equal to three different expressions.
After this modification, I could get results.
Thank you so much.

using SDDP, HiGHS, Test, Pkg, Plots, Plots.PlotMeasures, Gurobi

Horizon = 53
NFact=1
Nmat=1
NSource=7
NPaths=1
NDest=1

BUYCOST=24000
Sel_price=46000
Gamma=75/1000

model = SDDP.LinearPolicyGraph(
    stages = Horizon,
    sense = :Min,
    lower_bound = -50.0,
    optimizer = Gurobi.Optimizer,    
) do sp, t
    @variable(sp, 0<=Epsilon[i = 1:NDest] , SDDP.State, initial_value = 0.0)
    @variable(sp, pipeline[1:5], SDDP.State, initial_value = 0)
    @variables(sp, begin
            Production[i = 1:NFact]>=0
            pen_p[i = 1:NFact] >= 0
            pen_n[i = 1:NFact]>= 0
            pen1_p[i = 1:NFact] >= 0
            pen1_n[i = 1:NFact]>= 0

            Sell[f=1:NFact,i = 1:NDest, j = 1:NPaths] >=0# sell to Customer c in path r Sell_F1[di,ri]
            Buy[f=1:NFact,i = 1:NSource, j = 1:NPaths] >=0# Buy from mat m road r 
    end)
    @variable(sp, Demandt, SDDP.State, initial_value = 4552)

        # Add the random variable as a control variable
        @variable(sp, ε)

        Ω = [-312,188,688,1188]
        P = [9/24,12/24,2/24,1/24]
        SDDP.parameterize(sp, Ω, P) do ω
            return JuMP.fix(ε, ω)
        end
    @constraint(sp, Demandt.out == Demandt.in+ε )
    @constraint(sp, Epsilon[1].out==Demandt.in-sum(Sell[c,1,j] for c in 1:NDest for j in 1:NPaths)+Epsilon[1].in)

    @constraints(sp, begin
    [c = 1:NFact],Production[c]==sum(Sell[c,i,j] for i in 1:NDest for j in 1:NPaths)+pen_p[c] - pen_n[c]
    
    [c = 1:NFact],pipeline[5].in==Gamma*Production[c]+pen1_p[c] - pen1_n[c]
    end)
    @constraints(sp,begin
    pipeline[2].out == sum(Buy[c,1,r] for c in 1:NFact for r in 1:NPaths) #1China.Shanghai
    pipeline[3].out == sum(Buy[c,2,r] for c in 1:NFact for r in 1:NPaths) #2Indonesia.tanjungpriok
    pipeline[4].out == sum(Buy[c,3,r] for c in 1:NFact for r in 1:NPaths) #3Japan.Tokyo
    pipeline[1].out == sum(Buy[c,4,r] for c in 1:NFact for r in 1:NPaths)+sum(Buy[c,7,r] for c in 1:NFact for r in 1:NPaths) #4Australia.Fremantle
    pipeline[5].out == sum(Buy[c,5,r] for c in 1:NFact for r in 1:NPaths)+sum(Buy[c,6,r] for c in 1:NFact for r in 1:NPaths) #5Russia.Murmansk.Panama Canal
    # pipeline[4].out == sum(Buy[c,6,r] for c in 1:NFact for r in 1:NPaths) #6canada	Vancouver
    # pipeline[4].out == sum(Buy[c,7,r] for c in 1:NFact for r in 1:NPaths) #7China Arctic

       end)
    @constraint(sp, [i=2:5], pipeline[i].out == pipeline[i-1].in)
    @stageobjective(sp,
                sum(1e6*pen_p[i] for i in 1:NFact)
                +sum(1e6*pen_n[i] for i in 1:NFact)
                +sum(1e6*pen1_p[i] for i in 1:NFact)
                +sum(1e6*pen1_n[i] for i in 1:NFact)
                +1e6*Epsilon[1].out

                +sum(Buy[c,m,r]*BUYCOST for c in 1:NFact for r in 1:NPaths for m in 1:NSource)/1000
                -sum(Sell[f,m,r]*Sel_price for f in 1:NFact for r in 1:NPaths for m in 1:NDest)/1000

        )

end
SDDP.numerical_stability_report(model)
SDDP.train(model, iteration_limit = 5, print_level = 2, log_frequency = 5)
simulations = SDDP.simulate(
    # The trained model to simulate.
    model,
    # The number of replications.
    20,
    # A list of names to record the values of.
    sampling_scheme = SDDP.InSampleMonteCarlo(
            terminate_on_cycle = false,
            terminate_on_dummy_leaf = true,
    ),)

@odow
Owner

odow commented Aug 25, 2023

Thanks for spending time to debug it. You are honestly amazing.

No problem.

I'm still evolving the style of how I write models, but this is what I've been going for recently. I find it helpful to name every state variable with x_, every control variable with u_, every random variable with ω_, and penalty variables with Δ_.

I've also started using UPPERCASE for constants, although you could also use something like c_buy_cost.

using SDDP, Gurobi

T_MAX = 53
N_FACILITIES = 1
N_SOURCE = 7
N_PATHS = 1
N_DEST = 1
BUY_COST = 24_000
SELL_PRICE = 46_000
GAMMA = 75 / 1_000

model = SDDP.LinearPolicyGraph(
    stages = T_MAX,
    sense = :Min,
    lower_bound = -50.0,
    optimizer = Gurobi.Optimizer,    
) do sp, t
    @variables(sp, begin
        x_epsilon[1:N_DEST] >= 0, SDDP.State, (initial_value = 0)
        x_pipeline[1:5], SDDP.State, (initial_value = 0)
        x_demand_t, SDDP.State, (initial_value = 4552)
        u_production[1:N_FACILITIES] >= 0
        u_sell[1:N_FACILITIES, 1:N_DEST, 1:N_PATHS] >= 0
        u_buy[1:N_FACILITIES, 1:N_SOURCE, 1:N_PATHS] >= 0
        Δ_p[1:N_FACILITIES] >= 0
        Δ_n[1:N_FACILITIES] >= 0
        Δ1_p[1:N_FACILITIES] >= 0
        Δ1_n[1:N_FACILITIES] >= 0
        ω_ε
    end)
    @stageobjective(sp,
        1e6 * sum(Δ_p + Δ_n + Δ1_p + Δ1_n) +
        1e6 * x_epsilon[1].out +
        1e-3 * BUY_COST * sum(u_buy) -
        1e-3 * SELL_PRICE * sum(u_sell)
    )
    @constraints(sp,begin
        x_demand_t.out == x_demand_t.in + ω_ε 
        x_epsilon[1].out == x_demand_t.in - sum(u_sell[:,1,:]) + x_epsilon[1].in
        [c = 1:N_FACILITIES], u_production[c] == sum(u_sell[c,:,:]) + Δ_p[c] - Δ_n[c]
        [c = 1:N_FACILITIES], x_pipeline[5].in == GAMMA * u_production[c] + Δ1_p[c] - Δ1_n[c]
        [i = 2:5], x_pipeline[i].out == x_pipeline[i-1].in
        x_pipeline[1].out == sum(u_buy[:,4,:]) + sum(u_buy[:,7,:])
        x_pipeline[2].out == sum(u_buy[:,1,:])
        x_pipeline[3].out == sum(u_buy[:,2,:])
        x_pipeline[4].out == sum(u_buy[:,3,:])
        x_pipeline[5].out == sum(u_buy[:,5,:]) + sum(u_buy[:,6,:])
    end)
    Ω = [-312, 188, 688, 1188]
    P = [9, 12, 2, 1] / 24
    SDDP.parameterize(sp, Ω, P) do ω
        return JuMP.fix(ω_ε, ω)
    end
end
SDDP.train(model; iteration_limit = 5)
simulations = SDDP.simulate(model, 20)

@hghayoom
Author

This style of writing code is very helpful, and I will start following it.
You're the best, man.

@odow
Owner

odow commented Aug 25, 2023

I'll point you (and future readers) to this section of the JuMP docs, which provides good advice for debugging unbounded models (as well as other failures): https://jump.dev/JuMP.jl/stable/tutorials/getting_started/debugging/#Debugging-an-unbounded-model
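For future readers, here is a minimal sketch of the bound-tightening trick from that tutorial, using a hypothetical toy model (not the model from this issue):

using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
@variable(model, x >= 0)
@objective(model, Min, -2x)
optimize!(model)
termination_status(model)  # typically DUAL_INFEASIBLE, that is, unbounded
# Add a large artificial bound and re-solve. A variable that ends up at the
# artificial bound is one that wants to grow without limit.
set_upper_bound(x, 1e8)
optimize!(model)
value(x)  # 1.0e8, so x is the culprit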

@odow
Owner

odow commented Aug 26, 2023

Closing because this seems resolved. Feel free to re-open (or start a new thread) if you have other questions.

@odow odow closed this as completed Aug 26, 2023
@hghayoom
Author

Hi again Oscar.
I am trying to record some of the variables in the simulation, for example u_production[1]:

simulations = SDDP.simulate(model, 20,[:u_production[1]])

I can only record these values if I first store each one in a temporary variable in the body of the model via a constraint, for example:

@variable(sp, TempProd1 >= 0)
@constraint(sp, TempProd1 == u_production[1])

and then

simulations = SDDP.simulate(model, 20,[:TempProd1])

Because I have a lot of variables and I want to check the accuracy of their values (and especially use them for the publication plot), I have had to define a lot of these temporary variables.

I tried multiple things but couldn't find a solution. I would appreciate your help.

Bests,
Hadi

@odow
Owner

odow commented Aug 29, 2023

Do instead:

simulations = SDDP.simulate(model, 20, [:u_production])
simulations[1][1][:u_production][1]
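Here simulations[i][t] is the dictionary of values recorded for replication i and stage t, so simulations[1][1][:u_production][1] is the first element of u_production in stage 1 of the first replication. Record the whole container by its symbol, then index into the result afterwards; there is no need for temporary variables.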
