Markov State behavior #120
Comments
If you want two Markov states in the first stage, then the first element of the markov_transition vector should be a 1x2 matrix giving the distribution over the initial Markov states. For example, here is the transition matrix for a problem with two stages and two Markov states within each stage:

transition = Array{Float64, 2}[
    [ 0.5 0.5 ],
    [ 0.6 0.4 ; 0.4 0.6]
]
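To make the role of that first 1x2 element concrete, here is a minimal sketch (not SDDP.jl internals; `sample_state` and `simulate_chain` are hypothetical helpers written for illustration) of how such a transition list drives one forward pass through the Markov chain:

```julia
# Hypothetical helper: draw a state index from a probability vector
# using an inverse-CDF draw.
function sample_state(probs::AbstractVector{Float64})
    r, cum = rand(), 0.0
    for (i, p) in enumerate(probs)
        cum += p
        r <= cum && return i
    end
    return length(probs)
end

# The transition list from the comment above: a 1x2 initial
# distribution, then a 2x2 transition matrix for the second stage.
transition = Array{Float64, 2}[
    [ 0.5 0.5 ],
    [ 0.6 0.4 ; 0.4 0.6]
]

# Simulate one forward pass. The first-stage state is drawn from row 1
# of the first matrix; each later stage conditions on the row indexed
# by the previous state.
function simulate_chain(transition)
    path = Int[]
    state = sample_state(transition[1][1, :])
    push!(path, state)
    for t in 2:length(transition)
        state = sample_state(transition[t][state, :])
        push!(path, state)
    end
    return path
end

path = simulate_chain(transition)
```

Because the first element is 1x2 rather than a single entry, roughly half of the simulated paths start in state 2.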
You have two options. The first programmatically returns the list of valid cuts for a given subproblem:

STAGE, MARKOV = 1, 2
sp = SDDP.getsubproblem(m, STAGE, MARKOV)
oracle = SDDP.cutoracle(sp)
cuts = SDDP.validcuts(oracle)

The second is to pass the cut_output_file keyword:

solve(m, cut_output_file = "mycuts.csv")

This will produce a CSV file containing a list of all the cuts.
Also, if you have some examples with multiple Markov states in the first stage, it would be great to include them in the library. I'm always looking for new models.
I am trying to make a model with several Markov states in the first stage, and I have taken the transition matrix into account. All matrices in the model are 5x5, so in the first stage there are 5 Markov states. The problem is that if the SDDP algorithm makes N forward passes, all N associated scenarios start with the Markov state in the initial stage equal to 1. I want the initial Markov state (first stage) to be drawn with a uniform distribution over the 5 possible states. When I get the model working, I'll gladly include it in the library. Thanks!
You need something like this:

transition = Array{Float64, 2}[]
push!(transition, [0.2 0.2 0.2 0.2 0.2])
for t in 2:T
    push!(transition, [ ... the 5x5 matrix ... ])
end
m = SDDPModel(
    markov_transition = transition
) do sp, t, i
end

I should disable the ability to pass a single matrix, or provide a better constructor for this, e.g.:

m = SDDPModel(
    markov_transition = [ 5x5 matrix ],
    initial_markov_probability = fill(0.2, 5)
) do sp, t, i
end
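The initial_markov_probability keyword above is only a proposal; it can be emulated today by building the transition list by hand. A minimal sketch, where T and the 5x5 matrix P are placeholders chosen for illustration:

```julia
T  = 3                  # number of stages (assumed for illustration)
P  = fill(0.2, 5, 5)    # placeholder 5x5 transition matrix
p0 = fill(0.2, 5)       # uniform distribution over the 5 initial states

# Emulate `initial_markov_probability`: stage 1 gets a 1x5 matrix built
# from the initial distribution; every later stage reuses the 5x5 matrix.
transition = Array{Float64, 2}[reshape(p0, 1, 5)]
for t in 2:T
    push!(transition, P)
end
```

The resulting `transition` vector can then be passed to `markov_transition` exactly as in the first snippet.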
Ah! Now I understand that zeroth state. I was confused about that. Thanks! When I have some examples, I'll include them.
Tutorial Four: Markovian policy graphs addresses the Markov chain aspect of this question. Please re-open this issue if anything is unclear!
Hello,
When I use the initial distribution vector (in this case uniformly distributed among five states), the bound column in the console output reports an incorrect bound. To reproduce the bug, change the Markov transition matrix definition in the hydrovalley example by changing the initial vector; I used this matrix to capture the bug:
Thanks.
From a quick look, the bug is likely on this line: Line 252 in 21d2e5a
It probably needs to be something more sophisticated than a dot product. This wasn't caught because none of these tests have a problem with multiple Markov states in the first stage.
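For context, here is what a plain dot product computes in this situation: the risk-neutral expectation of the first-stage bound over the initial Markov states. The names below are illustrative, not SDDP.jl internals; the comment above suggests the actual fix needs more than this.

```julia
using LinearAlgebra

# Hypothetical per-state first-stage bounds and the initial distribution.
initial_probability = fill(0.2, 5)
stage_one_bounds    = [10.0, 12.0, 11.0, 9.0, 13.0]

# A plain `dot` gives the probability-weighted average of the
# per-state bounds: 0.2 * (10 + 12 + 11 + 9 + 13) ≈ 11.0.
bound = dot(initial_probability, stage_one_bounds)
```

When the first-stage distribution is degenerate (all mass on state 1), this reduces to the bound of that single state, which is why the bug only surfaces with multiple Markov states in the first stage.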
Hello,
When running the SDDP algorithm with a Markov state model, the first-stage state is 1 in every forward pass. Is there a way to modify this behavior through a parameter?
When the model is solved, I can plot the value function depending on the Markov state, but I can't explicitly get the set of cuts associated with that state in that stage. Is there a way to obtain such cuts from the framework? I didn't find a structure where these cuts are stored.
Thanks for your help,
Rodrigo