
Accessing Optimal Decision #675

Closed
SolidAhmad opened this issue Sep 23, 2023 · 2 comments

@SolidAhmad

Once we have trained our model and convergence has been achieved, we end up with a policy graph in which the subproblem at each node contains all the cuts generated in the backward passes. However, we don't have access to the explicit variable values that produced the lower bound; instead, we have to simulate to get an idea of how the variables interact with the policy graph. I am only interested in the first-stage optimal state variables, that is, the state variables used to calculate the lower bound in the last iteration. Is there a way to access or calculate those directly, as opposed to inferring them through simulations?

@odow
Owner

odow commented Sep 23, 2023

You can get a decision rule for a node:

https://sddp.dev/stable/tutorial/first_steps/#Obtaining-the-decision-rule
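From that tutorial, the pattern looks roughly like this (a sketch: the state name :x and control name :u are placeholders for whatever your model defines, and a noise keyword argument is only needed if the node is stochastic):

```julia
using SDDP

# Build a decision rule for node 1 and evaluate it at a given incoming
# state. :x and :u are hypothetical names; replace them with your model's
# state and control variables.
rule = SDDP.DecisionRule(model; node = 1)
solution = SDDP.evaluate(
    rule;
    incoming_state = Dict(:x => 1.0),
    controls_to_record = [:u],
)
```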

If your first stage is deterministic, you can get the JuMP model from node 1 as follows:

sp = model[1].subproblem
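A minimal sketch of what you might then do with it, assuming the first stage is deterministic, and noting that subproblem and states are internal fields that may change between releases:

```julia
using SDDP, JuMP

node = model[1]
sp = node.subproblem   # the JuMP model for node 1, with all trained cuts
optimize!(sp)          # re-solve; assumes the incoming state is still fixed
                       # at the root's initial values from training
# Collect the outgoing first-stage state values.
first_stage = Dict(name => value(state.out) for (name, state) in node.states)
```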

But if your first stage is deterministic, then just do a single simulation and look at the values.
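For reference, a single simulation looks something like this (again, :x is a placeholder for one of your state variables):

```julia
# One replication suffices when node 1 is deterministic: the first-stage
# decision is identical on every forward pass.
simulations = SDDP.simulate(model, 1, [:x])
first_stage = simulations[1][1]   # replication 1, stage 1
first_stage[:x].out               # outgoing first-stage state value
```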

@SolidAhmad
Author

> But if your first stage is deterministic, then just do a single simulation and look at the values.

I take it you meant stochastic. That makes sense, thank you!
