
Question Regarding State-Dependent Parameter Updates in SDDP.jl #733

Closed
mcwaga opened this issue Feb 10, 2024 · 3 comments

mcwaga commented Feb 10, 2024

Hello SDDP.jl Community,

I am currently exploring SDDP.jl for a somewhat complex stochastic dynamic programming problem, specifically an infinite-horizon model. My model requires updating a parameter K according to different rules depending on the current economic state:

- During a recession, the update rule is K' = exp(a1) + K^(b1).
- During a boom, it is K' = exp(a2) + K^(b2).

A critical aspect of my approach is that, given the full path of nodes (economic states) visited up to a given point, I can update K at any future stage using only this path and the initial value of K. This path dependence means that knowing the sequence of states (boom or recession) is sufficient to project K into the future, without needing to know the previous stage's K.
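The path-dependence claim above can be sketched as follows. This is a hypothetical illustration only: the parameter values for a1, b1, a2, b2 are placeholders, not values from the issue.

```python
import math

# Placeholder parameters for the two update rules (not from the issue).
PARAMS = {
    "recession": (0.1, 0.9),   # (a1, b1)
    "boom":      (0.2, 0.95),  # (a2, b2)
}

def update_K(K, state):
    """One-step update K' = exp(a) + K^b for the given economic state."""
    a, b = PARAMS[state]
    return math.exp(a) + K ** b

def project_K(K0, path):
    """Project K forward along a path of states, using only K0 and the path.

    This demonstrates the point in the question: no intermediate K needs
    to be stored; the initial value plus the visited states determine K.
    """
    K = K0
    for state in path:
        K = update_K(K, state)
    return K
```

For example, `project_K(1.0, ["boom", "recession"])` applies the boom rule and then the recession rule to the initial value.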

I'm reaching out to ask if SDDP.jl supports or can accommodate such state-dependent parameter update mechanisms, particularly where the choice of update rule requires knowledge of the path taken through the scenario tree. I'm not entirely sure if this is feasible or if I'm approaching the problem correctly within the context of SDDP.jl.

Thank you very much for your time and for supporting the SDDP.jl project. I'm looking forward to any insights you may have.

It is worth mentioning that another aspect of my model is the inclusion of other random factors, such as employment status, which can either be 0 (unemployed) or 1 (employed), adding another layer of stochasticity to the problem.

Best,

Mateus


odow commented Feb 10, 2024

Can you model the boom/recession process by a Markov chain?

If so, build a Markovian policy graph (with a cycle for the infinite horizon):
https://sddp.dev/stable/tutorial/markov_uncertainty/

See also: https://onlinelibrary.wiley.com/doi/10.1002/net.21932
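The Markovian policy graph suggested above requires a transition matrix over the economic states. A minimal sketch of such a two-state boom/recession chain, with illustrative (not sourced) transition probabilities:

```python
import random

# Two-state Markov chain for boom/recession. The transition probabilities
# below are placeholders chosen for illustration only.
STATES = ["boom", "recession"]
P = {
    "boom":      {"boom": 0.8, "recession": 0.2},
    "recession": {"boom": 0.3, "recession": 0.7},
}

def next_state(state, rng=random):
    """Sample the next economic state from the current state's row of P."""
    return "boom" if rng.random() < P[state]["boom"] else "recession"

def simulate(initial, horizon, rng=random):
    """Simulate a path of economic states of length `horizon`."""
    path = [initial]
    for _ in range(horizon - 1):
        path.append(next_state(path[-1], rng))
    return path
```

In SDDP.jl itself, these rows would become the transition matrices passed when building the Markovian policy graph, as described in the linked tutorial.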


mcwaga commented Feb 13, 2024

Sorry for the delay in responding. The Markov chain approach does not seem to work for me, since there is both the boom/recession process and the employed/unemployed process. Also, since I am trying to solve the Krusell-Smith problem (https://www.journals.uchicago.edu/doi/abs/10.1086/250034) with SDDP, I would like to stay as close as possible to their method, which includes the log update...


odow commented Feb 13, 2024

I don't have access to that paper, unfortunately.

> since there is the boom/recession process plus the employed/unemployed process.

You can have both a Markovian process for boom/recession and a stagewise-independent process for employment.

But it is hard to say without a proper formulation of the problem.
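The combination described above can be sketched as one sampling step: the boom/recession state evolves as a Markov chain, while employment is drawn independently at each stage. All probabilities here are illustrative placeholders, and the choice to let the employment probability depend on the economic state is an assumption, not something stated in the thread.

```python
import random

# Placeholder Markov transition probabilities for the economic state.
P = {
    "boom":      {"boom": 0.8, "recession": 0.2},
    "recession": {"boom": 0.3, "recession": 0.7},
}
# Hypothetical stagewise-independent employment probabilities per state.
P_EMPLOYED = {"boom": 0.95, "recession": 0.85}

def sample_stage(markov_state, rng=random):
    """Sample the next economic state (Markovian) and an employment
    outcome (stagewise-independent given the new state)."""
    next_markov = ("boom" if rng.random() < P[markov_state]["boom"]
                   else "recession")
    employed = 1 if rng.random() < P_EMPLOYED[next_markov] else 0
    return next_markov, employed
```

In SDDP.jl terms, the Markov state would index the nodes of the policy graph, while the employment shock would be the within-node stagewise-independent noise.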
