Sparsity pattern #24
Comments
I agree that the ordering of the variables is of primal importance.
I'm not sure where it lives in the new organization of control-toolbox, but probably somewhere around this function:
The variables' layout is defined here for the direct model: https://github.com/control-toolbox/CTDirect.jl/blob/main/src/problem.jl#L2 See also the following code, where the initial point is being set: https://github.com/control-toolbox/CTDirect.jl/blob/main/src/problem.jl#L205-L222
I think it is here: https://github.com/control-toolbox/CTDirect.jl/blob/main/src/problem.jl @PierreMartinon Right?
Here you can see how to retrieve the NLP: https://control-toolbox.org/OptimalControl.jl/stable/tutorial-nlp.html I don't know whether one can then manipulate the variables and the rest to reorder them.
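A minimal sketch of what the linked tutorial covers, to make the comment concrete; the entry points `direct_transcription` and `get_nlp` below are taken from that tutorial from memory and may differ across versions, so treat the names as assumptions:

```julia
using OptimalControl

# `ocp` is assumed to be a problem already defined with the OptimalControl DSL
# (see the linked tutorial for a full definition).
docp = direct_transcription(ocp)   # discretized (direct) transcription
nlp  = get_nlp(docp)               # underlying NLPModels problem

# The NLPModels metadata exposes, for instance, the number of variables and
# the default initial guess, which is where the ordering question shows up.
nlp.meta.nvar
nlp.meta.x0
```

Once the NLP is built this way, the ordering of its variables is fixed; only the values of the initial guess can still be rearranged (see below).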
Hi everyone,
Up to the current release (0.9.6), the layout of variables is as follows:
Although if I understand correctly, what you have is more an issue of columns / rows for the state variables. I don't think you can alter the ordering once the NLP is built, but you can certainly reorder the initial guess.
I am not surprised that the new ordering helps. We should also have a closer look at what's going on inside the linear solver, and look at the KKT system once permuted by the linear solver. It is well known that for optimal control problems, the KKT linear system is banded. However, this structure is not exploited when we use a generic linear solver. Long term, we should investigate whether a tailored solution (like a Riccati recursion) helps. For reference, HPIPM and Fatrop use Riccati recursions internally.
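For illustration only (not code from this thread): a minimal backward Riccati recursion for a discrete-time LQR subproblem, the kind of stage-wise structure that HPIPM and Fatrop exploit instead of handing the whole banded KKT matrix to a generic sparse linear solver. The double-integrator data at the end is a made-up toy example.

```julia
using LinearAlgebra

# Backward Riccati recursion for min ½ Σ (xₖ'Q xₖ + uₖ'R uₖ) + ½ x_N' Qf x_N
# subject to x_{k+1} = A xₖ + B uₖ. Returns the stage feedback gains and the
# cost-to-go matrix at the initial stage.
function riccati_lqr(A, B, Q, R, Qf, N)
    P  = Qf
    Ks = Vector{Matrix{Float64}}(undef, N)
    for k in N:-1:1
        K     = (R + B' * P * B) \ (B' * P * A)   # stage-k feedback gain
        P     = Q + A' * P * (A - B * K)          # cost-to-go update
        Ks[k] = K
    end
    return Ks, P
end

# toy double-integrator data, illustrative only
A = [1.0 0.1; 0.0 1.0]
B = reshape([0.005, 0.1], 2, 1)
Ks, P0 = riccati_lqr(A, B, Matrix(1.0I, 2, 2), Matrix(0.1I, 1, 1), Matrix(1.0I, 2, 2), 20)
```

The point is that each backward step only factors a small stage-sized matrix, instead of one big sparse factorization of the full KKT system.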
Thanks @frapac for the feedback. A few things:
Note: we could have several variable layouts. Things may be more complicated if we provide sparsity information at some point.
@PierreMartinon what do you mean by layout? ordering of the NLP variables? |
@0Yassine0 can you please re-run the benchmark and let us know about the speed-up from ADNLPModels' last update?
@jbcaillau |
thanks @0Yassine0
@jbcaillau |
very nice (and easier to read) 👍🏽 even better if you add tests for
I added some problems and N values (500, 1000). |
👍🏽 @0Yassine0 could be nice to add the number of iterations (same solver = Ipopt for both solves). might explain some differences. |
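A minimal sketch, assuming the direct solves go through NLPModelsIpopt (the `nlp` below is a placeholder for the transcribed problem): the execution stats returned by the solver already carry the iteration count, so it is cheap to record.

```julia
using NLPModelsIpopt

# `nlp` is a placeholder for the NLP produced by the transcription.
stats = ipopt(nlp; print_level = 0)
stats.iter   # number of Ipopt iterations for this solve
```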
@jbcaillau |
thanks @0Yassine0. the number of iterations confirms what I thought; it does not explain why there are such differences, though. to be continued.
hi @0Yassine0! assuming you're still around 🤞🏽, one last thing (apple mode off 🙂): could you please add the number of allocations / memory usage to the benchmark stats? it will help to see the effect of this upcoming update: control-toolbox/CTDirect.jl#188
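A minimal sketch of how those two extra columns could be collected, assuming BenchmarkTools is used for the timings; `solve_problem()` is a placeholder for whatever call the benchmark already times:

```julia
using BenchmarkTools

# `solve_problem()` is a placeholder for the benchmarked solve call.
trial = @benchmark solve_problem() samples=5 seconds=60

memory(trial)   # estimated memory usage, in bytes
allocs(trial)   # estimated number of allocations
```

If a full benchmark run is too slow for the larger N, Base's `@timed` on a single solve also reports the allocated bytes.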
Yess @jbcaillau, I'm always available! The results are in the same file.
@0Yassine0 thanks! yes, |
@amontoison @frapac @ocots @jbcaillau @PierreMartinon
There might be an issue due to the order in which we set up variables with OC. The sparsity pattern and x0 that we obtain with OC differ from those we get with JuMP. In this file, we can see that for the rocket problem, we have:

With JuMP:
x0 = [x1[1], x2[1], x3[1], x1[2], x2[2], x3[2], u, v]

With OC:
x0 = [x1[1], x1[2], x2[1], x2[2], x3[1], x3[2], u, v]

The problem is that once the problem is defined, I don't think we can change the order of the variables. We can't inject JuMP's x0 into the NLP of OC because that would provide the wrong initial guess for the variables.
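A minimal sketch (plain Julia, no solver API) of the permutation between the two layouts above, in case we end up reordering JuMP's initial guess for the OC side instead of injecting it as-is; the sizes and numerical values are illustrative only:

```julia
n, N = 3, 2   # 3 state components, 2 grid points, as in the small example above

# JuMP layout (time-major):    x1[1], x2[1], x3[1], x1[2], x2[2], x3[2], u, v
# OC layout (variable-major):  x1[1], x1[2], x2[1], x2[2], x3[1], x3[2], u, v
# For each OC entry, compute its index in the JuMP vector; (u, v) stay in place.
perm_states = vec(permutedims(reshape(1:n*N, n, N)))   # -> [1, 4, 2, 5, 3, 6]
perm = vcat(perm_states, n*N .+ (1:2))

x0_jump = [10.0, 20.0, 30.0, 11.0, 21.0, 31.0, 0.5, 1.0]   # dummy values
x0_oc   = x0_jump[perm]   # [10.0, 11.0, 20.0, 21.0, 30.0, 31.0, 0.5, 1.0]
```

This only fixes the initial guess, of course; the ordering of the NLP variables themselves, and hence the sparsity pattern, stays as built.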