Commit

update documentation (#204)
frapac authored Aug 22, 2022
1 parent 87f4e0b commit e46b7c8
Showing 5 changed files with 106 additions and 54 deletions.
5 changes: 4 additions & 1 deletion docs/Project.toml
@@ -1,3 +1,6 @@
[deps]
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
-MadNLP = "2621e9c9-9eb4-46b1-8089-e8c72242dfb6"
+MadNLP = "2621e9c9-9eb4-46b1-8089-e8c72242dfb6"
+MadNLPTests = "b52a2a03-04ab-4a5f-9698-6a2deff93217"
+NLPModels = "a4795742-8479-5a88-8948-cc11e1c8c1a6"
+SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
4 changes: 4 additions & 0 deletions docs/make.jl
@@ -1,6 +1,10 @@
using Documenter
+using Pkg
using MadNLP

+Pkg.develop(PackageSpec(path=joinpath(dirname(@__FILE__), "../lib/MadNLPTests/")))
+Pkg.instantiate()
+
makedocs(
    sitename = "MadNLP.jl",
    format = Documenter.HTML(
97 changes: 55 additions & 42 deletions docs/src/man/kkt.md
@@ -1,13 +1,20 @@
```@meta
CurrentModule = MadNLP
```
+```@setup kkt_example
+using SparseArrays
+using NLPModels
+using MadNLP
+using MadNLPTests
+nlp = MadNLPTests.HS15Model()
+```

# KKT systems

KKT systems are linear systems with a special structure.
-MadNLP uses a special structure [`AbstractKKTSystem`](@ref) to represent such
-KKT system internally. The [`AbstractKKTSystem`](@ref) fulfills two
-goals:
+MadNLP uses a special structure [`AbstractKKTSystem`](@ref) to represent internally
+the KKT system. The [`AbstractKKTSystem`](@ref) fulfills two goals:

1. Store the values of the Hessian of the Lagrangian and of the Jacobian.
2. Assemble the corresponding KKT matrix $$K$$.
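For reference, a common layout for the assembled matrix $$K$$ is the regularized augmented system (the exact block structure depends on the `AbstractKKTSystem` implementation):

```math
K = \begin{bmatrix}
W + \Sigma_x + \delta_w I & A^\top \\
A & -\delta_c I
\end{bmatrix},
```

with $$W$$ the Hessian of the Lagrangian, $$A$$ the constraints' Jacobian, $$\Sigma_x$$ the diagonal barrier term introduced below, and $$\delta_w$$, $$\delta_c$$ optional regularization parameters.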
@@ -97,38 +104,40 @@ by using the KKT system used by default inside MadNLP: [`SparseKKTSystem`](@ref)
### Initializing a KKT system

The size of the KKT system depends directly on the problem's characteristics
-(number of variables, number of inequality constraints, ...).
+(number of variables, number of equality and inequality constraints).
A [`SparseKKTSystem`](@ref) stores the Hessian and the Jacobian in sparse
(COO) format. Depending on how we parameterize the system,
it can output either a sparse matrix or a dense matrix (according to the linear solver
we are employing under the hood).
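As a refresher on the COO format (a plain SparseArrays illustration, independent of MadNLP): a sparse matrix is described by three aligned vectors holding row indices, column indices, and values.

```julia
using SparseArrays

# COO triplets: entry k lives at (rows[k], cols[k]) with value vals[k].
rows = [1, 2, 2]
cols = [1, 1, 2]
vals = [4.0, 1.0, 2.0]

# sparse() assembles the triplets into a CSC matrix, summing duplicates.
A = sparse(rows, cols, vals, 2, 2)
```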

-For instance,
-```julia
+For instance, we can parameterize a sparse KKT system as
+```@example kkt_example
T = Float64
+VT = Vector{T}
MT = SparseMatrixCSC{T, Int}
-kkt = MadNLP.SparseKKTSystem{T, MT}(nlp)
+kkt = MadNLP.SparseKKTSystem{T, VT, MT}(nlp)
+kkt.aug_com
```
-parameterizes `kkt` to output a `SparseMatrixCSC`, whereas
-```julia
+and a dense KKT system as
+```@example kkt_example
T = Float64
+VT = Vector{T}
MT = Matrix{T}
-kkt = MadNLP.SparseKKTSystem{T, MT}(nlp)
+kkt = MadNLP.SparseKKTSystem{T, VT, MT}(nlp)
+kkt.aug_com
```
-output a dense matrix `Matrix{Float64}`.

-Once the KKT system built, one has to initialize it:
-```julia
-MadNLP.initialize!(kkt)
+Once the KKT system is built, one has to initialize it
+to use it inside the interior-point algorithm:
+```@example kkt_example
+MadNLP.initialize!(kkt);
```
-Now, the KKT system `kkt` is ready for use inside the
-interior-point algorithm.

-The user can query the KKT matrix inside `kkt` by
-```julia
+The user can query the KKT matrix inside `kkt` simply as
+```@example kkt_example
kkt_matrix = MadNLP.get_kkt(kkt)
```
This returns a reference to the KKT matrix stored internally
@@ -141,64 +150,67 @@ We suppose now we want to refresh the values stored in the KKT system.


#### Updating the values of the Hessian
-The Hessian values can be queried as
-```julia
+The Hessian part of the KKT system can be queried as
+```@example kkt_example
hess_values = MadNLP.get_hessian(kkt)
```
For a `SparseKKTSystem`, `hess_values` is a `Vector{Float64}` storing
the nonzero values of the Hessian.
-Then, using `NLPModels` one can update the values directly as
-```julia
-NLPModels.hess_coord!(nlp, x, l, hess)
+Then, one can update the vector `hess_values` by using NLPModels.jl:
+```@example kkt_example
+n = NLPModels.get_nvar(nlp)
+m = NLPModels.get_ncon(nlp)
+x = NLPModels.get_x0(nlp)
+l = zeros(m)
+NLPModels.hess_coord!(nlp, x, l, hess_values)
```
with `x` and `l` the current primal and dual iterates.
-A post-processing step is then applied:
-```julia
+Finally, a post-processing step can be applied to refresh all the values internally:
+```@example kkt_example
MadNLP.compress_hessian!(kkt)
```
!!! note
-    The post-processing should be applied only on the
-    values `hess_values`.
+    By default, the function [`compress_hessian!`](@ref) does nothing,
+    but it can be required for very specific use cases, for instance to build internally a
+    Schur complement matrix.
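To make that extension point concrete, here is a minimal sketch of how a custom KKT system could overload this hook; `MySchurKKT` and its fields are hypothetical names for the example, not MadNLP API:

```julia
using MadNLP

# Hypothetical container: raw Hessian values plus an internal
# Schur complement block (both fields are assumptions).
struct MySchurKKT
    hess_raw::Vector{Float64}
    schur::Matrix{Float64}
end

# Overloaded hook: fold the freshly updated Hessian values into
# the internal Schur complement block (placeholder accumulation).
function MadNLP.compress_hessian!(kkt::MySchurKKT)
    for (k, v) in enumerate(kkt.hess_raw)
        kkt.schur[k] += v
    end
    return kkt
end
```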


#### Updating the values of the Jacobian
-Updating the values of the Jacobian proceeds in a similar way.
-One queries the values of the Jacobian to update as
-```julia
+We proceed exactly the same way to update the values in the Jacobian.
+One queries the Jacobian values in the KKT system as
+```@example kkt_example
jac_values = MadNLP.get_jacobian(kkt)
```
-Updates them with NLPModels as
-```julia
+We can refresh the values with NLPModels as
+```@example kkt_example
NLPModels.jac_coord!(nlp, x, jac_values)
```
Then, we apply a post-processing step as
-```julia
+```@example kkt_example
MadNLP.compress_jacobian!(kkt)
```

!!! note
-    The post-processing should be applied only on the
-    values `jac_values`.
    Contrary to `compress_hessian!`, `compress_jacobian!` is not
-    empty by default. Instead, the post-processing step scales the values
-    of the Jacobian row by row, following the scaling of the constraints
+    a no-op by default. Instead, the post-processing step scales the values
+    of the Jacobian row by row, applying the scaling of the constraints
    as computed initially by MadNLP.
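Pictured in COO terms, that row-by-row scaling boils down to the following sketch; the names `rows` and `con_scale` are assumptions standing in for MadNLP internals:

```julia
# Toy data standing in for MadNLP internals:
jac_vals  = [1.0, 2.0, 3.0, 4.0]  # COO nonzeros of the Jacobian
rows      = [1, 1, 2, 2]          # constraint index of each nonzero
con_scale = [0.5, 2.0]            # per-constraint scaling

# Scale each Jacobian nonzero by the scaling of its constraint row.
for k in eachindex(jac_vals)
    jac_vals[k] *= con_scale[rows[k]]
end
```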

#### Updating the values of the diagonal matrices
Once the Hessian and the Jacobian are updated, it remains
to update the values of the diagonal matrices $$\Sigma_x = X^{-1} V$$
and $$\Sigma_s = S^{-1} W$$. In the KKT interface, this amounts
to calling the [`regularize_diagonal!`](@ref) function:
-```julia
+```@example kkt_example
+pr_values = ones(n + m)
+du_values = zeros(m)
MadNLP.regularize_diagonal!(kkt, pr_values, du_values)
```
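Spelled out in the section's notation: $$X = \operatorname{diag}(x)$$ and $$S = \operatorname{diag}(s)$$ collect the current primal iterates, while $$V$$ and $$W$$ are the diagonal matrices of the multipliers attached to the bounds, so that, entrywise,

```math
\Sigma_x = X^{-1} V = \operatorname{diag}\left(\frac{v_1}{x_1}, \dots, \frac{v_n}{x_n}\right), \qquad
\Sigma_s = S^{-1} W = \operatorname{diag}\left(\frac{w_1}{s_1}, \dots, \frac{w_m}{s_m}\right).
```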
@@ -210,13 +222,14 @@ feasibility restoration).
### Assembling the KKT matrix
Once the values are updated, one can assemble the resulting KKT matrix.
This translates to
-```julia
+```@example kkt_example
MadNLP.build_kkt!(kkt)
+kkt_matrix
```
By doing so, the values stored inside `kkt` will be transferred
-to the communicating matrix (as returned by the function [`get_kkt`](@ref)).
+to the KKT matrix `kkt_matrix` (as returned by the function [`get_kkt`](@ref)).

In detail, a [`SparseKKTSystem`](@ref) stores internally the KKT system's values using
a sparse COO format. When [`build_kkt!`](@ref) is called, the sparse COO matrix
is transferred to `SparseMatrixCSC` (if `MT = SparseMatrixCSC`) or a `Matrix`
(if `MT = Matrix`), or any format suitable for factorizing the KKT system
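For intuition, the dense transfer boils down to the following sketch (the helper name and the COO index vectors are assumptions for the example, not MadNLP API):

```julia
# Accumulate COO triplets (rows, cols, vals) into a dense matrix.
function coo_to_dense!(K::Matrix{Float64}, rows, cols, vals)
    fill!(K, 0.0)
    for k in eachindex(vals)
        K[rows[k], cols[k]] += vals[k]  # duplicate entries accumulate
    end
    return K
end

K = zeros(2, 2)
coo_to_dense!(K, [1, 2, 2], [1, 1, 2], [1.0, 2.0, 3.0])
```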
52 changes: 41 additions & 11 deletions docs/src/man/linear_solvers.md
@@ -1,5 +1,32 @@
```@meta
CurrentModule = MadNLP
```
+```@setup linear_solver_example
+using SparseArrays
+using NLPModels
+using MadNLP
+using MadNLPTests
+# Build nonlinear model
+nlp = MadNLPTests.HS15Model()
+# Build KKT
+T = Float64
+VT = Vector{T}
+MT = Matrix{T}
+kkt = MadNLP.SparseKKTSystem{T, VT, MT}(nlp)
+n = NLPModels.get_nvar(nlp)
+m = NLPModels.get_ncon(nlp)
+x = NLPModels.get_x0(nlp)
+l = zeros(m)
+hess_values = MadNLP.get_hessian(kkt)
+NLPModels.hess_coord!(nlp, x, l, hess_values)
+jac_values = MadNLP.get_jacobian(kkt)
+NLPModels.jac_coord!(nlp, x, jac_values)
+MadNLP.compress_jacobian!(kkt)
+MadNLP.build_kkt!(kkt)
+```
# Linear solvers

@@ -53,32 +80,35 @@ of the matrix directly as a result of the factorization.
We suppose that an [`AbstractKKTSystem`](@ref) `kkt` is available, properly assembled
following the procedure presented [previously](kkt.md).
We can query the assembled matrix $$K$$ as
-```julia
+```@example linear_solver_example
K = MadNLP.get_kkt(kkt)
```
-Then, we can create a new linear solver instance as
-```julia
-linear_solver = MadNLPUmfpack.Solver(K)
+Then, if we want to pass the KKT matrix `K` to Lapack, this
+translates to
+```@example linear_solver_example
+linear_solver = LapackCPUSolver(K)
```
The instance `linear_solver` does not copy the matrix $$K$$ and
instead keeps a reference to it:
-```julia
-linear_solver.tril === K
+```@example linear_solver_example
+linear_solver.dense === K
```
That way every time we re-assemble the matrix $$K$$ in `kkt`,
-the values are directly updated in `linear_solver`.
+the values are directly updated inside `linear_solver`.
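This aliasing is plain Julia semantics and can be checked directly:

```julia
# Both names point at the same array, so an in-place update of K
# is immediately visible from the solver (and vice versa).
K[1, 1] += 1.0
@assert linear_solver.dense[1, 1] == K[1, 1]
K[1, 1] -= 1.0   # restore the original value
```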

-Then, to recompute the factorization inside `linear_solver`,
+To compute the factorization inside `linear_solver`,
one simply has to call:
-```julia
+```@example linear_solver_example
MadNLP.factorize!(linear_solver)
```
Once the factorization is computed, computing the backsolve
-for a right-hand-side `b` simply amounts to
-```julia
+for a right-hand-side `b` amounts to
+```@example linear_solver_example
+nk = size(kkt, 1)
+b = rand(nk)
MadNLP.solve!(linear_solver, b)
```
The values of `b` are modified in-place to store the solution $$x$$ of the linear
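As a quick sanity check of the backsolve (an illustrative sketch; it assumes the assembled matrix `K` stores its lower-triangular part, as is conventional for symmetric KKT systems):

```julia
using LinearAlgebra

b0 = rand(nk)
b  = copy(b0)                    # keep the right-hand side around
MadNLP.solve!(linear_solver, b)  # b is overwritten with the solution
norm(Symmetric(K, :L) * b - b0)  # residual should be near machine precision
```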
2 changes: 2 additions & 0 deletions lib/MadNLPTests/src/Instances/hs15.jl
@@ -48,6 +48,7 @@ function NLPModels.jac_coord!(nlp::HS15Model, x::AbstractVector, J::AbstractVect
    J[2] = x[1] # (1, 2)
    J[3] = 1.0 # (2, 1)
    J[4] = 2*x[2] # (2, 2)
+    return J
end

function NLPModels.hess_structure!(nlp::HS15Model, I::AbstractVector{T}, J::AbstractVector{T}) where T
@@ -64,5 +65,6 @@ function NLPModels.hess_coord!(nlp::HS15Model, x, y, H::AbstractVector; obj_weig
    H[2] += y[1] * 1.0
    # Second constraint
    H[3] += y[2] * 2.0
+    return H
end
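For context, this is how the test model above is typically exercised through the standard NLPModels API (a usage sketch; the multiplier values are arbitrary):

```julia
using NLPModels, MadNLPTests

nlp = MadNLPTests.HS15Model()
x = NLPModels.get_x0(nlp)

# Jacobian in COO format: 4 structural nonzeros, as listed above.
J = zeros(NLPModels.get_nnzj(nlp))
NLPModels.jac_coord!(nlp, x, J)

# Hessian of the Lagrangian for arbitrary multipliers y.
y = ones(NLPModels.get_ncon(nlp))
H = zeros(NLPModels.get_nnzh(nlp))
NLPModels.hess_coord!(nlp, x, y, H)
```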
