From ff2f9afdc69ead1156234ff5aafe5149b7703f45 Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Sun, 24 Dec 2023 01:35:33 +0000
Subject: [PATCH] build based on 80d680d

---
 dev/.documenter-siteinfo.json       |   2 +-
 dev/index.html                      |   2 +-
 dev/lib/api/index.html              |   2 +-
 dev/lib/evaluators/index.html       |   2 +-
 dev/lib/kkt/index.html              |   2 +-
 dev/lib/wrappers/index.html         |   2 +-
 dev/man/fullspace/index.html        |   4 +-
 dev/man/moi_wrapper/index.html      |   4 +-
 dev/man/nlpmodel_wrapper/index.html |   2 +-
 dev/man/overview/index.html         | 106 ++++++++++++++--------------
 dev/man/reducedspace/index.html     |   4 +-
 dev/optim/biegler/index.html        |   6 +-
 dev/optim/fullspace/index.html      |  10 +--
 dev/optim/reducedspace/index.html   |  10 +--
 dev/quickstart/cpu/index.html       |  24 +++----
 dev/quickstart/cuda/index.html      |   2 +-
 dev/references/index.html           |   2 +-
 17 files changed, 93 insertions(+), 93 deletions(-)

diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index 30fbd73..e42b506 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2023-12-17T01:37:24","documenter_version":"1.2.1"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2023-12-24T01:35:30","documenter_version":"1.2.1"}}
\ No newline at end of file
diff --git a/dev/index.html b/dev/index.html
index 5e7b509..cb0f29b 100644
--- a/dev/index.html
+++ b/dev/index.html
@@ -5,4 +5,4 @@
 author={Pacaud, Fran{\c{c}}ois and Shin, Sungho and Schanen, Michel and Maldonado, Daniel Adrian and Anitescu, Mihai},
 journal={arXiv preprint arXiv:2203.11875},
 year={2022}
-}

Funding

This research was supported by the Exascale Computing Project (17-SC-20-SC), a joint project of the U.S. Department of Energy’s Office of Science and National Nuclear Security Administration, responsible for delivering a capable exascale ecosystem, including software, applications, and hardware technology, to support the nation’s exascale computing imperative.

Table of contents

Quickstart

OPF Model

OPF Solution

Wrappers

Library

+}

Funding

This research was supported by the Exascale Computing Project (17-SC-20-SC), a joint project of the U.S. Department of Energy’s Office of Science and National Nuclear Security Administration, responsible for delivering a capable exascale ecosystem, including software, applications, and hardware technology, to support the nation’s exascale computing imperative.

Table of contents

Quickstart

OPF Model

OPF Solution

Wrappers

Library

diff --git a/dev/lib/api/index.html b/dev/lib/api/index.html
index 01663c2..24ebb72 100644
--- a/dev/lib/api/index.html
+++ b/dev/lib/api/index.html
@@ -2,4 +2,4 @@
 Evaluators API · Argos.jl

Evaluator API

Description

Argos.AbstractNLPEvaluatorType
AbstractNLPEvaluator

AbstractNLPEvaluator implements the bridge between the problem formulation (see ExaPF.AbstractFormulation) and the optimization solver. Once the problem formulation is bridged, the evaluator allows one to evaluate:

  • the objective;
  • the gradient of the objective;
  • the constraints;
  • the Jacobian of the constraints;
  • the Jacobian-vector and transpose-Jacobian vector products of the constraints;
  • the Hessian of the objective;
  • the Hessian of the Lagrangian.
source
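
As a minimal sketch of the resulting workflow (the callback names follow the API reference below; datafile is assumed to point to a MATPOWER instance, and the initial-point accessor initial(nlp) follows the optimize! docstring):

nlp = Argos.FullSpaceEvaluator(datafile)
n, m = Argos.n_variables(nlp), Argos.n_constraints(nlp)
u = Argos.initial(nlp)                      # default initial point
Argos.update!(nlp, u)                       # refresh the internal cache first
obj = Argos.objective(nlp, u)               # objective value
g = zeros(n); Argos.gradient!(nlp, g, u)    # gradient, stored inplace
c = zeros(m); Argos.constraint!(nlp, c, u)  # constraints, stored inplace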

API Reference

Optimization

Argos.optimize!Function
optimize!(optimizer, nlp::AbstractNLPEvaluator, x0)

Use the optimization routine implemented in optimizer to solve the optimal power flow problem specified in the evaluator nlp. The initial point is specified by x0.

Return the solution as a named tuple, with fields

  • status::MOI.TerminationStatus: Solver's termination status, as specified by MOI
  • minimum::Float64: final objective
  • minimizer::AbstractVector: final solution vector, with same ordering as the Variables specified in nlp.
optimize!(optimizer, nlp::AbstractNLPEvaluator)

Wrap the previous optimize! function, passing as initial guess x0 the initial value returned by initial(nlp).

Examples

nlp = ExaPF.ReducedSpaceEvaluator(datafile)
 optimizer = Ipopt.Optimizer()
 solution = ExaPF.optimize!(optimizer, nlp)
-

Notes

By default, the optimization routine solves a minimization problem.

source

Attributes

Argos.n_variablesFunction
n_variables(nlp::AbstractNLPEvaluator)

Get the number of variables in the problem.

source
Argos.constraints_typeFunction
constraints_type(nlp::AbstractNLPEvaluator)

Return the type of the non-linear constraints of the evaluator nlp, as a Symbol. The result is :inequality if the problem has only inequality constraints, :equality if it has only equality constraints, or :mixed if it has both types of constraints.

source

Callbacks

Argos.update!Function
update!(nlp::AbstractNLPEvaluator, u::AbstractVector)

Update the internal structure inside nlp with the new entry u. This method has to be called before calling any other callback.

source
Argos.objectiveFunction
objective(nlp::AbstractNLPEvaluator, u)::Float64

Evaluate the objective at given variable u.

source
Argos.gradient!Function
gradient!(nlp::AbstractNLPEvaluator, g, u)

Evaluate the gradient of the objective, at given variable u. Store the result inplace in the vector g.

Note

The vector g should have the same dimension as u.

source
Argos.constraint!Function
constraint!(nlp::AbstractNLPEvaluator, cons, u)

Evaluate the constraints of the problem at given variable u. Store the result inplace, in the vector cons.

Note

The vector cons should have the same dimension as the result returned by n_constraints(nlp).

source
Argos.jacobian!Function
jacobian!(nlp::AbstractNLPEvaluator, jac::AbstractMatrix, u)

Evaluate the Jacobian of the constraints, at variable u. Store the result inplace, in the m x n dense matrix jac.

source
Argos.jacobian_coo!Function
jacobian_coo!(nlp::AbstractNLPEvaluator, jac::AbstractVector, u)

Evaluate the (sparse) Jacobian of the constraints at variable u in COO format. Store the result inplace, in the nnzj vector jac.

source
Argos.jprod!Function
jprod!(nlp::AbstractNLPEvaluator, jv, u, v)

Evaluate the Jacobian-vector product $J v$ of the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • u is a vector with dimension n
  • v is a vector with dimension n
  • jv is a vector with dimension m
source
Argos.jtprod!Function
jtprod!(nlp::AbstractNLPEvaluator, jv, u, v)

Evaluate the transpose Jacobian-vector product $J^{T} v$ of the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • u is a vector with dimension n
  • v is a vector with dimension m
  • jv is a vector with dimension n
source
Argos.ojtprod!Function
ojtprod!(nlp::AbstractNLPEvaluator, jv, u, σ, v)

Evaluate the transpose Jacobian-vector product J' * [σ ; v], with J the Jacobian of the vector [f(x); h(x)], where f(x) is the objective and h(x) the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • jv is a vector with dimension n
  • u is a vector with dimension n
  • σ is a scalar
  • v is a vector with dimension m
source
Argos.hessian!Function
hessian!(nlp::AbstractNLPEvaluator, H, u)

Evaluate the Hessian ∇²f(u) of the objective function f(u). Store the result inplace, in the n x n dense matrix H.

source
Argos.hessian_coo!Function
hessian_coo!(nlp::AbstractNLPEvaluator, hess::AbstractVector, u)

Evaluate the (sparse) Hessian of the constraints at variable u in COO format. Store the result inplace, in the nnzh vector hess.

source
Argos.hessprod!Function
hessprod!(nlp::AbstractNLPEvaluator, hessvec, u, v)

Evaluate the Hessian-vector product ∇²f(u) * v of the objective evaluated at variable u. Store the result inplace, in the vector hessvec.

Note

The vector hessvec should have the same length as u.

source
Argos.hessian_lagrangian_prod!Function
hessian_lagrangian_prod!(nlp::AbstractNLPEvaluator, hessvec, u, y, σ, v)

Evaluate the Hessian-vector product of the Lagrangian function $L(u, y) = f(u) + \sum_i y_i c_i(u)$ with a vector v:

\[∇²L(u, y) ⋅ v = σ ∇²f(u) ⋅ v + \sum_i y_i ∇²c_i(u) ⋅ v\]

Store the result inplace, in the vector hessvec.

Arguments

  • hessvec is an AbstractVector with dimension n, which is modified inplace.
  • u is an AbstractVector with dimension n, storing the current variable.
  • y is an AbstractVector with dimension m, storing the current constraints' multipliers.
  • σ is a scalar, encoding the objective's scaling.
  • v is a vector with dimension n.
source
Argos.hessian_lagrangian_penalty_prod!Function
hessian_lagrangian_penalty_prod!(nlp::AbstractNLPEvaluator, hessvec, u, y, σ, d, v)

Evaluate the Hessian-vector product of the Augmented Lagrangian function $L(u, y) = f(u) + \sum_i y_i c_i(u) + \frac{1}{2} \sum_i d_i c_i(u)^2$ with a vector v:

\[∇²L(u, y) ⋅ v = σ ∇²f(u) ⋅ v + \sum_i (y_i + d_i) ∇²c_i(u) ⋅ v + \sum_i d_i ∇c_i(u)^T ∇c_i(u)\]

Store the result inplace, in the vector hessvec.

Arguments

  • hessvec is an AbstractVector with dimension n, which is modified inplace.
  • u is an AbstractVector with dimension n, storing the current variable.
  • y is an AbstractVector with dimension m, storing the current constraints' multipliers.
  • σ is a scalar.
  • v is a vector with dimension n.
  • d is a vector with dimension m.
source

Utilities

Argos.reset!Function
reset!(nlp::AbstractNLPEvaluator)

Reset evaluator nlp to default configuration.

source
+

Notes

By default, the optimization routine solves a minimization problem.

source

Attributes

Argos.VariablesType
Variables <: AbstractNLPAttribute end

Attribute corresponding to the optimization variables attached to a given AbstractNLPEvaluator.

source
Argos.ConstraintsType
Constraints <: AbstractNLPAttribute end

Attribute corresponding to the constraints attached to a given AbstractNLPEvaluator.

source
Argos.n_variablesFunction
n_variables(nlp::AbstractNLPEvaluator)

Get the number of variables in the problem.

source
Argos.n_constraintsFunction
n_constraints(nlp::AbstractNLPEvaluator)

Get the number of constraints in the problem.

source
Argos.constraints_typeFunction
constraints_type(nlp::AbstractNLPEvaluator)

Return the type of the non-linear constraints of the evaluator nlp, as a Symbol. The result is :inequality if the problem has only inequality constraints, :equality if it has only equality constraints, or :mixed if it has both types of constraints.

source
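
For instance (a sketch), one can dispatch on the returned Symbol:

ct = Argos.constraints_type(nlp)
if ct == :equality
    # all constraints are equalities c(u) = 0
elseif ct == :inequality
    # all constraints are bounded, c♭ ≤ c(u) ≤ c♯
else  # :mixed
    # both equality and inequality constraints are present
end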

Callbacks

Argos.update!Function
update!(nlp::AbstractNLPEvaluator, u::AbstractVector)

Update the internal structure inside nlp with the new entry u. This method has to be called before calling any other callback.

source
Argos.objectiveFunction
objective(nlp::AbstractNLPEvaluator, u)::Float64

Evaluate the objective at given variable u.

source
Argos.gradient!Function
gradient!(nlp::AbstractNLPEvaluator, g, u)

Evaluate the gradient of the objective, at given variable u. Store the result inplace in the vector g.

Note

The vector g should have the same dimension as u.

source
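
As a quick sanity check (a sketch, assuming an evaluator nlp and a point u are at hand), the gradient can be compared against a forward finite difference of the objective:

Argos.update!(nlp, u)
obj = Argos.objective(nlp, u)
g = zeros(length(u)); Argos.gradient!(nlp, g, u)
h = 1e-6
u1 = copy(u); u1[1] += h            # perturb the first coordinate
Argos.update!(nlp, u1)              # refresh the cache at the perturbed point
fd = (Argos.objective(nlp, u1) - obj) / h
Argos.update!(nlp, u)               # restore the cache at u
isapprox(g[1], fd; atol=1e-4)       # should hold up to truncation error
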
Argos.constraint!Function
constraint!(nlp::AbstractNLPEvaluator, cons, u)

Evaluate the constraints of the problem at given variable u. Store the result inplace, in the vector cons.

Note

The vector cons should have the same dimension as the result returned by n_constraints(nlp).

source
Argos.jacobian!Function
jacobian!(nlp::AbstractNLPEvaluator, jac::AbstractMatrix, u)

Evaluate the Jacobian of the constraints, at variable u. Store the result inplace, in the m x n dense matrix jac.

source
Argos.jacobian_coo!Function
jacobian_coo!(nlp::AbstractNLPEvaluator, jac::AbstractVector, u)

Evaluate the (sparse) Jacobian of the constraints at variable u in COO format. Store the result inplace, in the nnzj vector jac.

source
Argos.jprod!Function
jprod!(nlp::AbstractNLPEvaluator, jv, u, v)

Evaluate the Jacobian-vector product $J v$ of the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • u is a vector with dimension n
  • v is a vector with dimension n
  • jv is a vector with dimension m
source
Argos.jtprod!Function
jtprod!(nlp::AbstractNLPEvaluator, jv, u, v)

Evaluate the transpose Jacobian-vector product $J^{T} v$ of the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • u is a vector with dimension n
  • v is a vector with dimension m
  • jv is a vector with dimension n
source
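
As a small consistency check (a sketch, reusing the evaluator nlp and point u from above), the two products satisfy the adjoint identity ⟨J v, w⟩ = ⟨v, Jᵀ w⟩:

using LinearAlgebra: dot
n, m = Argos.n_variables(nlp), Argos.n_constraints(nlp)
v, w = rand(n), rand(m)
jv = zeros(m);  Argos.jprod!(nlp, jv, u, v)    # forward product J v
jtw = zeros(n); Argos.jtprod!(nlp, jtw, u, w)  # transpose product Jᵀ w
isapprox(dot(jv, w), dot(v, jtw); rtol=1e-8)   # should hold up to round-off
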
Argos.ojtprod!Function
ojtprod!(nlp::AbstractNLPEvaluator, jv, u, σ, v)

Evaluate the transpose Jacobian-vector product J' * [σ ; v], with J the Jacobian of the vector [f(x); h(x)], where f(x) is the objective and h(x) the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • jv is a vector with dimension n
  • u is a vector with dimension n
  • σ is a scalar
  • v is a vector with dimension m
source
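
In other words (a sketch, reusing n, m and u from above), ojtprod! fuses the objective gradient and the transpose Jacobian-vector product in a single call, which is exactly the quantity needed to assemble the gradient of the Lagrangian:

jv = zeros(n)
σ, v = 1.0, rand(m)
Argos.ojtprod!(nlp, jv, u, σ, v)   # jv = σ ∇f(u) + J(u)ᵀ v
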
Argos.hessian!Function
hessian!(nlp::AbstractNLPEvaluator, H, u)

Evaluate the Hessian ∇²f(u) of the objective function f(u). Store the result inplace, in the n x n dense matrix H.

source
Argos.hessian_coo!Function
hessian_coo!(nlp::AbstractNLPEvaluator, hess::AbstractVector, u)

Evaluate the (sparse) Hessian of the constraints at variable u in COO format. Store the result inplace, in the nnzh vector hess.

source
Argos.hessprod!Function
hessprod!(nlp::AbstractNLPEvaluator, hessvec, u, v)

Evaluate the Hessian-vector product ∇²f(u) * v of the objective evaluated at variable u. Store the result inplace, in the vector hessvec.

Note

The vector hessvec should have the same length as u.

source
Argos.hessian_lagrangian_prod!Function
hessian_lagrangian_prod!(nlp::AbstractNLPEvaluator, hessvec, u, y, σ, v)

Evaluate the Hessian-vector product of the Lagrangian function $L(u, y) = f(u) + \sum_i y_i c_i(u)$ with a vector v:

\[∇²L(u, y) ⋅ v = σ ∇²f(u) ⋅ v + \sum_i y_i ∇²c_i(u) ⋅ v\]

Store the result inplace, in the vector hessvec.

Arguments

  • hessvec is an AbstractVector with dimension n, which is modified inplace.
  • u is an AbstractVector with dimension n, storing the current variable.
  • y is an AbstractVector with dimension m, storing the current constraints' multipliers.
  • σ is a scalar, encoding the objective's scaling.
  • v is a vector with dimension n.
source
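
For instance (a sketch, reusing nlp, u, n and m from above, with one multiplier per constraint and unit objective scaling):

y = rand(m)                        # multipliers, one per constraint
v, hv = rand(n), zeros(n)
Argos.hessian_lagrangian_prod!(nlp, hv, u, y, 1.0, v)
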
Argos.hessian_lagrangian_penalty_prod!Function
hessian_lagrangian_penalty_prod!(nlp::AbstractNLPEvaluator, hessvec, u, y, σ, d, v)

Evaluate the Hessian-vector product of the Augmented Lagrangian function $L(u, y) = f(u) + \sum_i y_i c_i(u) + \frac{1}{2} \sum_i d_i c_i(u)^2$ with a vector v:

\[∇²L(u, y) ⋅ v = σ ∇²f(u) ⋅ v + \sum_i (y_i + d_i) ∇²c_i(u) ⋅ v + \sum_i d_i ∇c_i(u)^T ∇c_i(u)\]

Store the result inplace, in the vector hessvec.

Arguments

  • hessvec is an AbstractVector with dimension n, which is modified inplace.
  • u is an AbstractVector with dimension n, storing the current variable.
  • y is an AbstractVector with dimension m, storing the current constraints' multipliers.
  • σ is a scalar.
  • v is a vector with dimension n.
  • d is a vector with dimension m.
source

Utilities

Argos.reset!Function
reset!(nlp::AbstractNLPEvaluator)

Reset evaluator nlp to default configuration.

source
diff --git a/dev/lib/evaluators/index.html b/dev/lib/evaluators/index.html
index ea36e58..cf2cedd 100644
--- a/dev/lib/evaluators/index.html
+++ b/dev/lib/evaluators/index.html
@@ -90,4 +90,4 @@
 julia> @assert isa(x, Array) # x is defined on the host memory
 julia> Argos.objective(bdg, x) # evaluate the objective on the device
-source
+source
diff --git a/dev/lib/kkt/index.html b/dev/lib/kkt/index.html
index 2c425e2..a69649c 100644
--- a/dev/lib/kkt/index.html
+++ b/dev/lib/kkt/index.html
@@ -23,4 +23,4 @@
 julia> kkt = Argos.MixedAuglagKKTSystem{T, VT, MT}(opf)
 julia> MadNLP.get_kkt(kkt) # return the matrix to factorize

Notes

MixedAuglagKKTSystem can be instantiated either on the host memory (CPU) or on an NVIDIA GPU using CUDA.

Supports only bound-constrained optimization problems (hence no Jacobian).

References

[PMSSA2022] Pacaud, François, Daniel Adrian Maldonado, Sungho Shin, Michel Schanen, and Mihai Anitescu. "A feasible reduced space method for real-time optimal power flow." Electric Power Systems Research 212 (2022): 108268.

source

Notes

MixedAuglagKKTSystem can be instantiated either on the host memory (CPU) or on an NVIDIA GPU using CUDA.

Supports only bound-constrained optimization problems (hence no Jacobian).

References

[PMSSA2022] Pacaud, François, Daniel Adrian Maldonado, Sungho Shin, Michel Schanen, and Mihai Anitescu. "A feasible reduced space method for real-time optimal power flow." Electric Power Systems Research 212 (2022): 108268.

source diff --git a/dev/lib/wrappers/index.html b/dev/lib/wrappers/index.html index 536ce78..e22a97b 100644 --- a/dev/lib/wrappers/index.html +++ b/dev/lib/wrappers/index.html @@ -9,4 +9,4 @@ julia> nlp = Argos.ReducedSpaceEvaluator(datafile); julia> ev = Argos.MOIEvaluator(nlp) -

Attributes

source +

Attributes

source diff --git a/dev/man/fullspace/index.html b/dev/man/fullspace/index.html index 19da66e..9a9389a 100644 --- a/dev/man/fullspace/index.html +++ b/dev/man/fullspace/index.html @@ -83,7 +83,7 @@ #lines: 9 giving a mathematical formulation with: #controls: 5 - #states : 14, ExaPF.ComposedExpressions{ExaPF.PolarBasis{Vector{Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}, ExaPF.MultiExpressions}(PolarBasis (AbstractExpression), ExaPF.MultiExpressions(ExaPF.AutoDiff.AbstractExpression[CostFunction (AbstractExpression), PowerFlowBalance (AbstractExpression), PowerGenerationBounds (AbstractExpression), LineFlows (AbstractExpression)])), [11, 12, 13, 14, 15, 16, 17, 18, 4, 5, 6, 7, 8, 9, 1, 2, 3, 20, 21], 21-elements NetworkStack{Vector{ForwardDiff.Dual{Nothing, Float64, 8}}}, 21-elements NetworkStack{Vector{ForwardDiff.Dual{Nothing, Float64, 8}}}, [1, 1, 1, 2, 3, 4, 2, 3, 4, 5, 6, 7, 5, 6, 7, 8, 8, 1, 1], 8, ForwardDiff.Dual{Nothing, Float64, 8}[Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0) … Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)], ForwardDiff.Dual{Nothing, Float64, 8}[Dual{Nothing}(0.0,NaN,0.0,0.0,0.0,0.0,2.9707941074e-313,6.9344700776745e-310,2.4802815e-316), Dual{Nothing}(0.0,2.121995791e-314,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,NaN,0.0,0.0,0.0,0.0,0.0,6.9342578457163e-310,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,NaN,0.0,0.0,0.0,0.0,0.0,6.9344700452954e-310,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,2.121995791e-314,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,2.9707941074e-313,3.23791e-318,2.14329155e-316) … Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,NaN,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,NaN,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,NaN,0.0,0.0,0.0,0.0,0.0,6.9342578457163e-310,0.0), Dual{Nothing}(0.0,2.121995791e-314,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,NaN,0.0,0.0,0.0,0.0,0.0,6.9342578457163e-310,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,2.121995791e-314,0.0,0.0,0.0,0.0)], sparse([1, 7, 13, 16, 2, 5, 11, 17, 3, 4 … 1, 7, 13, 16, 2, 5, 11, 17, 18, 19], [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 9, 9, 9, 9, 9, 9, 9, 10, 10, 10, 10, 10, 
10, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 15, 15, 15, 16, 16, 16, 16, 17, 17, 17, 17, 18, 19], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 … 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 19, 19))

generating the matrix

flp.hess.H
19×19 SparseArrays.SparseMatrixCSC{Float64, Int64} with 103 stored entries:
+    #states  :   14, ExaPF.ComposedExpressions{ExaPF.PolarBasis{Vector{Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}, ExaPF.MultiExpressions}(PolarBasis (AbstractExpression), ExaPF.MultiExpressions(ExaPF.AutoDiff.AbstractExpression[CostFunction (AbstractExpression), PowerFlowBalance (AbstractExpression), PowerGenerationBounds (AbstractExpression), LineFlows (AbstractExpression)])), [11, 12, 13, 14, 15, 16, 17, 18, 4, 5, 6, 7, 8, 9, 1, 2, 3, 20, 21], 21-elements NetworkStack{Vector{ForwardDiff.Dual{Nothing, Float64, 8}}}, 21-elements NetworkStack{Vector{ForwardDiff.Dual{Nothing, Float64, 8}}}, [1, 1, 1, 2, 3, 4, 2, 3, 4, 5, 6, 7, 5, 6, 7, 8, 8, 1, 1], 8, ForwardDiff.Dual{Nothing, Float64, 8}[Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)  …  Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)], ForwardDiff.Dual{Nothing, Float64, 8}[Dual{Nothing}(0.0,6.92492106423607e-310,0.0,0.0,6.92513326381517e-310,0.0,0.0,6.92513326381517e-310,0.0), Dual{Nothing}(0.0,6.92492106423607e-310,0.0,0.0,6.92513326381517e-310,0.0,0.0,6.92513326381517e-310,0.0), Dual{Nothing}(0.0,6.92513326381517e-310,0.0,0.0,6.92513326381517e-310,0.0,0.0,6.92513326381517e-310,0.0), Dual{Nothing}(0.0,6.92513326381517e-310,0.0,0.0,6.92492106423607e-310,0.0,0.0,6.92492106423607e-310,0.0), Dual{Nothing}(0.0,6.92513326381517e-310,0.0,0.0,6.92492106423607e-310,0.0,0.0,6.92513326381517e-310,0.0), Dual{Nothing}(0.0,6.92513326381517e-310,0.0,0.0,6.92513326381517e-310,0.0,0.0,6.92513326381517e-310,0.0), Dual{Nothing}(0.0,6.92513326381517e-310,0.0,0.0,6.92513326381517e-310,0.0,0.0,6.92513326381517e-310,0.0), Dual{Nothing}(0.0,6.92513326381517e-310,0.0,0.0,6.92492106423607e-310,0.0,0.0,6.92492106423607e-310,0.0), Dual{Nothing}(0.0,6.92513326381517e-310,0.0,0.0,6.92513326381517e-310,0.0,0.0,6.92513326381517e-310,0.0), Dual{Nothing}(0.0,6.92492106423607e-310,0.0,0.0,6.92492106423607e-310,0.0,0.0,6.92513326381517e-310,0.0)  …  Dual{Nothing}(0.0,1.412400398466485e-309,0.0,0.0,1.4286973261411e-309,0.0,0.0,1.44499425381571e-309,0.0), Dual{Nothing}(0.0,1.461291181490325e-309,0.0,0.0,1.47758810916494e-309,0.0,0.0,1.49388503683955e-309,0.0), Dual{Nothing}(0.0,1.510181964514165e-309,0.0,0.0,1.52647889218878e-309,0.0,0.0,1.54277581986339e-309,0.0), Dual{Nothing}(0.0,1.559072747538005e-309,0.0,0.0,1.57536967521262e-309,0.0,0.0,1.59166660288723e-309,0.0), Dual{Nothing}(0.0,1.607963530561845e-309,0.0,0.0,1.62426045823646e-309,0.0,0.0,1.64055738591107e-309,0.0), Dual{Nothing}(0.0,1.656854313585685e-309,0.0,0.0,1.6731512412603e-309,0.0,0.0,1.68944816893491e-309,0.0), 
Dual{Nothing}(0.0,1.705745096609524e-309,0.0,0.0,1.72204202428414e-309,0.0,0.0,1.73833895195875e-309,0.0), Dual{Nothing}(0.0,1.754635879633364e-309,0.0,0.0,1.77093280730798e-309,0.0,0.0,1.78722973498259e-309,0.0), Dual{Nothing}(0.0,1.803526662657204e-309,0.0,0.0,1.81982359033182e-309,0.0,0.0,1.83612051800643e-309,0.0), Dual{Nothing}(0.0,1.852417445681044e-309,0.0,0.0,1.868714373355657e-309,0.0,0.0,1.88501130103027e-309,0.0)], sparse([1, 7, 13, 16, 2, 5, 11, 17, 3, 4  …  1, 7, 13, 16, 2, 5, 11, 17, 18, 19], [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 9, 9, 9, 9, 9, 9, 9, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 15, 15, 15, 16, 16, 16, 16, 17, 17, 17, 17, 18, 19], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0  …  0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 19, 19))

generating the matrix

flp.hess.H
19×19 SparseArrays.SparseMatrixCSC{Float64, Int64} with 103 stored entries:
 ⎡⠑⣤⡂⠡⣤⡂⠡⠌⠂⠀⎤
 ⎢⠌⡈⠻⣦⡈⠻⣦⠠⠁⠀⎥
 ⎢⠠⠻⣦⡈⠻⣦⡈⠁⠄⠀⎥
@@ -124,4 +124,4 @@
 ⎢⠠⠻⣦⡈⠳⣄⠀⠀⠀⠀⎥
 ⎢⡁⠆⠈⡛⠆⠈⡓⢄⠀⠀⎥
 ⎣⠈⠀⠁⠀⠀⠁⠀⠀⠑⠄⎦
Info

For the Hessian, only the lower-triangular part is returned.

Deport on CUDA GPU

Deporting all the operations on a CUDA GPU simply amounts to instantiating a FullSpaceEvaluator on the GPU, with

using CUDAKernels # assuming the CUDAKernels package is installed
-flp = Argos.FullSpaceEvaluator(datafile; device=CUDADevice())

Then, the API remains exactly the same as on the CPU.

When using device=CUDADevice(), the model is entirely instantiated on the device, with no data left on the host (hence minimizing communication costs). The computation of the derivatives is streamlined by propagating the tangents in parallel, leading to faster evaluations of the callbacks. As expected, the larger the model, the more significant the performance gain.

+flp = Argos.FullSpaceEvaluator(datafile; device=CUDADevice())

Then, the API remains exactly the same as on the CPU.

When using device=CUDADevice(), the model is entirely instantiated on the device, with no data left on the host (hence minimizing communication costs). The computation of the derivatives is streamlined by propagating the tangents in parallel, leading to faster evaluations of the callbacks. As expected, the larger the model, the more significant the performance gain.
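
As a minimal sketch (assuming CUDA.jl is available and flp was instantiated with device=CUDADevice() as above), the callbacks then consume and produce device arrays directly:

using CUDA
u = Argos.initial(flp)        # initial point, stored on the device
Argos.update!(flp, u)         # cache refreshed entirely on the GPU
obj = Argos.objective(flp, u)
g = similar(u)                # gradient buffer allocated on the device
Argos.gradient!(flp, g, u)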

diff --git a/dev/man/moi_wrapper/index.html b/dev/man/moi_wrapper/index.html
index d4e1269..38c0714 100644
--- a/dev/man/moi_wrapper/index.html
+++ b/dev/man/moi_wrapper/index.html
@@ -72,6 +72,6 @@
 Number of equality constraint Jacobian evaluations = 16
 Number of inequality constraint Jacobian evaluations = 16
 Number of Lagrangian Hessian evaluations = 15
-Total seconds in IPOPT = 6.531
+Total seconds in IPOPT = 6.406

-EXIT: Optimal Solution Found.
+EXIT: Optimal Solution Found.
diff --git a/dev/man/nlpmodel_wrapper/index.html b/dev/man/nlpmodel_wrapper/index.html
index cc0cf83..bf0c0b3 100644
--- a/dev/man/nlpmodel_wrapper/index.html
+++ b/dev/man/nlpmodel_wrapper/index.html
@@ -72,4 +72,4 @@
 flp = Argos.FullSpaceEvaluator(datafile; device=CUDADevice())

The OPFModel structure works exclusively on the host memory, so we have to bridge the evaluator flp to the host before creating a new instance of OPFModel:

bridge = Argos.bridge(flp)
 model = Argos.OPFModel(bridge)
-
Note

Bridging an evaluator between the host and the device induces significant data movements, as every input and output has to be transferred back and forth between the host and the device. However, we have observed that in practice the data-transfer time is negligible compared to the other operations (linear algebra, KKT system solution) performed inside the optimization algorithm.

+
Note

Bridging an evaluator between the host and the device induces significant data movements, as every input and output has to be transferred back and forth between the host and the device. However, we have observed that in practice the data-transfer time is negligible compared to the other operations (linear algebra, KKT system solution) performed inside the optimization algorithm.
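
As an end-to-end sketch (assuming NLPModels.jl is available; the accessors follow the standard NLPModels API), the bridged model can then be queried like any other NLPModel:

using NLPModels
model = Argos.OPFModel(Argos.bridge(flp))
x0 = NLPModels.get_x0(model)   # initial point, on the host
NLPModels.obj(model, x0)       # objective, evaluated through the bridge
NLPModels.grad(model, x0)      # gradient, copied back to the host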

diff --git a/dev/man/overview/index.html b/dev/man/overview/index.html
index 0575c3d..76621c3 100644
--- a/dev/man/overview/index.html
+++ b/dev/man/overview/index.html
@@ -27,79 +27,79 @@
 Argos.update!(flp, x) # The values in the cache are modified accordingly
 [stack.vmag stack.vang]
9×2 Matrix{Float64}:
- 0.824864  0.0
- 0.597683  0.661255
- 0.677269  0.912596
- 0.1131    0.980597
- 0.329173  0.68885
- 0.479255  0.0968965
- 0.689037  0.858252
- 0.518034  0.166835
- 0.22913   0.573335
Note

Every time we have a new variable x, it is important to refresh the cache by explicitly calling Argos.update!(flp, x) before calling any other callback.

Callbacks

Now that the cache has been refreshed by calling update!, one can query the different callbacks to evaluate the objective, the constraints, and the derivatives:

Objective:

obj = Argos.objective(flp, x)
3540.7385448185646

Gradient:

g = zeros(n)
+ 0.675777  0.0
+ 0.995348  0.439021
+ 0.390195  0.290283
+ 0.677346  0.249865
+ 0.440688  0.141989
+ 0.673406  0.820983
+ 0.316749  0.749417
+ 0.733515  0.662152
+ 0.833386  0.810269
Note

Every time we have a new variable x, it is important to refresh the cache by explicitly calling Argos.update!(flp, x) before calling any other callback.

Callbacks

Now that the cache has been refreshed by calling update!, one can query the different callbacks to evaluate the objective, the constraints, and the derivatives:

Objective:

obj = Argos.objective(flp, x)
4851.853612648864

Gradient:

g = zeros(n)
 Argos.gradient!(flp, g, x)
 g
19-element Vector{Float64}:
      0.0
      0.0
-  2217.806220181578
+ 29437.604186189536
      0.0
      0.0
      0.0
      0.0
      0.0
- 29274.314955343318
+ 11090.975251391226
      0.0
      0.0
      0.0
      0.0
      0.0
-  4013.9101421194637
+ 11116.72630725791
      0.0
      0.0
-  1154.8120757330257
-  1917.9642521063754

Constraints:

cons = zeros(m)
+  1036.560181979569
+   970.342380868799

Constraints:

cons = zeros(m)
 Argos.constraint!(flp, cons, x)
 cons
36-element Vector{Float64}:
-  1.7420299686315928
-  3.2914932127709307
-  1.5178932970881343
-  1.3916342689673125
- -6.6387074843972655
-  6.942103793545683
- -5.570273053031254
-  1.3833762844157609
- -1.0842599193742255
-  0.8180290605558087
+ -3.1241111856818713
+ -2.6247404883497976
+ -0.8906113088723977
+ -0.7255595476718651
+  4.023554613514544
+  0.786238162156776
+  2.185282095422288
+  5.672966255251793
+  2.014378647032725
+ -0.7319899142120052
   ⋮
-  2.272254736319481
-  0.6035399120301014
-  0.5224205279709425
- 16.284493549059025
- 10.231255769004376
-  9.852742553029293
-  7.361870203699213
-  0.20670567539589702
-  0.0312336079238553
Note

All the callbacks are written to modify the data (constraints, gradient) inplace, to avoid unneeded allocations. In addition, Argos.jl provides versions that automatically allocate the return values:

g = Argos.gradient(flp, x)
+  3.9316857037552606
+  1.351040822515659
+  2.6131258760365514
+ 20.13751157568609
+  1.272236968781199
+ 17.623105793164537
+ 26.568505705539536
+  0.5220169230368751
+ 12.327989542913429
Note

All the callbacks are written to modify the data (constraints, gradient) inplace, to avoid unneeded allocations. In addition, Argos.jl provides versions that automatically allocate the return values:

g = Argos.gradient(flp, x)
 c = Argos.constraint(flp, x)
36-element Vector{Float64}:
-  1.7420299686315928
-  3.2914932127709307
-  1.5178932970881343
-  1.3916342689673125
- -6.6387074843972655
-  6.942103793545683
- -5.570273053031254
-  1.3833762844157609
- -1.0842599193742255
-  0.8180290605558087
+ -3.1241111856818713
+ -2.6247404883497976
+ -0.8906113088723977
+ -0.7255595476718651
+  4.023554613514544
+  0.786238162156776
+  2.185282095422288
+  5.672966255251793
+  2.014378647032725
+ -0.7319899142120052
   ⋮
-  2.272254736319481
-  0.6035399120301014
-  0.5224205279709425
- 16.284493549059025
- 10.231255769004376
-  9.852742553029293
-  7.361870203699213
-  0.20670567539589702
-  0.0312336079238553

Finally, one can reset the evaluator to its original state using reset!:

Argos.reset!(flp)
+  3.9316857037552606
+  1.351040822515659
+  2.6131258760365514
+ 20.13751157568609
+  1.272236968781199
+ 17.623105793164537
+ 26.568505705539536
+  0.5220169230368751
+ 12.327989542913429

Finally, one can reset the evaluator to its original state using reset!:

Argos.reset!(flp)
 [stack.vmag stack.vang]
9×2 Matrix{Float64}:
  1.0  0.0
  1.0  0.0
@@ -109,4 +109,4 @@
  1.0  0.0
  1.0  0.0
  1.0  0.0
- 1.0  0.0
+ 1.0  0.0
diff --git a/dev/man/reducedspace/index.html b/dev/man/reducedspace/index.html
index 82852a4..22aebf8 100644
--- a/dev/man/reducedspace/index.html
+++ b/dev/man/reducedspace/index.html
@@ -94,7 +94,7 @@
 * #iterations: 4
 * Time Jacobian (s) ........: 0.0001
 * Time linear solver (s) ...: 0.0001
- * Time total (s) ...........: 0.3778

with a slightly different solution (as we have loosened the tolerance):

stack = red.stack
+  * Time total (s) ...........: 0.3739

with a slightly different solution (as we have loosened the tolerance):

stack = red.stack
 [stack.vmag stack.vang]
9×2 Matrix{Float64}:
  1.1       0.0
  1.1       0.0478953
@@ -141,4 +141,4 @@
  -1573.41    -760.654    2476.81     -21.0085   -94.5838
    100.337    -60.9243    -21.0085  3922.1     2181.62
    105.971    -11.7018    -94.5838  2181.62    4668.9

As we will explain later, the computation of the reduced Jacobian and reduced Hessian can be streamlined on the GPU.

Deport on CUDA GPU

Instantiating a ReducedSpaceEvaluator on an NVIDIA GPU translates to:

using CUDAKernels # assuming the CUDAKernels package is installed
-red = Argos.ReducedSpaceEvaluator(datafile; device=CUDADevice(), nbatch_hessian=256)

The number of batches nbatch_hessian is the number of right-hand sides used to streamline the solution of the linear systems.

+red = Argos.ReducedSpaceEvaluator(datafile; device=CUDADevice(), nbatch_hessian=256)

The number of batches nbatch_hessian is the number of right-hand sides used to streamline the solution of the linear systems.
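
For instance (a sketch, assuming CUDA.jl is available; the dense-Hessian API is the same as on the CPU):

using CUDA
u = Argos.initial(red)
Argos.update!(red, u)
n = Argos.n_variables(red)
H = CUDA.zeros(Float64, n, n)   # dense reduced Hessian, stored on the device
Argos.hessian!(red, H, u)       # evaluated nbatch_hessian tangents at a time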

diff --git a/dev/optim/biegler/index.html b/dev/optim/biegler/index.html
index b3a2cef..ce67965 100644
--- a/dev/optim/biegler/index.html
+++ b/dev/optim/biegler/index.html
@@ -91,10 +91,10 @@
 Number of constraint evaluations = 17
 Number of constraint Jacobian evaluations = 15
 Number of Lagrangian Hessian evaluations = 14
-Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 9.343
+Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 9.258
 Total wall-clock secs in linear solver = 0.023
 Total wall-clock secs in NLP function evaluations = 0.001
-Total wall-clock secs = 9.367
+Total wall-clock secs = 9.282

 EXIT: Optimal Solution Found.
-"Execution stats: Optimal Solution Found."
Info

Note that we get the exact same convergence as in the full-space.

+"Execution stats: Optimal Solution Found."
Info

Note that we get the exact same convergence as in the full-space.

diff --git a/dev/optim/fullspace/index.html b/dev/optim/fullspace/index.html
index c313c82..27a8148 100644
--- a/dev/optim/fullspace/index.html
+++ b/dev/optim/fullspace/index.html
@@ -80,10 +80,10 @@
 Number of constraint evaluations = 17
 Number of constraint Jacobian evaluations = 15
 Number of Lagrangian Hessian evaluations = 14
-Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 2.680
-Total wall-clock secs in linear solver = 0.406
+Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 2.608
+Total wall-clock secs in linear solver = 0.423
 Total wall-clock secs in NLP function evaluations = 0.001
-Total wall-clock secs = 3.088
+Total wall-clock secs = 3.032

 EXIT: Optimal Solution Found.
 "Execution stats: Optimal Solution Found."

Querying the solution

MadNLP returns a MadNLPExecutionStats object storing the solution. One can query the optimal objective as:

stats.objective
5296.6862028704

and the optimal solution:

stats.solution
41-element Vector{Float64}:
@@ -164,10 +164,10 @@
 Number of Lagrangian Hessian evaluations             = 5
 Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  0.007
 Total wall-clock secs in linear solver                      =  0.001
-Total wall-clock secs in NLP function evaluations           =  0.001
+Total wall-clock secs in NLP function evaluations           =  0.000
 Total wall-clock secs                                       =  0.008
 
 EXIT: Maximum Number of Iterations Exceeded.
 "Execution stats: Maximum Number of Iterations Exceeded."

Most importantly, one may want to use a different sparse linear solver than UMFPACK, employed by default in MadNLP. We recommend using HSL solvers (the installation procedure is detailed here). Once HSL is installed, one can solve the OPF with:

using MadNLPHSL
 solver = MadNLP.MadNLPSolver(model; linear_solver=Ma27Solver)
-MadNLP.solve!(solver)
+MadNLP.solve!(solver)
diff --git a/dev/optim/reducedspace/index.html b/dev/optim/reducedspace/index.html
index d0a0903..e78bb6f 100644
--- a/dev/optim/reducedspace/index.html
+++ b/dev/optim/reducedspace/index.html
@@ -141,10 +141,10 @@
 Number of constraint evaluations = 24
 Number of constraint Jacobian evaluations = 23
 Number of Lagrangian Hessian evaluations = 22
-Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 6.069
-Total wall-clock secs in linear solver = 0.008
-Total wall-clock secs in NLP function evaluations = 0.401
-Total wall-clock secs = 6.478
+Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 5.900
+Total wall-clock secs in linear solver = 0.007
+Total wall-clock secs in NLP function evaluations = 0.346
+Total wall-clock secs = 6.253

 EXIT: Optimal Solution Found.
 "Execution stats: Optimal Solution Found."
Info

We recommend setting the tolerance above the tolerance of the Newton-Raphson algorithm used inside ReducedSpaceEvaluator. Indeed, the power flow is solved only approximately, leading to slightly inaccurate evaluations and derivatives and impacting the convergence of the interior-point algorithm. In general, we recommend setting tol=1e-5.
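
For instance (a sketch, with model wrapping a ReducedSpaceEvaluator as above):

using MadNLP
solver = MadNLP.MadNLPSolver(model; tol=1e-5)
MadNLP.solve!(solver)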

Info

Here, we are using Lapack on the CPU to solve the condensed KKT system at each iteration of the interior-point algorithm. However, if an NVIDIA GPU is available, we recommend using a CUDA-accelerated Lapack version, which is more efficient than the default Lapack. If MadNLPGPU is installed, this amounts to

using MadNLPGPU
@@ -184,4 +184,4 @@
  1.1       0.0105224
  1.08949  -0.0208788
  1.1       0.0158063
- 1.07176  -0.0805509
+ 1.07176  -0.0805509
diff --git a/dev/quickstart/cpu/index.html b/dev/quickstart/cpu/index.html
index f1117a5..6ca54a5 100644
--- a/dev/quickstart/cpu/index.html
+++ b/dev/quickstart/cpu/index.html
@@ -52,10 +52,10 @@
 Number of constraint evaluations = 21
 Number of constraint Jacobian evaluations = 20
 Number of Lagrangian Hessian evaluations = 19
-Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 2.633
-Total wall-clock secs in linear solver = 0.044
-Total wall-clock secs in NLP function evaluations = 4.261
-Total wall-clock secs = 6.938
+Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 2.636
+Total wall-clock secs in linear solver = 0.072
+Total wall-clock secs in NLP function evaluations = 4.182
+Total wall-clock secs = 6.890

 EXIT: Optimal Solution Found.

Biegler's method (linearize-then-reduce)

Tip
  • Biegler's reduction condenses and reduces the KKT linear system to a dense linear system whose size is given by the number of degrees of freedom in the problem. We recommend factorizing the resulting system with the Cholesky factorization shipped with Lapack.
  • Note we obtain exactly the same convergence as with the previous FullSpace method, as the two methods are equivalent.
julia> Argos.run_opf(datafile, Argos.BieglerReduction(); lapack_algorithm=MadNLP.CHOLESKY);This is MadNLP version v0.7.0, running with Lapack-CPU (CHOLESKY)
 
@@ -109,10 +109,10 @@
 Number of constraint evaluations                     = 21
 Number of constraint Jacobian evaluations            = 20
 Number of Lagrangian Hessian evaluations             = 19
-Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  3.232
-Total wall-clock secs in linear solver                      =  0.016
+Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  3.239
+Total wall-clock secs in linear solver                      =  0.014
 Total wall-clock secs in NLP function evaluations           =  0.012
-Total wall-clock secs                                       =  3.261
+Total wall-clock secs                                       =  3.265
 
 EXIT: Optimal Solution Found.

Dommel & Tinney's method (reduce-then-linearize)

Tip
  • DommelTinney works in the reduced space, and the associated formulation has fewer variables than in the full space (107 versus 288).
  • The reduced Jacobian and reduced Hessian are dense, so DommelTinney can require a large amount of memory on the largest instances.
  • As with BieglerReduction, we recommend using Lapack with a Cholesky factorization to solve the KKT system.
  • Note that we have to increase MadNLP's tolerance (parameter tol) as we cannot optimize below the tolerance of the Newton-Raphson employed under the hood (1e-10 by default).
julia> Argos.run_opf(datafile, Argos.DommelTinney(); tol=1e-5);This is MadNLP version v0.7.0, running with Lapack-CPU (CHOLESKY)
 
@@ -164,9 +164,9 @@
 Number of constraint evaluations                     = 19
 Number of constraint Jacobian evaluations            = 18
 Number of Lagrangian Hessian evaluations             = 17
-Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  4.461
-Total wall-clock secs in linear solver                      =  0.008
-Total wall-clock secs in NLP function evaluations           =  1.202
-Total wall-clock secs                                       =  5.671
+Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  4.445
+Total wall-clock secs in linear solver                      =  0.007
+Total wall-clock secs in NLP function evaluations           =  1.163
+Total wall-clock secs                                       =  5.615
 
-EXIT: Optimal Solution Found.
+EXIT: Optimal Solution Found. diff --git a/dev/quickstart/cuda/index.html b/dev/quickstart/cuda/index.html index bde87cc..8329156 100644 --- a/dev/quickstart/cuda/index.html +++ b/dev/quickstart/cuda/index.html @@ -6,4 +6,4 @@

Full-space method

ArgosCUDA.run_opf_gpu(datafile, Argos.FullSpace())
 

Biegler's method (linearize-then-reduce)

ArgosCUDA.run_opf_gpu(datafile, Argos.BieglerReduction(); linear_solver=LapackGPUSolver)
 

Dommel & Tinney's method (reduce-then-linearize)

ArgosCUDA.run_opf_gpu(datafile, Argos.DommelTinney(); linear_solver=LapackGPUSolver)
-
+
diff --git a/dev/references/index.html b/dev/references/index.html
index 3a06aa9..c7a0781 100644
--- a/dev/references/index.html
+++ b/dev/references/index.html
@@ -1,2 +1,2 @@
-References · Argos.jl

References

Argos has led to several publications in peer-reviewed journals.

PP2022 details how Argos evaluates the second-order reduced derivatives in parallel on the GPU. All results were generated with this artifact.

PSCC2022 uses the augmented Lagrangian algorithm implemented in Argos to solve static and real-time OPF. All results were generated with this artifact.

ARXIV2022 demonstrates the full capabilities of Argos to solve large-scale OPF on CUDA GPUs, both in the full space and in the reduced space.

+References · Argos.jl

References

Argos has led to several publications in peer-reviewed journals.

PP2022 details how Argos evaluates the second-order reduced derivatives in parallel on the GPU. All results were generated with this artifact.

PSCC2022 uses the augmented Lagrangian algorithm implemented in Argos to solve static and real-time OPF. All results were generated with this artifact.

ARXIV2022 demonstrates the full capabilities of Argos to solve large-scale OPF on CUDA GPUs, both in the full space and in the reduced space.