diff --git a/dev/index.html b/dev/index.html index c35990f..898036b 100644 --- a/dev/index.html +++ b/dev/index.html @@ -5,4 +5,4 @@ author={Pacaud, Fran{\c{c}}ois and Shin, Sungho and Schanen, Michel and Maldonado, Daniel Adrian and Anitescu, Mihai}, journal={arXiv preprint arXiv:2203.11875}, year={2022} -}

+}

Funding

This research was supported by the Exascale Computing Project (17-SC-20-SC), a joint project of the U.S. Department of Energy’s Office of Science and National Nuclear Security Administration, responsible for delivering a capable exascale ecosystem, including software, applications, and hardware technology, to support the nation’s exascale computing imperative.

Table of contents

Quickstart

OPF Model

OPF Solution

Wrappers

Library

diff --git a/dev/lib/api/index.html b/dev/lib/api/index.html index 0a1d6bb..c9adfc6 100644 --- a/dev/lib/api/index.html +++ b/dev/lib/api/index.html @@ -2,4 +2,4 @@ Evaluators API · Argos.jl

Evaluator API

Description

Argos.AbstractNLPEvaluatorType
AbstractNLPEvaluator

AbstractNLPEvaluator implements the bridge between the problem formulation (see ExaPF.AbstractFormulation) and the optimization solver. Once the problem formulation is bridged, the evaluator can evaluate:

  • the objective;
  • the gradient of the objective;
  • the constraints;
  • the Jacobian of the constraints;
  • the Jacobian-vector and transpose-Jacobian vector products of the constraints;
  • the Hessian of the objective;
  • the Hessian of the Lagrangian.
source
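
To make this concrete, here is a minimal sketch of the callback workflow (the case file case9.m and the use of FullSpaceEvaluator are illustrative assumptions; any datafile supported by ExaPF works):

using Argos

datafile = "case9.m"               # hypothetical MATPOWER instance
nlp = Argos.FullSpaceEvaluator(datafile)

n = Argos.n_variables(nlp)
m = Argos.n_constraints(nlp)

x = Argos.initial(nlp)             # default initial point
Argos.update!(nlp, x)              # refresh the internal cache first

obj = Argos.objective(nlp, x)      # objective
g = zeros(n); Argos.gradient!(nlp, g, x)    # gradient, stored inplace in g
c = zeros(m); Argos.constraint!(nlp, c, x)  # constraints, stored inplace in c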

API Reference

Optimization

Argos.optimize!Function
optimize!(optimizer, nlp::AbstractNLPEvaluator, x0)

Use the optimization routine implemented in optimizer to solve the optimal power flow problem specified in the evaluator nlp. The initial point is specified by x0.

Return the solution as a named tuple, with fields

  • status::MOI.TerminationStatus: Solver's termination status, as specified by MOI
  • minimum::Float64: final objective
  • minimizer::AbstractVector: final solution vector, with same ordering as the Variables specified in nlp.
optimize!(optimizer, nlp::AbstractNLPEvaluator)

Wrap the previous optimize! function, passing as initial guess x0 the initial value returned by initial(nlp).

Examples

nlp = ExaPF.ReducedSpaceEvaluator(datafile)
 optimizer = Ipopt.Optimizer()
 solution = ExaPF.optimize!(optimizer, nlp)
-

+

Notes

By default, the optimization routine solves a minimization problem.

source

Attributes

Argos.VariablesType
Variables <: AbstractNLPAttribute end

Attribute corresponding to the optimization variables attached to a given AbstractNLPEvaluator.

source
Argos.ConstraintsType
Constraints <: AbstractNLPAttribute end

Attribute corresponding to the constraints attached to a given AbstractNLPEvaluator.

source
Argos.n_variablesFunction
n_variables(nlp::AbstractNLPEvaluator)

Get the number of variables in the problem.

source
Argos.n_constraintsFunction
n_constraints(nlp::AbstractNLPEvaluator)

Get the number of constraints in the problem.

source
Argos.constraints_typeFunction
constraints_type(nlp::AbstractNLPEvaluator)

Return the type of the non-linear constraints of the evaluator nlp, as a Symbol: :inequality if the problem has only inequality constraints, :equality if it has only equality constraints, or :mixed if it has both types of constraints.

source

Callbacks

Argos.update!Function
update!(nlp::AbstractNLPEvaluator, u::AbstractVector)

Update the internal structure inside nlp with the new entry u. This method has to be called before any other callback.

source
Argos.objectiveFunction
objective(nlp::AbstractNLPEvaluator, u)::Float64

Evaluate the objective at given variable u.

source
Argos.gradient!Function
gradient!(nlp::AbstractNLPEvaluator, g, u)

Evaluate the gradient of the objective at given variable u. Store the result inplace in the vector g.

Note

The vector g should have the same dimension as u.

source
Argos.constraint!Function
constraint!(nlp::AbstractNLPEvaluator, cons, u)

Evaluate the constraints of the problem at given variable u. Store the result inplace, in the vector cons.

Note

The vector cons should have the same dimension as the result returned by n_constraints(nlp).

source
Argos.jacobian!Function
jacobian!(nlp::AbstractNLPEvaluator, jac::AbstractMatrix, u)

Evaluate the Jacobian of the constraints, at variable u. Store the result inplace, in the m x n dense matrix jac.

source
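
For instance, a minimal sketch of assembling the dense Jacobian, reusing nlp and x from the sketch in the Description section:

n, m = Argos.n_variables(nlp), Argos.n_constraints(nlp)
J = zeros(m, n)
Argos.update!(nlp, x)       # refresh the cache at x before differentiating
Argos.jacobian!(nlp, J, x)  # J now stores the m x n Jacobian of the constraints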
Argos.jacobian_coo!Function
jacobian_coo!(nlp::AbstractNLPEvaluator, jac::AbstractVector, u)

Evaluate the (sparse) Jacobian of the constraints at variable u in COO format. Store the result inplace, in the nnzj vector jac.

source
Argos.jprod!Function
jprod!(nlp::AbstractNLPEvaluator, jv, u, v)

Evaluate the Jacobian-vector product $J v$ of the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • u is a vector with dimension n
  • v is a vector with dimension n
  • jv is a vector with dimension m
source
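
As a quick sketch, the result of jprod! can be cross-checked against the dense Jacobian J assembled above (names reused from the previous sketches):

v = rand(n)                   # direction in the variable space (dimension n)
jv = zeros(m)                 # output buffer (dimension m)
Argos.jprod!(nlp, jv, x, v)
@assert isapprox(jv, J * v, rtol=1e-6)  # should match the dense product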
Argos.jtprod!Function
jtprod!(nlp::AbstractNLPEvaluator, jv, u, v)

Evaluate the transpose Jacobian-vector product $J^{T} v$ of the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • u is a vector with dimension n
  • v is a vector with dimension m
  • jv is a vector with dimension n
source
Argos.ojtprod!Function
ojtprod!(nlp::AbstractNLPEvaluator, jv, u, σ, v)

Evaluate the transpose Jacobian-vector product J' * [σ ; v], with J the Jacobian of the vector [f(x); h(x)], where f(x) is the objective and h(x) the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • jv is a vector with dimension n
  • u is a vector with dimension n
  • σ is a scalar
  • v is a vector with dimension m
source
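
Since J stacks the gradient of f above the Jacobian of h, ojtprod! is equivalent to σ ∇f(u) + ∇h(u)ᵀ v. A sketch of this identity, reusing names from the previous sketches:

σ = 1.0
v = rand(m)
jv = zeros(n)
Argos.ojtprod!(nlp, jv, x, σ, v)
# equivalent two-call version, combining gradient! and jtprod!
g = zeros(n); Argos.gradient!(nlp, g, x)
jtv = zeros(n); Argos.jtprod!(nlp, jtv, x, v)
@assert isapprox(jv, σ .* g .+ jtv, rtol=1e-6)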
Argos.hessian!Function
hessian!(nlp::AbstractNLPEvaluator, H, u)

Evaluate the Hessian ∇²f(u) of the objective function f(u). Store the result inplace, in the n x n dense matrix H.

source
Argos.hessian_coo!Function
hessian_coo!(nlp::AbstractNLPEvaluator, hess::AbstractVector, u)

Evaluate the (sparse) Hessian of the constraints at variable u in COO format. Store the result inplace, in the nnzh vector hess.

source
Argos.hessprod!Function
hessprod!(nlp::AbstractNLPEvaluator, hessvec, u, v)

Evaluate the Hessian-vector product ∇²f(u) * v of the objective evaluated at variable u. Store the result inplace, in the vector hessvec.

Note

The vector hessvec should have the same length as u.

source
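
A minimal sketch, reusing nlp and x from above:

v = rand(n)
hv = zeros(n)
Argos.update!(nlp, x)
Argos.hessprod!(nlp, hv, x, v)  # hv now stores ∇²f(x) ⋅ v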
Argos.hessian_lagrangian_prod!Function
hessian_lagrangian_prod!(nlp::AbstractNLPEvaluator, hessvec, u, y, σ, v)

Evaluate the Hessian-vector product of the Lagrangian function $L(u, y) = σ f(u) + \sum_i y_i c_i(u)$ with a vector v:

\[∇²L(u, y) ⋅ v = σ ∇²f(u) ⋅ v + \sum_i y_i ∇²c_i(u) ⋅ v\]

Store the result inplace, in the vector hessvec.

Arguments

  • hessvec is an AbstractVector with dimension n, which is modified inplace.
  • u is an AbstractVector with dimension n, storing the current variable.
  • y is an AbstractVector with dimension m, storing the current constraints' multipliers.
  • σ is a scalar, encoding the objective's scaling.
  • v is a vector with dimension n.
source
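
For illustration, a sketch of a typical call, with one multiplier per constraint (names reused from the sketches above):

y = rand(m)   # constraints' multipliers
σ = 1.0       # objective scaling
v = rand(n)
hv = zeros(n)
Argos.hessian_lagrangian_prod!(nlp, hv, x, y, σ, v)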
Argos.hessian_lagrangian_penalty_prod!Function
hessian_lagrangian_penalty_prod!(nlp::AbstractNLPEvaluator, hessvec, u, y, σ, d, v)

Evaluate the Hessian-vector product of the Augmented Lagrangian function $L(u, y) = σ f(u) + \sum_i y_i c_i(u) + \frac{1}{2} \sum_i d_i c_i(u)^2$ with a vector v:

\[∇²L(u, y) ⋅ v = σ ∇²f(u) ⋅ v + \sum_i (y_i + d_i) ∇²c_i(u) ⋅ v + \sum_i d_i (∇c_i(u)^T ⋅ v) ∇c_i(u)\]

Store the result inplace, in the vector hessvec.

Arguments

  • hessvec is an AbstractVector with dimension n, which is modified inplace.
  • u is an AbstractVector with dimension n, storing the current variable.
  • y is an AbstractVector with dimension m, storing the current constraints' multipliers.
  • σ is a scalar.
  • v is a vector with dimension n.
  • d is a vector with dimension m.
source

Utilities

Argos.reset!Function
reset!(nlp::AbstractNLPEvaluator)

Reset evaluator nlp to default configuration.

source
diff --git a/dev/lib/evaluators/index.html b/dev/lib/evaluators/index.html index 79e1b54..793b2b4 100644 --- a/dev/lib/evaluators/index.html +++ b/dev/lib/evaluators/index.html @@ -88,4 +88,4 @@ julia> @assert isa(x, Array) # x is defined on the host memory julia> Argos.objective(bdg, x) # evaluate the objective on the device -source +source diff --git a/dev/lib/kkt/index.html b/dev/lib/kkt/index.html index 20fcaf2..286647a 100644 --- a/dev/lib/kkt/index.html +++ b/dev/lib/kkt/index.html @@ -23,4 +23,4 @@ julia> kkt = Argos.MixedAuglagKKTSystem{T, VT, MT}(opf) julia> MadNLP.get_kkt(kkt) # return the matrix to factorize -

Notes

MixedAuglagKKTSystem can be instantiated both on the host memory (CPU) and on an NVIDIA GPU using CUDA.

Supports only bound-constrained optimization problems (so no Jacobian).

References

[PMSSA2022] Pacaud, François, Daniel Adrian Maldonado, Sungho Shin, Michel Schanen, and Mihai Anitescu. "A feasible reduced space method for real-time optimal power flow." Electric Power Systems Research 212 (2022): 108268.

source diff --git a/dev/lib/wrappers/index.html b/dev/lib/wrappers/index.html index a472d9f..201194e 100644 --- a/dev/lib/wrappers/index.html +++ b/dev/lib/wrappers/index.html @@ -9,4 +9,4 @@ julia> nlp = Argos.ReducedSpaceEvaluator(datafile); julia> ev = Argos.MOIEvaluator(nlp) -

Attributes

source diff --git a/dev/man/fullspace/index.html index 5db018d..b6bccda 100644 --- a/dev/man/fullspace/index.html +++ b/dev/man/fullspace/index.html @@ -83,7 +83,7 @@ #lines: 9 giving a mathematical formulation with: #controls: 5
+    #states  :   14, ExaPF.ComposedExpressions{ExaPF.PolarBasis{Vector{Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}, ExaPF.MultiExpressions}(PolarBasis (AbstractExpression), ExaPF.MultiExpressions(ExaPF.AutoDiff.AbstractExpression[CostFunction (AbstractExpression), PowerFlowBalance (AbstractExpression), PowerGenerationBounds (AbstractExpression), LineFlows (AbstractExpression)])), [11, 12, 13, 14, 15, 16, 17, 18, 4, 5, 6, 7, 8, 9, 1, 2, 3, 20, 21], 21-elements NetworkStack{Vector{ForwardDiff.Dual{Nothing, Float64, 8}}}, 21-elements NetworkStack{Vector{ForwardDiff.Dual{Nothing, Float64, 8}}}, [1, 1, 1, 2, 3, 4, 2, 3, 4, 5, 6, 7, 5, 6, 7, 8, 8, 1, 1], 8, ForwardDiff.Dual{Nothing, Float64, 8}[Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)  …  Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)], ForwardDiff.Dual{Nothing, Float64, 8}[Dual{Nothing}(6.90050949024426e-310,6.900509490249e-310,6.9005094902514e-310,6.90050949025375e-310,6.9005094902561e-310,6.9005094902585e-310,6.90050949026323e-310,6.9005094902656e-310,6.900509490268e-310), Dual{Nothing}(6.90050949027035e-310,6.9005094902751e-310,6.90050949027746e-310,6.9005094902822e-310,6.9005094902893e-310,6.90050949029406e-310,6.9005094903012e-310,6.9005094903083e-310,6.9005094903178e-310), Dual{Nothing}(6.9005094903249e-310,6.90050949032964e-310,6.900509490332e-310,6.9005094903344e-310,6.9005094903391e-310,6.90050949039367e-310,6.9005094903984e-310,6.9005094904008e-310,6.90050949040315e-310), Dual{Nothing}(6.9005094904079e-310,6.90050949041027e-310,6.9005094904174e-310,6.900509490434e-310,6.9005094904411e-310,6.90050949044347e-310,6.90050949044584e-310,6.9005094904506e-310,6.90050949045296e-310), Dual{Nothing}(6.90050949045533e-310,6.90050949046244e-310,6.9005094904648e-310,6.9005094904672e-310,6.90050949046956e-310,6.90050949047193e-310,6.9005094904743e-310,6.90050949047667e-310,6.90050949047904e-310), Dual{Nothing}(6.9005094904838e-310,6.90050949048616e-310,6.90050949048853e-310,6.9005094904909e-310,6.90050949049327e-310,6.90050949049564e-310,6.900509490498e-310,6.9005094905004e-310,6.90050949050276e-310), Dual{Nothing}(6.90050949050513e-310,6.90050949050987e-310,6.90050949051936e-310,6.90050949052173e-310,6.9005094905241e-310,6.9005094905312e-310,6.90050949053833e-310,6.9005094905407e-310,6.90050949054307e-310), Dual{Nothing}(6.90050949054544e-310,6.9005094905502e-310,6.90050949055256e-310,6.90050949055493e-310,6.9005094905573e-310,6.90050949055967e-310,6.9005094905644e-310,6.9005094905668e-310,6.90050949057153e-310), 
Dual{Nothing}(6.9005094905739e-310,6.9005119857319e-310,6.9005119858244e-310,6.90051198582913e-310,6.9005119858576e-310,6.90051198586233e-310,6.9005119858647e-310,6.90051198586707e-310,6.90051198586944e-310), Dual{Nothing}(6.9005119858718e-310,6.9005119858742e-310,6.90051198587656e-310,6.9005119858813e-310,6.90051198588367e-310,6.90051198588604e-310,6.9005119858884e-310,6.9005119858908e-310,6.90051198589316e-310)  …  Dual{Nothing}(6.900511988286e-310,6.9005119882884e-310,6.90051198829076e-310,6.90051198829313e-310,6.9005119882955e-310,6.90051198829787e-310,6.90051198830025e-310,6.900511988305e-310,6.90051198830973e-310), Dual{Nothing}(6.9005119883121e-310,6.90051198831447e-310,6.90051198831685e-310,6.9005119883192e-310,6.9005119883216e-310,6.9005119883975e-310,6.9005119884022e-310,6.90051198840696e-310,6.90051198840934e-310), Dual{Nothing}(6.9005119884141e-310,6.9005119890694e-310,6.90051198907415e-310,6.9005119890765e-310,6.9005119891263e-310,6.90051198916664e-310,6.9005119891714e-310,6.90051198917613e-310,6.90051198921407e-310), Dual{Nothing}(6.9005119892212e-310,6.9005119892686e-310,6.90051198927336e-310,6.90051198927573e-310,6.9005119892781e-310,6.90051198928047e-310,6.9005119892852e-310,6.90051198928996e-310,6.9005119892947e-310), Dual{Nothing}(6.90051198929707e-310,6.9005119893042e-310,6.90051198930893e-310,6.9005119893113e-310,6.90051198931367e-310,6.90051198931604e-310,6.9005119893184e-310,6.9005119893208e-310,6.90051198932553e-310), Dual{Nothing}(6.90051198933265e-310,6.9005119893374e-310,6.90051198933976e-310,6.90051198934213e-310,6.9005119893445e-310,6.90051198943225e-310,6.9005119894702e-310,6.90051198948205e-310,6.9005119895034e-310), Dual{Nothing}(6.90051198950577e-310,6.90051198950814e-310,6.9005119895129e-310,6.9005119895176e-310,6.90051198952237e-310,6.90051198952474e-310,6.9005119895271e-310,6.90051198953185e-310,6.9005119895366e-310), Dual{Nothing}(6.90051198953897e-310,6.90051198954134e-310,6.9005119895461e-310,6.90051198955082e-310,6.9005119895532e-310,6.90051198955794e-310,6.9005119895627e-310,6.90051198956743e-310,6.9005119895698e-310), Dual{Nothing}(6.90051198957217e-310,6.9005119895769e-310,6.9005119897429e-310,6.90051198974766e-310,6.9005119897524e-310,6.9005119913745e-310,6.9005119913769e-310,6.900511991384e-310,6.90051199138875e-310), Dual{Nothing}(6.9005119913935e-310,6.90051199139586e-310,6.90051199140535e-310,6.90051199141246e-310,6.9005119914172e-310,6.9005119914196e-310,6.90051199142195e-310,6.9005119914243e-310,6.9005119914267e-310)], sparse([1, 7, 13, 16, 2, 5, 11, 17, 3, 4  …  1, 7, 13, 16, 2, 5, 11, 17, 18, 19], [1, 1, 1, 1, 2, 2, 2, 2, 3, 3  …  16, 16, 16, 16, 17, 17, 17, 17, 18, 19], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0  …  0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 19, 19))

generating the matrix

flp.hess.H
19×19 SparseArrays.SparseMatrixCSC{Float64, Int64} with 103 stored entries:
 ⠑⣤⡂⠡⣤⡂⠡⠌⠂⠀
 ⠌⡈⠻⣦⡈⠻⣦⠠⠁⠀
 ⠠⠻⣦⡈⠻⣦⡈⠁⠄⠀
@@ -124,4 +124,4 @@
 ⠠⠻⣦⡈⠳⣄⠀⠀⠀⠀
 ⡁⠆⠈⡛⠆⠈⡓⢄⠀⠀
 ⠈⠀⠁⠀⠀⠁⠀⠀⠑⠄
Info

For the Hessian, only the lower-triangular part is returned.

Deport on CUDA GPU

Deporting all the operations on a CUDA GPU simply amounts to instantiating a FullSpaceEvaluator on the GPU, with

using CUDAKernels # assumes the CUDAKernels package is installed
-flp = Argos.FullSpaceEvaluator(datafile; device=CUDADevice())

+flp = Argos.FullSpaceEvaluator(datafile; device=CUDADevice())

Then, the API remains exactly the same as on the CPU.

When using device=CUDADevice(), the model is entirely instantiated on the device, without data left on the host (hence minimizing the communication costs). The computation of the derivatives is streamlined by propagating the tangents in parallel, leading to faster evaluations of the callbacks. As expected, the larger the model, the more significant the performance gain.
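
As an illustration, a minimal sketch of driving the callbacks on the device, reusing flp instantiated above (we assume here that initial returns an array allocated on the device memory):

x = Argos.initial(flp)         # assumption: x lives on the device
Argos.update!(flp, x)          # refresh the cache, entirely on the GPU
obj = Argos.objective(flp, x)
g = similar(x)
Argos.gradient!(flp, g, x)     # gradient stored inplace in the device array g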

diff --git a/dev/man/moi_wrapper/index.html b/dev/man/moi_wrapper/index.html index 122b756..1ec9159 100644 --- a/dev/man/moi_wrapper/index.html +++ b/dev/man/moi_wrapper/index.html @@ -72,6 +72,6 @@ Number of equality constraint Jacobian evaluations = 16 Number of inequality constraint Jacobian evaluations = 16 Number of Lagrangian Hessian evaluations = 15 -Total seconds in IPOPT = 4.933 +Total seconds in IPOPT = 5.031 -EXIT: Optimal Solution Found. +EXIT: Optimal Solution Found. diff --git a/dev/man/nlpmodel_wrapper/index.html b/dev/man/nlpmodel_wrapper/index.html index 23e4c8b..9be5d30 100644 --- a/dev/man/nlpmodel_wrapper/index.html +++ b/dev/man/nlpmodel_wrapper/index.html @@ -72,4 +72,4 @@ flp = Argos.FullSpaceEvaluator(datafile; device=CUDADevice())

The OPFModel structure works exclusively on the host memory, so we have to bridge the evaluator flp to the host before creating a new instance of OPFModel:

bridge = Argos.bridge(flp)
 model = Argos.OPFModel(bridge)
-
+
Note

Bridging an evaluator between the host and the device induces significant data movement, as every input and output has to be transferred back and forth between the host and the device. In practice, however, we have observed that the transfer time is negligible compared to the other operations (linear algebra, KKT system solution) performed inside the optimization algorithm.
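
A minimal sketch of querying the wrapped problem through the standard NLPModels.jl API (obj, grad, and cons are standard NLPModels functions; the meta field is the usual NLPModels metadata):

using NLPModels
x0 = model.meta.x0              # initial point stored in the model's metadata
fx = NLPModels.obj(model, x0)   # objective through the NLPModels interface
gx = NLPModels.grad(model, x0)  # gradient
cx = NLPModels.cons(model, x0)  # constraints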

diff --git a/dev/man/overview/index.html b/dev/man/overview/index.html index cefafa2..db8592d 100644 --- a/dev/man/overview/index.html +++ b/dev/man/overview/index.html @@ -27,58 +27,58 @@ Argos.update!(flp, x) # The values in the cache are modified accordingly [stack.vmag stack.vang]
9×2 Matrix{Float64}:
- 0.532358   0.0
- 0.128316   0.903884
- 0.0147266  0.652593
- 0.48141    0.893443
- 0.755269   0.0249339
- 0.619838   0.573397
- 0.424992   0.871584
- 0.214713   0.892209
- 0.810497   0.82632
+ 0.602148  0.0
+ 0.030687  0.0885064
+ 0.121596  0.204083
+ 0.184726  0.53172
+ 0.400911  0.963381
+ 0.409487  0.357984
+ 0.494683  0.225945
+ 0.504263  0.815795
+ 0.927797  0.691407
Note

Every time we have a new variable x, it is important to refresh the cache by calling explicitly Argos.update!(flp, x) before calling the other callbacks.

Callbacks

Now that the cache has been refreshed by calling update!, one can query the different callbacks to evaluate the objective, the constraints, and the derivatives:

Objective:

obj = Argos.objective(flp, x)
- 13609.861851540123
+ 2991.181912994495

Gradient:

g = zeros(n)
 Argos.gradient!(flp, g, x)
 g
19-element Vector{Float64}:
-     0.0
-     0.0
- 19875.68467804078
-     0.0
-     0.0
-     0.0
-     0.0
-     0.0
- 51332.45842284807
-     0.0
-     0.0
-     0.0
-     0.0
-     0.0
- 46419.79417530555
-     0.0
-     0.0
-  1556.6910075583112
-  1270.0635413394812

+    0.0
+    0.0
+ 2753.16059547987
+    0.0
+    0.0
+    0.0
+    0.0
+    0.0
+ 8766.988157741771
+    0.0
+    0.0
+    0.0
+    0.0
+    0.0
+ 2689.52613656893
+    0.0
+    0.0
+  660.6794931142296
+ 2444.100660827075

Constraints:

cons = zeros(m)
 Argos.constraint!(flp, cons, x)
 cons
36-element Vector{Float64}:
- -0.8399657498002915
- -0.46525338993797943
-  6.46693816077277
- -2.510388702717156
-  0.7277738782917896
-  1.7937674209739365
- -0.13536044530405056
-  1.8171047497340882
- -1.2394388416195632
-  5.487115512047321
+ -0.4826555986962292
+ -1.0870291479986496
+  0.08842012169075877
+  1.9622790175246525
+ -0.12725276904065547
+ -1.050354227472749
+  2.238701491430483
+  2.6345393844448353
+ -2.8962617737175878
+  1.2047816558541862
   ⋮
- 13.545859368333002
- 21.301206636027576
-  2.0255361905809988
- 40.97325403696394
-  1.1032347597100554
-  0.39244167792951595
-  0.031478719069422505
-  8.099064758488783
-  3.5584934839766302
+  2.1080043950029506
+  1.0844248781368802
+  0.30479098870253846
+  4.104529321719893
+  0.23328135289615815
+  4.067176255928233
+  0.055954312020624494
+  5.336644360829255
+  2.601744934255414
Note

All the callbacks are written to modify the data (constraints, gradient) inplace, to avoid unneeded allocations. In addition, Argos.jl provides versions that automatically allocate the return values:

g = Argos.gradient(flp, x)
 c = Argos.constraint(flp, x)

Finally, one can reset the evaluator to its original state using reset!:

Argos.reset!(flp)
 [stack.vmag stack.vang]
9×2 Matrix{Float64}:
  1.0  0.0
@@ -89,4 +89,4 @@
  1.0  0.0
  1.0  0.0
  1.0  0.0
- 1.0  0.0
+ 1.0 0.0 diff --git a/dev/man/reducedspace/index.html b/dev/man/reducedspace/index.html index 718370b..9add43b 100644 --- a/dev/man/reducedspace/index.html +++ b/dev/man/reducedspace/index.html @@ -92,9 +92,9 @@ #it 4: 1.07127e-11 Power flow has converged: true * #iterations: 4 - * Time Jacobian (s) ........: 0.0002 + * Time Jacobian (s) ........: 0.0003 * Time linear solver (s) ...: 0.0001 - * Time total (s) ...........: 0.6524

+  * Time total (s) ...........: 0.6501

with a slightly different solution (as we have loosened the tolerance):

stack = red.stack
 [stack.vmag stack.vang]
9×2 Matrix{Float64}:
  1.1       0.0
  1.1       0.0478953
@@ -141,4 +141,4 @@
  -1573.41    -760.654    2476.81     -21.0085   -94.5838
    100.337    -60.9243    -21.0085  3922.1     2181.62
    105.971    -11.7018    -94.5838  2181.62    4668.9

As we will explain later, the computation of the reduced Jacobian and reduced Hessian can be streamlined on the GPU.

Deport on CUDA GPU

Instantiating a ReducedSpaceEvaluator on an NVIDIA GPU translates to:

using CUDAKernels # assumes the CUDAKernels package is installed
-red = Argos.ReducedSpaceEvaluator(datafile; device=CUDADevice(), nbatch_hessian=256)

+red = Argos.ReducedSpaceEvaluator(datafile; device=CUDADevice(), nbatch_hessian=256)

The number of batches nbatch_hessian is the number of right-hand sides used to streamline the solution of the linear systems.

diff --git a/dev/optim/biegler/index.html b/dev/optim/biegler/index.html index d87aa6e..479f5ec 100644 --- a/dev/optim/biegler/index.html +++ b/dev/optim/biegler/index.html @@ -91,10 +91,10 @@ Number of constraint evaluations = 17 Number of constraint Jacobian evaluations = 15 Number of Lagrangian Hessian evaluations = 14 -Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 14.238 +Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 14.811 Total wall-clock secs in linear solver = 0.028 -Total wall-clock secs in NLP function evaluations = 0.003 -Total wall-clock secs = 14.269 +Total wall-clock secs in NLP function evaluations = 0.004 +Total wall-clock secs = 14.843 EXIT: Optimal Solution Found. -"Execution stats: Optimal Solution Found."
+"Execution stats: Optimal Solution Found."
Info

Note that we get the exact same convergence as in the full-space.

diff --git a/dev/optim/fullspace/index.html b/dev/optim/fullspace/index.html index 8ed41f8..28a78d7 100644 --- a/dev/optim/fullspace/index.html +++ b/dev/optim/fullspace/index.html @@ -64,49 +64,49 @@ 11 5.2966860e+03 7.29e-06 2.10e-04 -5.7 2.78e-03 - 1.00e+00 1.00e+00h 1 12 5.2966867e+03 2.58e-07 7.50e-06 -5.7 5.23e-04 - 1.00e+00 1.00e+00h 1 13 5.2966862e+03 1.20e-08 5.67e-07 -8.6 1.14e-04 - 1.00e+00 1.00e+00h 1 - 14 5.2966862e+03 1.18e-12 3.33e-11 -8.6 1.12e-06 - 1.00e+00 1.00e+00h 1 + 14 5.2966862e+03 1.18e-12 3.35e-11 -8.6 1.12e-06 - 1.00e+00 1.00e+00h 1 Number of Iterations....: 14 (scaled) (unscaled) -Objective...............: 6.1017825057066908e+01 5.2966862028703908e+03 -Dual infeasibility......: 3.3310243452433497e-11 2.8915141885792965e-09 -Constraint violation....: 1.1763923168928159e-12 1.1763923168928159e-12 -Complementarity.........: 2.8885453188278481e-11 2.5074178114825070e-09 -Overall NLP error.......: 2.5074178114825070e-09 2.5074178114825070e-09 +Objective...............: 6.1017825057066936e+01 5.2966862028703936e+03 +Dual infeasibility......: 3.3537617127876729e-11 2.9112514867948547e-09 +Constraint violation....: 1.1766143614977409e-12 1.1766143614977409e-12 +Complementarity.........: 2.8885453188270749e-11 2.5074178114818357e-09 +Overall NLP error.......: 2.5074178114818357e-09 2.5074178114818357e-09 Number of objective function evaluations = 16 Number of objective gradient evaluations = 15 Number of constraint evaluations = 17 Number of constraint Jacobian evaluations = 15 Number of Lagrangian Hessian evaluations = 14 -Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 3.273 -Total wall-clock secs in linear solver = 0.403 +Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 3.197 +Total wall-clock secs in linear solver = 0.474 Total wall-clock secs in NLP function evaluations = 0.003 -Total wall-clock secs = 3.679 +Total wall-clock secs = 3.674 EXIT: Optimal Solution Found. -"Execution stats: Optimal Solution Found."

+"Execution stats: Optimal Solution Found."

Querying the solution

MadNLP returns a MadNLPExecutionStats object storing the solution. One can query the optimal objective as:

stats.objective
- 5296.686202870391
+ 5296.686202870394

and the optimal solution:

stats.solution
41-element Vector{Float64}:
-  0.08541019351901051
-  0.05671519595851566
- -0.042986135316815235
- -0.06949870510848587
-  0.010522396312977493
- -0.020878890700930623
-  0.015806276659318587
- -0.08055111183511199
-  1.0942215071535502
+  0.0854101935190104
+  0.05671519595851568
+ -0.04298613531681527
+ -0.06949870510848588
+  0.010522396312977496
+ -0.020878890700930703
+  0.01580627665931845
+ -0.0805511118351121
+  1.09422150715355
   1.0844484919148973
   ⋮
-  0.8145655559162711
-  0.14205697247988713
+  0.8145655559162726
+  0.1420569724798875
   0.36249665909346485
-  0.9616074529857054
-  0.1798340559090117
-  0.3870702737199754
-  1.8042024795633287
-  0.5359062432430304
-  0.31460681774989735

+  0.9616074529857063
+  0.1798340559090125
+  0.3870702737199745
+  1.8042024795633291
+  0.5359062432430292
+  0.3146068177498973

Also, remember that each time the callback update! is called, the values are updated internally in the stack stored inside flp. Hence, an alternative way to query the solution is to look directly at the values in the stack. For instance, one can query the optimal values of the voltage

stack = flp.stack
 [stack.vmag stack.vang]
9×2 Matrix{Float64}:
  1.1       0.0
  1.09735   0.0854102
@@ -117,9 +117,9 @@
  1.08949  -0.0208789
  1.1       0.0158063
  1.07176  -0.0805511

and of the power generation:

stack.pgen
3-element Vector{Float64}:
- 0.8979870769892051
- 1.3432060073263448
- 0.9418738041880936
+ 0.8979870769892057
+ 1.343206007326345
+ 0.9418738041880941
Info

The values inside stack are used to compute the initial point in the optimization routine. Hence, if one calls solve! again, the optimization starts from the optimal solution found in the previous call to solve!, leading to a different convergence pattern. To launch a new optimization from scratch without reinitializing all the data structures, we recommend using the reset! function:

Argos.reset!(flp)

Playing with different parameters

MadNLP has different options we may want to tune when solving the OPF. For instance, we can loosen the tolerance to 1e-5 and set the maximum number of iterations to 5 with:

julia> solver = MadNLP.MadNLPSolver(model; tol=1e-5, max_iter=5)Interior point solver
 
 number of variables......................: 19
 number of constraints....................: 36
@@ -165,9 +165,9 @@
 Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  0.008
 Total wall-clock secs in linear solver                      =  0.001
 Total wall-clock secs in NLP function evaluations           =  0.001
-Total wall-clock secs                                       =  0.010
+Total wall-clock secs                                       =  0.011
 
 EXIT: Maximum Number of Iterations Exceeded.
 "Execution stats: Maximum Number of Iterations Exceeded."

Most importantly, one may want to use a different sparse linear solver than UMFPACK, employed by default in MadNLP. We recommend using HSL solvers (the installation procedure is detailed here). Once HSL is installed, one can solve the OPF with:

using MadNLPHSL
 solver = MadNLP.MadNLPSolver(model; linear_solver=Ma27Solver)
-MadNLP.solve!(solver)
+MadNLP.solve!(solver) diff --git a/dev/optim/reducedspace/index.html b/dev/optim/reducedspace/index.html index d0ee452..6847ece 100644 --- a/dev/optim/reducedspace/index.html +++ b/dev/optim/reducedspace/index.html @@ -141,10 +141,10 @@ Number of constraint evaluations = 32 Number of constraint Jacobian evaluations = 23 Number of Lagrangian Hessian evaluations = 22 -Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 7.564 +Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 7.883 Total wall-clock secs in linear solver = 0.009 -Total wall-clock secs in NLP function evaluations = 0.363 -Total wall-clock secs = 7.935 +Total wall-clock secs in NLP function evaluations = 0.374 +Total wall-clock secs = 8.266 EXIT: Optimal Solution Found. "Execution stats: Optimal Solution Found."
Info

We recommend changing the default tolerance to be above the tolerance of the Newton-Raphson used inside ReducedSpaceEvaluator. Indeed, the power flow is solved only approximately, leading to slightly inaccurate evaluations and derivatives, impacting the convergence of the interior-point algorithm. In general, we recommend setting tol=1e-5.

Info

Here, we are using Lapack on the CPU to solve the condensed KKT system at each iteration of the interior-point algorithm. However, if an NVIDIA GPU is available, we recommend using a CUDA-accelerated Lapack version, which is more efficient than the default Lapack. If MadNLPGPU is installed, this amounts to

using MadNLPGPU
@@ -184,4 +184,4 @@
  1.1       0.0105224
  1.08949  -0.0208788
  1.1       0.0158063
- 1.07176  -0.0805509
+ 1.07176 -0.0805509 diff --git a/dev/quickstart/cpu/index.html b/dev/quickstart/cpu/index.html index 6a19b6e..7c06f10 100644 --- a/dev/quickstart/cpu/index.html +++ b/dev/quickstart/cpu/index.html @@ -52,10 +52,10 @@ Number of constraint evaluations = 21 Number of constraint Jacobian evaluations = 20 Number of Lagrangian Hessian evaluations = 19 -Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 2.696 +Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 2.805 Total wall-clock secs in linear solver = 0.056 -Total wall-clock secs in NLP function evaluations = 4.233 -Total wall-clock secs = 6.985 +Total wall-clock secs in NLP function evaluations = 4.289 +Total wall-clock secs = 7.149 EXIT: Optimal Solution Found.

Biegler's method (linearize-then-reduce)

Tip
julia> Argos.run_opf(datafile, Argos.BieglerReduction(); lapack_algorithm=MadNLP.CHOLESKY);This is MadNLP version v0.5.2, running with Lapack-CPU (CHOLESKY)
 
@@ -109,10 +109,10 @@
 Number of constraint evaluations                     = 21
 Number of constraint Jacobian evaluations            = 20
 Number of Lagrangian Hessian evaluations             = 19
-Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  3.785
+Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  3.922
 Total wall-clock secs in linear solver                      =  0.018
-Total wall-clock secs in NLP function evaluations           =  0.023
-Total wall-clock secs                                       =  3.826
+Total wall-clock secs in NLP function evaluations           =  0.021
+Total wall-clock secs                                       =  3.960
 
 EXIT: Optimal Solution Found.

Dommel & Tinney's method (reduce-then-linearize)

Tip
julia> Argos.run_opf(datafile, Argos.DommelTinney(); tol=1e-5);This is MadNLP version v0.5.2, running with Lapack-CPU (CHOLESKY)
 
@@ -164,9 +164,9 @@
 Number of constraint evaluations                     = 19
 Number of constraint Jacobian evaluations            = 18
 Number of Lagrangian Hessian evaluations             = 17
-Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  5.055
+Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  5.301
 Total wall-clock secs in linear solver                      =  0.009
-Total wall-clock secs in NLP function evaluations           =  1.274
-Total wall-clock secs                                       =  6.339
+Total wall-clock secs in NLP function evaluations           =  1.306
+Total wall-clock secs                                       =  6.616
 
-EXIT: Optimal Solution Found.
+EXIT: Optimal Solution Found. diff --git a/dev/quickstart/cuda/index.html b/dev/quickstart/cuda/index.html index 56ff501..cc32958 100644 --- a/dev/quickstart/cuda/index.html +++ b/dev/quickstart/cuda/index.html @@ -6,4 +6,4 @@

Full-space method

ArgosCUDA.run_opf_gpu(datafile, Argos.FullSpace())
 

Biegler's method (linearize-then-reduce)

ArgosCUDA.run_opf_gpu(datafile, Argos.BieglerReduction(); linear_solver=LapackGPUSolver)
 

Dommel & Tinney's method (reduce-then-linearize)

ArgosCUDA.run_opf_gpu(datafile, Argos.DommelTinney(); linear_solver=LapackGPUSolver)
-
+ diff --git a/dev/references/index.html b/dev/references/index.html index 5133a19..516c15b 100644 --- a/dev/references/index.html +++ b/dev/references/index.html @@ -1,2 +1,2 @@ -References · Argos.jl

+References · Argos.jl

References

Argos has led to several publications in peer-reviewed journals.

PP2022 details how Argos evaluates the second-order reduced derivatives in parallel on the GPU. All results were generated with this artifact.

PSCC2022 uses the augmented Lagrangian algorithm implemented in Argos to solve static and real-time OPF. All results were generated with this artifact.

ARXIV2022 demonstrates the full capabilities of Argos to solve large-scale OPF on CUDA GPU, both in the full-space and in the reduced-space.
