diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index ed4138d..63bbaab 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2024-09-22T02:05:14","documenter_version":"1.7.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2024-09-29T02:06:22","documenter_version":"1.7.0"}} \ No newline at end of file diff --git a/dev/index.html b/dev/index.html index b133e1a..e6428e9 100644 --- a/dev/index.html +++ b/dev/index.html @@ -5,4 +5,4 @@ author={Pacaud, Fran{\c{c}}ois and Shin, Sungho and Schanen, Michel and Maldonado, Daniel Adrian and Anitescu, Mihai}, journal={arXiv preprint arXiv:2203.11875}, year={2022} -}

Funding

This research was supported by the Exascale Computing Project (17-SC-20-SC), a joint project of the U.S. Department of Energy’s Office of Science and National Nuclear Security Administration, responsible for delivering a capable exascale ecosystem, including software, applications, and hardware technology, to support the nation’s exascale computing imperative.

Table of contents

Quickstart

OPF Model

OPF Solution

Wrappers

Library

+}

Funding

This research was supported by the Exascale Computing Project (17-SC-20-SC), a joint project of the U.S. Department of Energy’s Office of Science and National Nuclear Security Administration, responsible for delivering a capable exascale ecosystem, including software, applications, and hardware technology, to support the nation’s exascale computing imperative.

Table of contents

Quickstart

OPF Model

OPF Solution

Wrappers

Library

diff --git a/dev/lib/api/index.html b/dev/lib/api/index.html index 9d712f5..6b18b2e 100644 --- a/dev/lib/api/index.html +++ b/dev/lib/api/index.html @@ -2,4 +2,4 @@ Evaluators API · Argos.jl

Evaluator API

Description

Argos.AbstractNLPEvaluatorType
AbstractNLPEvaluator

AbstractNLPEvaluator implements the bridge between the problem formulation (see ExaPF.AbstractFormulation) and the optimization solver. Once the problem formulation is bridged, the evaluator can evaluate (a usage sketch follows the list):

  • the objective;
  • the gradient of the objective;
  • the constraints;
  • the Jacobian of the constraints;
  • the Jacobian-vector and transpose-Jacobian vector products of the constraints;
  • the Hessian of the objective;
  • the Hessian of the Lagrangian.
source
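
A typical callback sequence looks as follows (a minimal sketch, assuming a MATPOWER file case9.m is available — the file name is hypothetical; the evaluator constructors are documented in the Library section):

nlp = Argos.FullSpaceEvaluator("case9.m")   # hypothetical input file
n, m = Argos.n_variables(nlp), Argos.n_constraints(nlp)
x = Argos.initial(nlp)                      # default initial point
Argos.update!(nlp, x)                       # refresh the internal cache first
obj = Argos.objective(nlp, x)               # objective value
g = zeros(n); Argos.gradient!(nlp, g, x)    # gradient, stored inplace
c = zeros(m); Argos.constraint!(nlp, c, x)  # constraints, stored inplace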

API Reference

Optimization

Argos.optimize!Function
optimize!(optimizer, nlp::AbstractNLPEvaluator, x0)

Use the optimization routine implemented in optimizer to solve the optimal power flow problem specified in the evaluator nlp. The initial point is specified by x0.

Return the solution as a named tuple, with fields

  • status::MOI.TerminationStatus: Solver's termination status, as specified by MOI
  • minimum::Float64: final objective
  • minimizer::AbstractVector: final solution vector, with same ordering as the Variables specified in nlp.
optimize!(optimizer, nlp::AbstractNLPEvaluator)

Wrap the previous optimize! function, passing as initial guess x0 the initial value returned by initial(nlp).

Examples

nlp = ExaPF.ReducedSpaceEvaluator(datafile)
 optimizer = Ipopt.Optimizer()
 solution = ExaPF.optimize!(optimizer, nlp)
-

Notes

By default, the optimization routine solves a minimization problem.

source

Attributes

Argos.n_variablesFunction
n_variables(nlp::AbstractNLPEvaluator)

Get the number of variables in the problem.

source
Argos.constraints_typeFunction
constraints_type(nlp::AbstractNLPEvaluator)

Return the type of the non-linear constraints of the evaluator nlp, as a Symbol. The result is :inequality if the problem has only inequality constraints, :equality if it has only equality constraints, or :mixed if it has both types of constraints.

source

Callbacks

Argos.update!Function
update!(nlp::AbstractNLPEvaluator, u::AbstractVector)

Update the internal structure inside nlp with the new entry u. This method must be called before any other callback.

source
Argos.objectiveFunction
objective(nlp::AbstractNLPEvaluator, u)::Float64

Evaluate the objective at the given variable u.

source
Argos.gradient!Function
gradient!(nlp::AbstractNLPEvaluator, g, u)

Evaluate the gradient of the objective at the given variable u. Store the result inplace in the vector g.

Note

The vector g should have the same dimension as u.

source
Argos.constraint!Function
constraint!(nlp::AbstractNLPEvaluator, cons, u)

Evaluate the constraints of the problem at the given variable u. Store the result inplace, in the vector cons.

Note

The vector cons should have the same dimension as the result returned by n_constraints(nlp).

source
Argos.jacobian!Function
jacobian!(nlp::AbstractNLPEvaluator, jac::AbstractMatrix, u)

Evaluate the Jacobian of the constraints at the variable u. Store the result inplace, in the m x n dense matrix jac.

source
Argos.jacobian_coo!Function
jacobian_coo!(nlp::AbstractNLPEvaluator, jac::AbstractVector, u)

Evaluate the (sparse) Jacobian of the constraints at variable u in COO format. Store the result inplace, in the nnzj vector jac.

source
Argos.jprod!Function
jprod!(nlp::AbstractNLPEvaluator, jv, u, v)

Evaluate the Jacobian-vector product $J v$ of the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • u is a vector with dimension n
  • v is a vector with dimension n
  • jv is a vector with dimension m
source
Argos.jtprod!Function
jtprod!(nlp::AbstractNLPEvaluator, jv, u, v)

Evaluate the transpose Jacobian-vector product $J^{T} v$ of the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • u is a vector with dimension n
  • v is a vector with dimension m
  • jv is a vector with dimension n
source
Argos.ojtprod!Function
ojtprod!(nlp::AbstractNLPEvaluator, jv, u, σ, v)

Evaluate the transpose Jacobian-vector product J' * [σ ; v], where J is the Jacobian of the vector [f(x); h(x)], with f(x) the objective and h(x) the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • jv is a vector with dimension n
  • u is a vector with dimension n
  • σ is a scalar
  • v is a vector with dimension m
source
Argos.hessian!Function
hessian!(nlp::AbstractNLPEvaluator, H, u)

Evaluate the Hessian ∇²f(u) of the objective function f(u). Store the result inplace, in the n x n dense matrix H.

source
Argos.hessian_coo!Function
hessian_coo!(nlp::AbstractNLPEvaluator, hess::AbstractVector, u)

Evaluate the (sparse) Hessian of the constraints at variable u in COO format. Store the result inplace, in the nnzh vector hess.

source
Argos.hessprod!Function
hessprod!(nlp::AbstractNLPEvaluator, hessvec, u, v)

Evaluate the Hessian-vector product ∇²f(u) * v of the objective evaluated at variable u. Store the result inplace, in the vector hessvec.

Note

The vector hessvec should have the same length as u.

source
Argos.hessian_lagrangian_prod!Function
hessian_lagrangian_prod!(nlp::AbstractNLPEvaluator, hessvec, u, y, σ, v)

Evaluate the Hessian-vector product of the Lagrangian function $L(u, y) = σ f(u) + \sum_i y_i c_i(u)$ with a vector v:

\[∇²L(u, y) ⋅ v = σ ∇²f(u) ⋅ v + \sum_i y_i ∇²c_i(u) ⋅ v\]

Store the result inplace, in the vector hessvec.

Arguments

  • hessvec is an AbstractVector with dimension n, which is modified inplace.
  • u is an AbstractVector with dimension n, storing the current variable.
  • y is an AbstractVector with dimension m, storing the current constraints' multipliers.
  • σ is a scalar, encoding the objective's scaling
  • v is a vector with dimension n.
source
Argos.hessian_lagrangian_penalty_prod!Function
hessian_lagrangian_penalty_prod!(nlp::AbstractNLPEvaluator, hessvec, u, y, σ, d, v)

Evaluate the Hessian-vector product of the Augmented Lagrangian function $L(u, y) = σ f(u) + \sum_i \left( y_i c_i(u) + \frac{d_i}{2} c_i(u)^2 \right)$ with a vector v:

\[∇²L(u, y) ⋅ v = σ ∇²f(u) ⋅ v + \sum_i (y_i + d_i c_i(u)) ∇²c_i(u) ⋅ v + \sum_i d_i (∇c_i(u)^T v) ∇c_i(u)\]

Store the result inplace, in the vector hessvec.

Arguments

  • hessvec is an AbstractVector with dimension n, which is modified inplace.
  • u is an AbstractVector with dimension n, storing the current variable.
  • y is an AbstractVector with dimension m, storing the current constraints' multipliers.
  • σ is a scalar
  • v is a vector with dimension n.
  • d is a vector with dimension m.
source

Utilities

Argos.reset!Function
reset!(nlp::AbstractNLPEvaluator)

Reset the evaluator nlp to its default configuration.

source
+

Notes

By default, the optimization routine solves a minimization problem.

source

Attributes

Argos.VariablesType
Variables <: AbstractNLPAttribute end

Attribute corresponding to the optimization variables attached to a given AbstractNLPEvaluator.

source
Argos.ConstraintsType
Constraints <: AbstractNLPAttribute end

Attribute corresponding to the constraints attached to a given AbstractNLPEvaluator.

source
Argos.n_variablesFunction
n_variables(nlp::AbstractNLPEvaluator)

Get the number of variables in the problem.

source
Argos.n_constraintsFunction
n_constraints(nlp::AbstractNLPEvaluator)

Get the number of constraints in the problem.

source
Argos.constraints_typeFunction
constraints_type(nlp::AbstractNLPEvaluator)

Return the type of the non-linear constraints of the evaluator nlp, as a Symbol. The result is :inequality if the problem has only inequality constraints, :equality if it has only equality constraints, or :mixed if it has both types of constraints.

source
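
For instance, a solver wrapper can branch on the returned Symbol to set up the constraint bounds (a short sketch; the bound notation is illustrative):

ct = Argos.constraints_type(nlp)
if ct == :equality
    # all constraints are handled as c(x) = 0
elseif ct == :inequality
    # all constraints are handled as c♭ ≤ c(x) ≤ c♯
else  # :mixed
    # the problem mixes equality and inequality constraints
end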

Callbacks

Argos.update!Function
update!(nlp::AbstractNLPEvaluator, u::AbstractVector)

Update the internal structure inside nlp with the new entry u. This method must be called before any other callback.

source
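
The expected calling sequence can be sketched as follows (any other callback can take the place of objective):

Argos.update!(nlp, u)     # refresh the internal cache for the point u
Argos.objective(nlp, u)   # safe: the cache matches u
# if u is modified, update! must be called again before any other callback
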
Argos.objectiveFunction
objective(nlp::AbstractNLPEvaluator, u)::Float64

Evaluate the objective at the given variable u.

source
Argos.gradient!Function
gradient!(nlp::AbstractNLPEvaluator, g, u)

Evaluate the gradient of the objective at the given variable u. Store the result inplace in the vector g.

Note

The vector g should have the same dimension as u.

source
Argos.constraint!Function
constraint!(nlp::AbstractNLPEvaluator, cons, u)

Evaluate the constraints of the problem at the given variable u. Store the result inplace, in the vector cons.

Note

The vector cons should have the same dimension as the result returned by n_constraints(nlp).

source
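
For example, cons can be allocated with the proper dimension as follows:

m = Argos.n_constraints(nlp)
cons = zeros(m)
Argos.constraint!(nlp, cons, u)
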
Argos.jacobian!Function
jacobian!(nlp::AbstractNLPEvaluator, jac::AbstractMatrix, u)

Evaluate the Jacobian of the constraints at the variable u. Store the result inplace, in the m x n dense matrix jac.

source
Argos.jacobian_coo!Function
jacobian_coo!(nlp::AbstractNLPEvaluator, jac::AbstractVector, u)

Evaluate the (sparse) Jacobian of the constraints at variable u in COO format. Store the result inplace, in the nnzj vector jac.

source
Argos.jprod!Function
jprod!(nlp::AbstractNLPEvaluator, jv, u, v)

Evaluate the Jacobian-vector product $J v$ of the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • u is a vector with dimension n
  • v is a vector with dimension n
  • jv is a vector with dimension m
source
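
As a sanity check, the product can be compared against the dense Jacobian (a sketch, assuming update! has been called at u; affordable only on small problems):

n, m = Argos.n_variables(nlp), Argos.n_constraints(nlp)
J = zeros(m, n); Argos.jacobian!(nlp, J, u)   # dense Jacobian
v = rand(n)
jv = zeros(m);   Argos.jprod!(nlp, jv, u, v)
@assert jv ≈ J * v
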
Argos.jtprod!Function
jtprod!(nlp::AbstractNLPEvaluator, jv, u, v)

Evaluate the transpose Jacobian-vector product $J^{T} v$ of the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • u is a vector with dimension n
  • v is a vector with dimension m
  • jv is a vector with dimension n
source
Argos.ojtprod!Function
ojtprod!(nlp::AbstractNLPEvaluator, jv, u, σ, v)

Evaluate the transpose Jacobian-vector product J' * [σ ; v], where J is the Jacobian of the vector [f(x); h(x)], with f(x) the objective and h(x) the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • jv is a vector with dimension n
  • u is a vector with dimension n
  • σ is a scalar
  • v is a vector with dimension m
source
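
Equivalently, ojtprod! fuses the gradient of the objective and the transpose Jacobian-vector product of the constraints; the identity it implements can be sketched as:

n, m = Argos.n_variables(nlp), Argos.n_constraints(nlp)
σ, v = 1.0, rand(m)
g = zeros(n);   Argos.gradient!(nlp, g, u)     # gradient of the objective
jtv = zeros(n); Argos.jtprod!(nlp, jtv, u, v)  # transpose Jacobian-vector product
jv = zeros(n);  Argos.ojtprod!(nlp, jv, u, σ, v)
@assert jv ≈ σ .* g .+ jtv
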
Argos.hessian!Function
hessian!(nlp::AbstractNLPEvaluator, H, u)

Evaluate the Hessian ∇²f(u) of the objective function f(u). Store the result inplace, in the n x n dense matrix H.

source
Argos.hessian_coo!Function
hessian_coo!(nlp::AbstractNLPEvaluator, hess::AbstractVector, u)

Evaluate the (sparse) Hessian of the constraints at variable u in COO format. Store the result inplace, in the nnzh vector hess.

source
Argos.hessprod!Function
hessprod!(nlp::AbstractNLPEvaluator, hessvec, u, v)

Evaluate the Hessian-vector product ∇²f(u) * v of the objective evaluated at variable u. Store the result inplace, in the vector hessvec.

Note

The vector hessvec should have the same length as u.

source
Argos.hessian_lagrangian_prod!Function
hessian_lagrangian_prod!(nlp::AbstractNLPEvaluator, hessvec, u, y, σ, v)

Evaluate the Hessian-vector product of the Lagrangian function $L(u, y) = σ f(u) + \sum_i y_i c_i(u)$ with a vector v:

\[∇²L(u, y) ⋅ v = σ ∇²f(u) ⋅ v + \sum_i y_i ∇²c_i(u) ⋅ v\]

Store the result inplace, in the vector hessvec.

Arguments

  • hessvec is an AbstractVector with dimension n, which is modified inplace.
  • u is an AbstractVector with dimension n, storing the current variable.
  • y is an AbstractVector with dimension m, storing the current constraints' multipliers.
  • σ is a scalar, encoding the objective's scaling
  • v is a vector with dimension n.
source
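
For instance, with σ = 1 and zero multipliers, the product reduces to the objective Hessian-vector product computed by hessprod! (a sketch, assuming update! has been called at u):

n = Argos.n_variables(nlp)
v = rand(n)
y = zeros(Argos.n_constraints(nlp))   # zero multipliers
hv  = zeros(n); Argos.hessian_lagrangian_prod!(nlp, hv, u, y, 1.0, v)
hv2 = zeros(n); Argos.hessprod!(nlp, hv2, u, v)
@assert hv ≈ hv2
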
Argos.hessian_lagrangian_penalty_prod!Function
hessian_lagrangian_penalty_prod!(nlp::AbstractNLPEvaluator, hessvec, u, y, σ, d, v)

Evaluate the Hessian-vector product of the Augmented Lagrangian function $L(u, y) = σ f(u) + \sum_i \left( y_i c_i(u) + \frac{d_i}{2} c_i(u)^2 \right)$ with a vector v:

\[∇²L(u, y) ⋅ v = σ ∇²f(u) ⋅ v + \sum_i (y_i + d_i c_i(u)) ∇²c_i(u) ⋅ v + \sum_i d_i (∇c_i(u)^T v) ∇c_i(u)\]

Store the result inplace, in the vector hessvec.

Arguments

  • hessvec is an AbstractVector with dimension n, which is modified inplace.
  • u is an AbstractVector with dimension n, storing the current variable.
  • y is an AbstractVector with dimension m, storing the current constraints' multipliers.
  • σ is a scalar
  • v is a vector with dimension n.
  • d is a vector with dimension m.
source

Utilities

Argos.reset!Function
reset!(nlp::AbstractNLPEvaluator)

Reset the evaluator nlp to its default configuration.

source
diff --git a/dev/lib/evaluators/index.html b/dev/lib/evaluators/index.html index bb8fe22..f7f0c91 100644 --- a/dev/lib/evaluators/index.html +++ b/dev/lib/evaluators/index.html @@ -90,4 +90,4 @@ julia> @assert isa(x, Array) # x is defined on the host memory julia> Argos.objective(bdg, x) # evaluate the objective on the device -source +source diff --git a/dev/lib/kkt/index.html b/dev/lib/kkt/index.html index be0887a..f7dfda2 100644 --- a/dev/lib/kkt/index.html +++ b/dev/lib/kkt/index.html @@ -10,4 +10,4 @@ julia> kkt = Argos.BieglerKKTSystem{T, VI, VT, MT}(opf) julia> MadNLP.get_kkt(kkt) # return the matrix to factorize -

Notes

BieglerKKTSystem can be instantiated either on host memory (CPU) or on an NVIDIA GPU using CUDA. When instantiated on the GPU, BieglerKKTSystem uses cusolverRF to streamline the solution of the sparse linear systems in the reduction algorithm.

References

[BNS2015] Biegler, Lorenz T., Jorge Nocedal, and Claudia Schmid. "A reduced Hessian method for large-scale constrained optimization." SIAM Journal on Optimization 5, no. 2 (1995): 314-347.

[PSSMA2022] Pacaud, François, Sungho Shin, Michel Schanen, Daniel Adrian Maldonado, and Mihai Anitescu. "Condensed interior-point methods: porting reduced-space approaches on GPU hardware." arXiv preprint arXiv:2203.11875 (2022).

source +

Notes

BieglerKKTSystem can be instantiated either on host memory (CPU) or on an NVIDIA GPU using CUDA. When instantiated on the GPU, BieglerKKTSystem uses cusolverRF to streamline the solution of the sparse linear systems in the reduction algorithm.
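
As an illustration, instantiating BieglerKKTSystem on the GPU amounts to passing CUDA array types as parameters (a sketch mirroring the CPU example above; the exact CUDA type parameterization is an assumption, not the documented API):

using CUDA
T = Float64
VI, VT, MT = CuVector{Int}, CuVector{T}, CuMatrix{T}  # device arrays (assumed parameterization)
kkt = Argos.BieglerKKTSystem{T, VI, VT, MT}(opf)      # opf as in the example above
MadNLP.get_kkt(kkt)                                   # return the matrix to factorize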

References

[BNS2015] Biegler, Lorenz T., Jorge Nocedal, and Claudia Schmid. "A reduced Hessian method for large-scale constrained optimization." SIAM Journal on Optimization 5, no. 2 (1995): 314-347.

[PSSMA2022] Pacaud, François, Sungho Shin, Michel Schanen, Daniel Adrian Maldonado, and Mihai Anitescu. "Condensed interior-point methods: porting reduced-space approaches on GPU hardware." arXiv preprint arXiv:2203.11875 (2022).

source diff --git a/dev/lib/wrappers/index.html b/dev/lib/wrappers/index.html index 72e0b57..de148f4 100644 --- a/dev/lib/wrappers/index.html +++ b/dev/lib/wrappers/index.html @@ -9,4 +9,4 @@ julia> nlp = Argos.ReducedSpaceEvaluator(datafile); julia> ev = Argos.MOIEvaluator(nlp) -

Attributes

source +

Attributes

source diff --git a/dev/man/fullspace/index.html b/dev/man/fullspace/index.html index 09d6e59..6aae85a 100644 --- a/dev/man/fullspace/index.html +++ b/dev/man/fullspace/index.html @@ -83,7 +83,7 @@ #lines: 9 giving a mathematical formulation with: #controls: 5 - #states : 14, ExaPF.ComposedExpressions{ExaPF.PolarBasis{Vector{Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}, ExaPF.MultiExpressions}(PolarBasis (AbstractExpression), ExaPF.MultiExpressions(ExaPF.AutoDiff.AbstractExpression[CostFunction (AbstractExpression), PowerFlowBalance (AbstractExpression), PowerGenerationBounds (AbstractExpression), LineFlows (AbstractExpression)])), …) [lengthy printout of internal ForwardDiff.Dual buffers truncated]

generating the matrix

flp.hess.H
19×19 SparseArrays.SparseMatrixCSC{Float64, Int64} with 103 stored entries:
+    #states  :   14, ExaPF.ComposedExpressions{ExaPF.PolarBasis{Vector{Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}, ExaPF.MultiExpressions}(PolarBasis (AbstractExpression), ExaPF.MultiExpressions(ExaPF.AutoDiff.AbstractExpression[CostFunction (AbstractExpression), PowerFlowBalance (AbstractExpression), PowerGenerationBounds (AbstractExpression), LineFlows (AbstractExpression)])), …) [lengthy printout of internal ForwardDiff.Dual buffers truncated]

generating the matrix

flp.hess.H
19×19 SparseArrays.SparseMatrixCSC{Float64, Int64} with 103 stored entries:
 ⎡⠑⣤⡂⠡⣤⡂⠡⠌⠂⠀⎤
 ⎢⠌⡈⠻⣦⡈⠻⣦⠠⠁⠀⎥
 ⎢⠠⠻⣦⡈⠻⣦⡈⠁⠄⠀⎥
@@ -124,4 +124,4 @@
 ⎢⠠⠻⣦⡈⠳⣄⠀⠀⠀⠀⎥
 ⎢⡁⠆⠈⡛⠆⠈⡓⢄⠀⠀⎥
 ⎣⠈⠀⠁⠀⠀⠁⠀⠀⠑⠄⎦
Info

For the Hessian, only the lower-triangular part is returned.

Deport on CUDA GPU

Deporting all the operations to a CUDA GPU simply amounts to instantiating a FullSpaceEvaluator on the GPU, with

using CUDAKernels # assuming CUDAKernels is installed
-flp = Argos.FullSpaceEvaluator(datafile; device=CUDADevice())

Then, the API remains exactly the same as on the CPU.

When using device=CUDADevice(), the model is entirely instantiated on the device, without data left on the host (hence minimizing the communication costs). The computation of the derivatives is streamlined by propagating the tangents in parallel, leading to faster evaluations of the callbacks. As expected, the larger the model, the more significant the performance gain.

+flp = Argos.FullSpaceEvaluator(datafile; device=CUDADevice())

Then, the API remains exactly the same as on the CPU.

When using device=CUDADevice(), the model is entirely instantiated on the device, without data left on the host (hence minimizing the communication costs). The computation of the derivatives is streamlined by propagating the tangents in parallel, leading to faster evaluations of the callbacks. As expected, the larger the model, the more significant the performance gain.
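
As an illustration (a sketch, assuming a CUDA GPU is available and flp has been instantiated with device=CUDADevice(); Argos.initial is assumed to return a device array in that case):

x = Argos.initial(flp)      # assumed to return a CuArray on the GPU
Argos.update!(flp, x)
Argos.objective(flp, x)     # same callback as on the CPU
g = similar(x)
Argos.gradient!(flp, g, x)  # gradient stored on the device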

diff --git a/dev/man/moi_wrapper/index.html b/dev/man/moi_wrapper/index.html index 23bae26..e2c33f6 100644 --- a/dev/man/moi_wrapper/index.html +++ b/dev/man/moi_wrapper/index.html @@ -72,6 +72,6 @@ Number of equality constraint Jacobian evaluations = 16 Number of inequality constraint Jacobian evaluations = 16 Number of Lagrangian Hessian evaluations = 15 -Total seconds in IPOPT = 6.693 +Total seconds in IPOPT = 6.504 -EXIT: Optimal Solution Found. +EXIT: Optimal Solution Found. diff --git a/dev/man/nlpmodel_wrapper/index.html b/dev/man/nlpmodel_wrapper/index.html index 9b3903e..2260519 100644 --- a/dev/man/nlpmodel_wrapper/index.html +++ b/dev/man/nlpmodel_wrapper/index.html @@ -72,4 +72,4 @@ flp = Argos.FullSpaceEvaluator(datafile; device=CUDADevice())

The OPFModel structure works exclusively on the host memory, so we have to bridge the evaluator flp to the host before creating a new instance of OPFModel:

bridge = Argos.bridge(flp)
 model = Argos.OPFModel(bridge)
-
Note

Bridging an evaluator between the host and the device induces significant data movement, as every input and output has to be transferred back and forth between the host and the device. In practice, however, we have observed that the transfer time is negligible compared to the other operations (linear algebra, KKT system solution) performed inside the optimization algorithm.

+
Note

Bridging an evaluator between the host and the device induces significant data movement, as every input and output has to be transferred back and forth between the host and the device. In practice, however, we have observed that the transfer time is negligible compared to the other operations (linear algebra, KKT system solution) performed inside the optimization algorithm.
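
Once bridged, the model can be queried from the host through the standard NLPModels API (a short sketch):

using NLPModels
x0 = model.meta.x0            # initial point, stored on the host
NLPModels.obj(model, x0)      # objective, evaluated through the bridge
g = NLPModels.grad(model, x0) # gradient, copied back to the host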

diff --git a/dev/man/overview/index.html b/dev/man/overview/index.html index 13448da..876d617 100644 --- a/dev/man/overview/index.html +++ b/dev/man/overview/index.html @@ -27,79 +27,79 @@ Argos.update!(flp, x) # The values in the cache are modified accordingly [stack.vmag stack.vang]
9×2 Matrix{Float64}:
- 0.1659      0.0
- 0.20275     0.463943
- 0.880073    0.966874
- 0.169488    0.621777
- 0.415538    0.687878
- 0.654775    0.398492
- 0.00854804  0.0339881
- 0.558817    0.344786
- 0.0647712   0.476734
Note

Every time we have a new variable x, it is important to refresh the cache by explicitly calling Argos.update!(flp, x) before calling the other callbacks.

Callbacks

Now that the cache has been refreshed by calling update!, one can query the different callbacks to evaluate the objective, the constraints, and the derivatives:

Objective:

obj = Argos.objective(flp, x)
1780.222997912705

Gradient:

g = zeros(n)
+ 0.962568   0.0
+ 0.668592   0.976681
+ 0.0289271  0.757135
+ 0.711741   0.292609
+ 0.780572   0.0379521
+ 0.0624787  0.725597
+ 0.769791   0.235386
+ 0.455982   0.536132
+ 0.860089   0.903858
Note

Every time we have a new variable x, it is important to refresh the cache by explicitly calling Argos.update!(flp, x) before calling the other callbacks.

Callbacks

Now that the cache has been refreshed by calling update!, one can query the different callbacks to evaluate the objective, the constraints, and the derivatives:

Objective:

obj = Argos.objective(flp, x)
13460.528303905805

Gradient:

g = zeros(n)
 Argos.gradient!(flp, g, x)
 g
19-element Vector{Float64}:
-    0.0
-    0.0
-   49.81990469439767
-    0.0
-    0.0
-    0.0
-    0.0
-    0.0
-  210.63924855446834
-    0.0
-    0.0
-    0.0
-    0.0
-    0.0
-  215.19451777528795
-    0.0
-    0.0
-  614.872939446702
- 1775.7633612243587

Constraints:

cons = zeros(m)
+     0.0
+     0.0
+ 80265.17006053509
+     0.0
+     0.0
+     0.0
+     0.0
+     0.0
+ 33973.57255815164
+     0.0
+     0.0
+     0.0
+     0.0
+     0.0
+ 25120.69986599717
+     0.0
+     0.0
+  1802.4434973304851
+   974.4766323865786

Constraints:

cons = zeros(m)
 Argos.constraint!(flp, cons, x)
 cons
36-element Vector{Float64}:
- -0.07560564590935401
-  4.609143102788424
-  0.19755169112898222
-  1.4687134977988547
- -5.002985423521609
-  0.9472750313155336
-  0.6019430024637626
-  1.213309754629246
- -0.1254261283969036
-  0.7308215043235755
+  1.0904235463636058
+ -0.3559567214955834
+  0.7720292513378819
+  0.12795603997571225
+  0.30575170129102974
+  0.40570774285271094
+ -1.8918124393642244
+  6.938524549178411
+ -2.73620981933594
+  4.40162164182465
   ⋮
-  0.09123845213102554
-  1.1714338101517208
-  0.9720568282146828
- 28.96059554538682
-  0.0029678583153063807
- 17.820920696013193
-  1.3511164269628986
-  0.0383518615283166
-  0.04287625230689705
Note

All the callbacks are written to modify the data (constraints, gradient) inplace, to avoid unneeded allocations. In addition, Argos.jl provides variants that automatically allocate the return values:

g = Argos.gradient(flp, x)
+ 18.49868397880912
+  2.727852500650898
+  0.06933273803732784
+  0.0012816971003801103
+ 28.761845122075027
+  5.201273416620432
+ 11.835144097966428
+  5.468121680284105
+ 16.80688427537028
Note

All the callbacks are written to modify the data (constraints, gradient) inplace, to avoid unneeded allocations. In addition, Argos.jl provides variants that automatically allocate the return values:

g = Argos.gradient(flp, x)
 c = Argos.constraint(flp, x)
36-element Vector{Float64}:
- -0.07560564590935401
-  4.609143102788424
-  0.19755169112898222
-  1.4687134977988547
- -5.002985423521609
-  0.9472750313155336
-  0.6019430024637626
-  1.213309754629246
- -0.1254261283969036
-  0.7308215043235755
+  1.0904235463636058
+ -0.3559567214955834
+  0.7720292513378819
+  0.12795603997571225
+  0.30575170129102974
+  0.40570774285271094
+ -1.8918124393642244
+  6.938524549178411
+ -2.73620981933594
+  4.40162164182465
   ⋮
-  0.09123845213102554
-  1.1714338101517208
-  0.9720568282146828
- 28.96059554538682
-  0.0029678583153063807
- 17.820920696013193
-  1.3511164269628986
-  0.0383518615283166
-  0.04287625230689705

Finally, one can reset the evaluator to its original state using reset!:

Argos.reset!(flp)
+ 18.49868397880912
+  2.727852500650898
+  0.06933273803732784
+  0.0012816971003801103
+ 28.761845122075027
+  5.201273416620432
+ 11.835144097966428
+  5.468121680284105
+ 16.80688427537028

Finally, one can reset the evaluator to its original state using reset!:

Argos.reset!(flp)
 [stack.vmag stack.vang]
9×2 Matrix{Float64}:
  1.0  0.0
  1.0  0.0
@@ -109,4 +109,4 @@
  1.0  0.0
  1.0  0.0
  1.0  0.0
- 1.0  0.0
+ 1.0 0.0 diff --git a/dev/man/reducedspace/index.html b/dev/man/reducedspace/index.html index 1b1349b..05104a5 100644 --- a/dev/man/reducedspace/index.html +++ b/dev/man/reducedspace/index.html @@ -94,7 +94,7 @@ * #iterations: 4 * Time Jacobian (s) ........: 0.0001 * Time linear solver (s) ...: 0.0001 - * Time total (s) ...........: 0.3937

with a slightly different solution (as we have loosened the tolerance):

stack = red.stack
+  * Time total (s) ...........: 0.3729

with a slightly different solution (as we have loosened the tolerance):

stack = red.stack
 [stack.vmag stack.vang]
9×2 Matrix{Float64}:
  1.1       0.0
  1.1       0.0478953
@@ -141,4 +141,4 @@
  -1573.41    -760.654    2476.81     -21.0085   -94.5838
    100.337    -60.9243    -21.0085  3922.1     2181.62
    105.971    -11.7018    -94.5838  2181.62    4668.9

As we will explain later, the computation of the reduced Jacobian and reduced Hessian can be streamlined on the GPU.

Deport on CUDA GPU

Instantiating a ReducedSpaceEvaluator on an NVIDIA GPU translates to:

using CUDAKernels # assuming CUDAKernels is installed
-red = Argos.ReducedSpaceEvaluator(datafile; device=CUDADevice(), nbatch_hessian=256)

The number of batches nbatch_hessian is the number of right-hand sides used to streamline the solution of the linear systems.

+red = Argos.ReducedSpaceEvaluator(datafile; device=CUDADevice(), nbatch_hessian=256)

The number of batches nbatch_hessian is the number of right-hand sides used to streamline the solution of the linear systems.

diff --git a/dev/optim/biegler/index.html b/dev/optim/biegler/index.html index 6b237d8..7cd20cd 100644 --- a/dev/optim/biegler/index.html +++ b/dev/optim/biegler/index.html @@ -95,10 +95,10 @@ Number of constraint evaluations = 16 Number of constraint Jacobian evaluations = 15 Number of Lagrangian Hessian evaluations = 14 -Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 13.895 -Total wall-clock secs in linear solver = 0.073 +Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 13.404 +Total wall-clock secs in linear solver = 0.067 Total wall-clock secs in NLP function evaluations = 0.001 -Total wall-clock secs = 13.969 +Total wall-clock secs = 13.473 EXIT: Optimal Solution Found (tol = 1.0e-08). -"Execution stats: Optimal Solution Found (tol = 1.0e-08)."
Info

Note that we get the exact same convergence as in the full-space.

+"Execution stats: Optimal Solution Found (tol = 1.0e-08)."
Info

Note that we get the exact same convergence as in the full-space.

diff --git a/dev/optim/fullspace/index.html b/dev/optim/fullspace/index.html index 8966e4a..3367d8c 100644 --- a/dev/optim/fullspace/index.html +++ b/dev/optim/fullspace/index.html @@ -80,10 +80,10 @@ Number of constraint evaluations = 16 Number of constraint Jacobian evaluations = 15 Number of Lagrangian Hessian evaluations = 14 -Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 6.513 -Total wall-clock secs in linear solver = 0.008 -Total wall-clock secs in NLP function evaluations = 0.006 -Total wall-clock secs = 6.527 +Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 6.269 +Total wall-clock secs in linear solver = 0.007 +Total wall-clock secs in NLP function evaluations = 0.005 +Total wall-clock secs = 6.282 EXIT: Optimal Solution Found (tol = 1.0e-08). "Execution stats: Optimal Solution Found (tol = 1.0e-08)."

Querying the solution

MadNLP returns a MadNLPExecutionStats object storing the solution. One can query the optimal objective as:

stats.objective
5296.686202870398

and the optimal solution:

stats.solution
19-element Vector{Float64}:
@@ -161,12 +161,12 @@
 Number of constraint evaluations                     = 7
 Number of constraint Jacobian evaluations            = 6
 Number of Lagrangian Hessian evaluations             = 5
-Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  0.008
+Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  0.007
 Total wall-clock secs in linear solver                      =  0.000
 Total wall-clock secs in NLP function evaluations           =  0.002
-Total wall-clock secs                                       =  0.010
+Total wall-clock secs                                       =  0.009
 
 EXIT: Maximum Number of Iterations Exceeded.
 "Execution stats: Maximum Number of Iterations Exceeded."

Most importantly, one may want to use a different sparse linear solver than UMFPACK, the default in MadNLP. We recommend using the HSL solvers (the installation procedure is detailed here). Once HSL is installed, one can solve the OPF with:

using MadNLPHSL
 solver = MadNLP.MadNLPSolver(model; linear_solver=Ma27Solver)
-MadNLP.solve!(solver)
+MadNLP.solve!(solver) diff --git a/dev/optim/reducedspace/index.html b/dev/optim/reducedspace/index.html index d548393..a1a3ff1 100644 --- a/dev/optim/reducedspace/index.html +++ b/dev/optim/reducedspace/index.html @@ -133,10 +133,10 @@ Number of constraint evaluations = 29 Number of constraint Jacobian evaluations = 26 Number of Lagrangian Hessian evaluations = 25 -Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 3.280 +Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 3.143 Total wall-clock secs in linear solver = 0.000 -Total wall-clock secs in NLP function evaluations = 0.398 -Total wall-clock secs = 3.678 +Total wall-clock secs in NLP function evaluations = 0.379 +Total wall-clock secs = 3.522 EXIT: Optimal Solution Found (tol = 1.0e-06). "Execution stats: Optimal Solution Found (tol = 1.0e-06)."
Info

We recommend setting the tolerance above the tolerance of the Newton-Raphson algorithm used inside ReducedSpaceEvaluator. Indeed, the power flow is solved only approximately, leading to slightly inaccurate evaluations and derivatives that can impact the convergence of the interior-point algorithm. In general, we recommend setting tol=1e-5, as in the sketch below.
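
Concretely, the tolerance is passed when instantiating the MadNLP solver (a sketch, assuming model is the OPFModel wrapping the ReducedSpaceEvaluator built earlier on this page):

solver = MadNLP.MadNLPSolver(model; tol=1e-5)
MadNLP.solve!(solver)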

Info

Here, we are using Lapack on the CPU to solve the condensed KKT system at each iteration of the interior-point algorithm. However, if an NVIDIA GPU is available, we recommend using a CUDA-accelerated Lapack version, which is more efficient than the default one. If MadNLPGPU is installed, this amounts to

using MadNLPGPU
@@ -161,4 +161,4 @@
  1.1       0.0105224
  1.08949  -0.0208788
  1.1       0.0158063
- 1.07176  -0.0805509
+ 1.07176 -0.0805509 diff --git a/dev/quickstart/cpu/index.html b/dev/quickstart/cpu/index.html index e65962c..5d31173 100644 --- a/dev/quickstart/cpu/index.html +++ b/dev/quickstart/cpu/index.html @@ -52,10 +52,10 @@ Number of constraint evaluations = 20 Number of constraint Jacobian evaluations = 20 Number of Lagrangian Hessian evaluations = 19 -Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 6.271 +Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 6.086 Total wall-clock secs in linear solver = 0.016 -Total wall-clock secs in NLP function evaluations = 3.163 -Total wall-clock secs = 9.450 +Total wall-clock secs in NLP function evaluations = 3.118 +Total wall-clock secs = 9.220 EXIT: Optimal Solution Found (tol = 1.0e-08).

Dommel & Tinney's method (reduce-then-linearize)

Tip
julia> Argos.run_opf(datafile, Argos.DommelTinney(); tol=1e-5);
This is MadNLP version v0.8.4, running with Lapack-CPU (CHOLESKY)
 
@@ -107,9 +107,9 @@
 Number of constraint evaluations                     = 18
 Number of constraint Jacobian evaluations            = 18
 Number of Lagrangian Hessian evaluations             = 17
-Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  6.205
+Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  6.122
 Total wall-clock secs in linear solver                      =  0.009
 Total wall-clock secs in NLP function evaluations           =  0.077
-Total wall-clock secs                                       =  6.291
+Total wall-clock secs                                       =  6.207
 
-EXIT: Optimal Solution Found (tol = 1.0e-05).
+EXIT: Optimal Solution Found (tol = 1.0e-05). diff --git a/dev/quickstart/cuda/index.html b/dev/quickstart/cuda/index.html index 47cbd1f..92e9e4b 100644 --- a/dev/quickstart/cuda/index.html +++ b/dev/quickstart/cuda/index.html @@ -6,4 +6,4 @@

Full-space method

ArgosCUDA.run_opf_gpu(datafile, Argos.FullSpace())
 

Biegler's method (linearize-then-reduce)

ArgosCUDA.run_opf_gpu(datafile, Argos.BieglerReduction(); linear_solver=LapackGPUSolver)
 

Dommel & Tinney's method (reduce-then-linearize)

ArgosCUDA.run_opf_gpu(datafile, Argos.DommelTinney(); linear_solver=LapackGPUSolver)
-
+ diff --git a/dev/references/index.html b/dev/references/index.html index e51f5bc..acb6ca3 100644 --- a/dev/references/index.html +++ b/dev/references/index.html @@ -1,2 +1,2 @@ -References · Argos.jl

References

Argos has led to several publications in peer-reviewed journals.

PP2022 details how Argos evaluates the second-order reduced derivatives in parallel on the GPU. All results were generated with this artifact.

PSCC2022 uses the augmented Lagrangian algorithm implemented in Argos to solve static and real-time OPF. All results were generated with this artifact.

ARXIV2022 demonstrates the full capabilities of Argos to solve large-scale OPF on CUDA GPUs, both in the full space and in the reduced space.

+References · Argos.jl

References

Argos has led to several publications in peer-reviewed journals.

PP2022 details how Argos evaluates the second-order reduced derivatives in parallel on the GPU. All results were generated with this artifact.

PSCC2022 uses the augmented Lagrangian algorithm implemented in Argos to solve static and real-time OPF. All results were generated with this artifact.

ARXIV2022 demonstrates the full capabilities of Argos to solve large-scale OPF on CUDA GPUs, both in the full space and in the reduced space.