diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index 4cd06cc..be63727 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2023-12-06T09:15:46","documenter_version":"1.2.1"}} \ No newline at end of file +{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2023-12-10T01:37:28","documenter_version":"1.2.1"}} \ No newline at end of file diff --git a/dev/index.html b/dev/index.html index e15d270..1eabadc 100644 --- a/dev/index.html +++ b/dev/index.html @@ -5,4 +5,4 @@ author={Pacaud, Fran{\c{c}}ois and Shin, Sungho and Schanen, Michel and Maldonado, Daniel Adrian and Anitescu, Mihai}, journal={arXiv preprint arXiv:2203.11875}, year={2022} -}

Funding

This research was supported by the Exascale Computing Project (17-SC-20-SC), a joint project of the U.S. Department of Energy’s Office of Science and National Nuclear Security Administration, responsible for delivering a capable exascale ecosystem, including software, applications, and hardware technology, to support the nation’s exascale computing imperative.

Table of contents

Quickstart

OPF Model

OPF Solution

Wrappers

Library

diff --git a/dev/lib/api/index.html b/dev/lib/api/index.html index e069a7b..1825cdb 100644 --- a/dev/lib/api/index.html +++ b/dev/lib/api/index.html @@ -2,4 +2,4 @@ Evaluators API · Argos.jl

Evaluator API

Description

Argos.AbstractNLPEvaluatorType
AbstractNLPEvaluator

AbstractNLPEvaluator implements the bridge between the problem formulation (see ExaPF.AbstractFormulation) and the optimization solver. Once the problem formulation is bridged, the evaluator allows one to evaluate:

  • the objective;
  • the gradient of the objective;
  • the constraints;
  • the Jacobian of the constraints;
  • the Jacobian-vector and transpose-Jacobian vector products of the constraints;
  • the Hessian of the objective;
  • the Hessian of the Lagrangian.
source
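To make the workflow concrete, here is a minimal sketch of how an evaluator is typically driven. The case file name is hypothetical, and we assume a ReducedSpaceEvaluator built from a MATPOWER file, as in the examples below:

using Argos
datafile = "case9.m"            # hypothetical MATPOWER instance
nlp = Argos.ReducedSpaceEvaluator(datafile)
n = Argos.n_variables(nlp)
m = Argos.n_constraints(nlp)
u = Argos.initial(nlp)          # default starting point
Argos.update!(nlp, u)           # refresh the internal cache first
obj = Argos.objective(nlp, u)   # then query the callbacks
g = zeros(n); Argos.gradient!(nlp, g, u)
c = zeros(m); Argos.constraint!(nlp, c, u)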

API Reference

Optimization

Argos.optimize!Function
optimize!(optimizer, nlp::AbstractNLPEvaluator, x0)

Use the optimization routine implemented in optimizer to solve the optimal power flow problem specified in the evaluator nlp. The initial point is specified by x0.

Return the solution as a named tuple, with fields

  • status::MOI.TerminationStatus: Solver's termination status, as specified by MOI
  • minimum::Float64: final objective
  • minimizer::AbstractVector: final solution vector, with the same ordering as the Variables specified in nlp.
optimize!(optimizer, nlp::AbstractNLPEvaluator)

Wrap the previous optimize! function and pass as initial guess x0 the initial value returned by initial(nlp).

Examples

nlp = Argos.ReducedSpaceEvaluator(datafile)
 optimizer = Ipopt.Optimizer()
 solution = Argos.optimize!(optimizer, nlp)

Notes

By default, the optimization routine solves a minimization problem.

source

Attributes

Argos.VariablesType
Variables <: AbstractNLPAttribute end

Attribute corresponding to the optimization variables attached to a given AbstractNLPEvaluator.

source
Argos.ConstraintsType
Constraints <: AbstractNLPAttribute end

Attribute corresponding to the constraints attached to a given AbstractNLPEvaluator.

source
Argos.n_variablesFunction
n_variables(nlp::AbstractNLPEvaluator)

Get the number of variables in the problem.

source
Argos.n_constraintsFunction
n_constraints(nlp::AbstractNLPEvaluator)

Get the number of constraints in the problem.

source
Argos.constraints_typeFunction
constraints_type(nlp::AbstractNLPEvaluator)

Return the type of the non-linear constraints of the evaluator nlp, as a Symbol. The result is :inequality if the problem has only inequality constraints, :equality if it has only equality constraints, or :mixed if it has both types of constraints.

source
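As an illustration, the attributes above can be used to allocate correctly-sized buffers before calling the callbacks. A short sketch, assuming nlp is any evaluator instantiated as in the previous examples:

n = Argos.n_variables(nlp)
m = Argos.n_constraints(nlp)
Argos.constraints_type(nlp)     # :inequality, :equality, or :mixed
g = zeros(n)                    # buffer for the gradient
cons = zeros(m)                 # buffer for the constraints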

Callbacks

Argos.update!Function
update!(nlp::AbstractNLPEvaluator, u::AbstractVector)

Update the internal structure inside nlp with the new entry u. This method has to be called before any other callback.

source
Argos.objectiveFunction
objective(nlp::AbstractNLPEvaluator, u)::Float64

Evaluate the objective at the given variable u.

source
Argos.gradient!Function
gradient!(nlp::AbstractNLPEvaluator, g, u)

Evaluate the gradient of the objective at the given variable u. Store the result inplace in the vector g.

Note

The vector g should have the same dimension as u.

source
Argos.constraint!Function
constraint!(nlp::AbstractNLPEvaluator, cons, u)

Evaluate the constraints of the problem at the given variable u. Store the result inplace, in the vector cons.

Note

The vector cons should have the same dimension as the result returned by n_constraints(nlp).

source
Argos.jacobian!Function
jacobian!(nlp::AbstractNLPEvaluator, jac::AbstractMatrix, u)

Evaluate the Jacobian of the constraints at the variable u. Store the result inplace, in the m x n dense matrix jac.

source
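For instance, a dense Jacobian can be evaluated as follows (a sketch assuming nlp and u are defined as before; update! must be called first):

n, m = Argos.n_variables(nlp), Argos.n_constraints(nlp)
J = zeros(m, n)                 # dense m x n buffer
Argos.update!(nlp, u)
Argos.jacobian!(nlp, J, u)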
Argos.jacobian_coo!Function
jacobian_coo!(nlp::AbstractNLPEvaluator, jac::AbstractVector, u)

Evaluate the (sparse) Jacobian of the constraints at variable u in COO format. Store the result inplace, in the vector jac of length nnzj.

source
Argos.jprod!Function
jprod!(nlp::AbstractNLPEvaluator, jv, u, v)

Evaluate the Jacobian-vector product $J v$ of the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • u is a vector with dimension n
  • v is a vector with dimension n
  • jv is a vector with dimension m
source
Argos.jtprod!Function
jtprod!(nlp::AbstractNLPEvaluator, jv, u, v)

Evaluate the transpose Jacobian-vector product $J^{T} v$ of the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • u is a vector with dimension n
  • v is a vector with dimension m
  • jv is a vector with dimension n
source
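The dimension conventions of the two products can be summarized in a short sketch (assuming nlp and u are defined as before):

n, m = Argos.n_variables(nlp), Argos.n_constraints(nlp)
v = ones(n); jv = zeros(m)
Argos.jprod!(nlp, jv, u, v)     # jv = J * v
w = ones(m); jtw = zeros(n)
Argos.jtprod!(nlp, jtw, u, w)   # jtw = J' * w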
Argos.ojtprod!Function
ojtprod!(nlp::AbstractNLPEvaluator, jv, u, σ, v)

Evaluate the transpose Jacobian-vector product J' * [σ ; v], with J the Jacobian of the vector [f(x); h(x)], where f(x) is the objective and h(x) the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • jv is a vector with dimension n
  • u is a vector with dimension n
  • σ is a scalar
  • v is a vector with dimension m
source
Argos.hessian!Function
hessian!(nlp::AbstractNLPEvaluator, H, u)

Evaluate the Hessian ∇²f(u) of the objective function f(u). Store the result inplace, in the n x n dense matrix H.

source
Argos.hessian_coo!Function
hessian_coo!(nlp::AbstractNLPEvaluator, hess::AbstractVector, u)

Evaluate the (sparse) Hessian of the constraints at variable u in COO format. Store the result inplace, in the vector hess of length nnzh.

source
Argos.hessprod!Function
hessprod!(nlp::AbstractNLPEvaluator, hessvec, u, v)

Evaluate the Hessian-vector product ∇²f(u) * v of the objective evaluated at variable u. Store the result inplace, in the vector hessvec.

Note

The vector hessvec should have the same length as u.

source
Argos.hessian_lagrangian_prod!Function
hessian_lagrangian_prod!(nlp::AbstractNLPEvaluator, hessvec, u, y, σ, v)

Evaluate the Hessian-vector product of the Lagrangian function $L(u, y) = σ f(u) + \sum_i y_i c_i(u)$ with a vector v:

\[∇²L(u, y) ⋅ v = σ ∇²f(u) ⋅ v + \sum_i y_i ∇²c_i(u) ⋅ v\]

Store the result inplace, in the vector hessvec.

Arguments

  • hessvec is an AbstractVector with dimension n, which is modified inplace.
  • u is an AbstractVector with dimension n, storing the current variable.
  • y is an AbstractVector with dimension m, storing the current constraints' multipliers.
  • σ is a scalar, encoding the objective's scaling.
  • v is a vector with dimension n.
source
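A hedged sketch of a matrix-free use of this callback, as it would appear inside an iterative solver (nlp and u as before; the multipliers below are arbitrary placeholders):

n, m = Argos.n_variables(nlp), Argos.n_constraints(nlp)
y = ones(m)                     # placeholder multipliers
σ = 1.0                         # objective scaling
v = randn(n)
hv = zeros(n)
Argos.update!(nlp, u)
Argos.hessian_lagrangian_prod!(nlp, hv, u, y, σ, v)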
Argos.hessian_lagrangian_penalty_prod!Function
hessian_lagrangian_penalty_prod!(nlp::AbstractNLPEvaluator, hessvec, u, y, σ, d, v)

Evaluate the Hessian-vector product of the Augmented Lagrangian function $L(u, y) = σ f(u) + \sum_i y_i c_i(u) + \frac{1}{2} \sum_i d_i c_i(u)^2$ with a vector v:

\[∇²L(u, y) ⋅ v = σ ∇²f(u) ⋅ v + \sum_i (y_i + d_i) ∇²c_i(u) ⋅ v + \sum_i d_i (∇c_i(u)^T v) ∇c_i(u)\]

Store the result inplace, in the vector hessvec.

Arguments

  • hessvec is an AbstractVector with dimension n, which is modified inplace.
  • u is an AbstractVector with dimension n, storing the current variable.
  • y is an AbstractVector with dimension m, storing the current constraints' multipliers.
  • σ is a scalar.
  • v is a vector with dimension n.
  • d is a vector with dimension m.
source

Utilities

Argos.reset!Function
reset!(nlp::AbstractNLPEvaluator)

Reset evaluator nlp to default configuration.

source
diff --git a/dev/lib/evaluators/index.html b/dev/lib/evaluators/index.html index 5c0f23c..4a795dd 100644 --- a/dev/lib/evaluators/index.html +++ b/dev/lib/evaluators/index.html @@ -90,4 +90,4 @@ julia> @assert isa(x, Array) # x is defined on the host memory julia> Argos.objective(bdg, x) # evaluate the objective on the device -source +source diff --git a/dev/lib/kkt/index.html b/dev/lib/kkt/index.html index 6fe4782..44677a2 100644 --- a/dev/lib/kkt/index.html +++ b/dev/lib/kkt/index.html @@ -23,4 +23,4 @@ julia> kkt = Argos.MixedAuglagKKTSystem{T, VT, MT}(opf) julia> MadNLP.get_kkt(kkt) # return the matrix to factorize -

Notes

MixedAuglagKKTSystem can be instantiated both on the host memory (CPU) or on a NVIDIA GPU using CUDA.

Supports only bound-constrained optimization problems (hence, no Jacobian).

References

[PMSSA2022] Pacaud, François, Daniel Adrian Maldonado, Sungho Shin, Michel Schanen, and Mihai Anitescu. "A feasible reduced space method for real-time optimal power flow." Electric Power Systems Research 212 (2022): 108268.

source diff --git a/dev/lib/wrappers/index.html b/dev/lib/wrappers/index.html index 2e68c6e..b277d81 100644 --- a/dev/lib/wrappers/index.html +++ b/dev/lib/wrappers/index.html @@ -9,4 +9,4 @@ julia> nlp = Argos.ReducedSpaceEvaluator(datafile); julia> ev = Argos.MOIEvaluator(nlp) -

Attributes

source diff --git a/dev/man/fullspace/index.html b/dev/man/fullspace/index.html index 89aa0e5..3afeaae 100644 --- a/dev/man/fullspace/index.html +++ b/dev/man/fullspace/index.html @@ -83,7 +83,7 @@ #lines: 9 giving a mathematical formulation with: #controls: 5 - #states : 14, ExaPF.ComposedExpressions{ExaPF.PolarBasis{Vector{Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}, ExaPF.MultiExpressions}(PolarBasis (AbstractExpression), ExaPF.MultiExpressions(ExaPF.AutoDiff.AbstractExpression[CostFunction (AbstractExpression), PowerFlowBalance (AbstractExpression), PowerGenerationBounds (AbstractExpression), LineFlows (AbstractExpression)])), [11, 12, 13, 14, 15, 16, 17, 18, 4, 5, 6, 7, 8, 9, 1, 2, 3, 20, 21], 21-elements NetworkStack{Vector{ForwardDiff.Dual{Nothing, Float64, 8}}}, 21-elements NetworkStack{Vector{ForwardDiff.Dual{Nothing, Float64, 8}}}, [1, 1, 1, 2, 3, 4, 2, 3, 4, 5, 6, 7, 5, 6, 7, 8, 8, 1, 1], 8, ForwardDiff.Dual{Nothing, Float64, 8}[Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0) … Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0), Dual{Nothing}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)], ForwardDiff.Dual{Nothing, Float64, 8}[Dual{Nothing}(0.0,5.8668939628608e-310,0.0,0.0,1.0864618449742e-310,0.0,0.0,1.24943112172035e-310,0.0), Dual{Nothing}(0.0,1.4124003984665e-310,0.0,0.0,1.5753696752126e-310,0.0,0.0,5.70392468611465e-310,0.0), Dual{Nothing}(0.0,1.9013082287049e-310,0.0,0.0,2.5531853356894e-310,0.0,0.0,2.22724678219715e-310,0.0), Dual{Nothing}(0.0,2.66183152018684e-310,0.0,0.0,2.5531853356894e-310,0.0,0.0,2.71615461243555e-310,0.0), Dual{Nothing}(0.0,3.3680317194201e-310,0.0,0.0,3.0420931659278e-310,0.0,0.0,3.4766779039175e-310,0.0), Dual{Nothing}(0.0,3.3680317194201e-310,0.0,0.0,3.5310009961662e-310,0.0,0.0,4.18287810315074e-310,0.0), Dual{Nothing}(0.0,3.8569395496585e-310,0.0,0.0,4.29152428764817e-310,0.0,0.0,4.18287810315074e-310,0.0), Dual{Nothing}(0.0,4.3458473798969e-310,0.0,0.0,4.9977244868814e-310,0.0,0.0,4.67178593338914e-310,0.0), Dual{Nothing}(0.0,5.10637067137883e-310,0.0,0.0,4.9977244868814e-310,0.0,0.0,5.16069376362754e-310,0.0), Dual{Nothing}(0.0,5.32366304037368e-310,0.0,0.0,5.4866323171198e-310,0.0,0.0,5.64960159386594e-310,0.0) … Dual{Nothing}(0.0,1.412400398466485e-309,0.0,0.0,1.4286973261411e-309,0.0,0.0,1.44499425381571e-309,0.0), Dual{Nothing}(0.0,1.461291181490325e-309,0.0,0.0,1.47758810916494e-309,0.0,0.0,1.49388503683955e-309,0.0), Dual{Nothing}(0.0,1.510181964514165e-309,0.0,0.0,1.52647889218878e-309,0.0,0.0,1.54277581986339e-309,0.0), Dual{Nothing}(0.0,1.559072747538005e-309,0.0,0.0,1.57536967521262e-309,0.0,0.0,1.59166660288723e-309,0.0), 
Dual{Nothing}(0.0,1.607963530561845e-309,0.0,0.0,1.62426045823646e-309,0.0,0.0,1.64055738591107e-309,0.0), Dual{Nothing}(0.0,1.656854313585685e-309,0.0,0.0,1.6731512412603e-309,0.0,0.0,1.68944816893491e-309,0.0), Dual{Nothing}(0.0,1.705745096609524e-309,0.0,0.0,1.72204202428414e-309,0.0,0.0,1.73833895195875e-309,0.0), Dual{Nothing}(0.0,1.754635879633364e-309,0.0,0.0,1.77093280730798e-309,0.0,0.0,1.78722973498259e-309,0.0), Dual{Nothing}(0.0,1.803526662657204e-309,0.0,0.0,1.81982359033182e-309,0.0,0.0,1.83612051800643e-309,0.0), Dual{Nothing}(0.0,1.852417445681044e-309,0.0,0.0,1.868714373355657e-309,0.0,0.0,1.88501130103027e-309,0.0)], sparse([1, 7, 13, 16, 2, 5, 11, 17, 3, 4 … 1, 7, 13, 16, 2, 5, 11, 17, 18, 19], [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 9, 9, 9, 9, 9, 9, 9, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 15, 15, 15, 16, 16, 16, 16, 17, 17, 17, 17, 18, 19], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 … 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 19, 19))

generating the matrix

flp.hess.H
19×19 SparseArrays.SparseMatrixCSC{Float64, Int64} with 103 stored entries:
 ⎡⠑⣤⡂⠡⣤⡂⠡⠌⠂⠀⎤
 ⎢⠌⡈⠻⣦⡈⠻⣦⠠⠁⠀⎥
 ⎢⠠⠻⣦⡈⠻⣦⡈⠁⠄⠀⎥
@@ -124,4 +124,4 @@
 ⎢⠠⠻⣦⡈⠳⣄⠀⠀⠀⠀⎥
 ⎢⡁⠆⠈⡛⠆⠈⡓⢄⠀⠀⎥
 ⎣⠈⠀⠁⠀⠀⠁⠀⠀⠑⠄⎦
Info

For the Hessian, only the lower-triangular part is returned.

Deport on CUDA GPU

Deporting all the operations on a CUDA GPU simply amounts to instantiating a FullSpaceEvaluator on the GPU, with

using CUDAKernels # suppose CUDAKernels has been downloaded
flp = Argos.FullSpaceEvaluator(datafile; device=CUDADevice())

Then, the API remains exactly the same as on the CPU.

When using device=CUDADevice(), the model is entirely instantiated on the device, without data left on the host (hence minimizing the communication costs). The computation of the derivatives is streamlined by propagating the tangents in parallel, leading to faster evaluations of the callbacks. As expected, the larger the model, the more significant the performance gain.
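For instance, the callback sequence from the overview carries over unchanged. A minimal sketch, assuming the GPU evaluator flp built above and that initial returns an array already allocated on the device:

x = Argos.initial(flp)          # assumed to be a device array (CuArray)
Argos.update!(flp, x)
Argos.objective(flp, x)         # evaluated entirely on the GPU
g = similar(x); Argos.gradient!(flp, g, x)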

diff --git a/dev/man/moi_wrapper/index.html b/dev/man/moi_wrapper/index.html index d5b5dbf..84506e4 100644 --- a/dev/man/moi_wrapper/index.html +++ b/dev/man/moi_wrapper/index.html @@ -72,6 +72,6 @@ Number of equality constraint Jacobian evaluations = 16 Number of inequality constraint Jacobian evaluations = 16 Number of Lagrangian Hessian evaluations = 15 -Total seconds in IPOPT = 6.484 +Total seconds in IPOPT = 6.542 -EXIT: Optimal Solution Found. +EXIT: Optimal Solution Found. diff --git a/dev/man/nlpmodel_wrapper/index.html b/dev/man/nlpmodel_wrapper/index.html index bf8385b..af00dea 100644 --- a/dev/man/nlpmodel_wrapper/index.html +++ b/dev/man/nlpmodel_wrapper/index.html @@ -72,4 +72,4 @@ flp = Argos.FullSpaceEvaluator(datafile; device=CUDADevice())

The OPFModel structure works exclusively on the host memory, so we have to bridge the evaluator flp to the host before creating a new instance of OPFModel:

bridge = Argos.bridge(flp)
 model = Argos.OPFModel(bridge)
Note

Bridging an evaluator between the host and the device induces significant data movements, as each input and output has to be transferred back and forth between the host and the device. However, we have noticed that in practice the time spent in these transfers is negligible compared to the other operations (linear algebra, KKT system solution) performed inside the optimization algorithm.
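Once wrapped, the model can be queried through the standard NLPModels.jl API. A minimal sketch, assuming model has been built as above:

using NLPModels
x0 = NLPModels.get_x0(model)
obj = NLPModels.obj(model, x0)  # objective, evaluated on the device
g = NLPModels.grad(model, x0)   # gradient, returned on the host
c = NLPModels.cons(model, x0)   # constraints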

diff --git a/dev/man/overview/index.html b/dev/man/overview/index.html index 4473e05..ecf3ba1 100644 --- a/dev/man/overview/index.html +++ b/dev/man/overview/index.html @@ -27,79 +27,79 @@ Argos.update!(flp, x) # The values in the cache are modified accordingly [stack.vmag stack.vang]
9×2 Matrix{Float64}:
 0.125724   0.0
 0.587203   0.774934
 0.90211    0.958682
 0.975033   0.335833
 0.0361907  0.208528
 0.679669   0.836171
 0.599291   0.751686
 0.780641   0.14568
 0.626903   0.117694
Note

Every time we have a new variable x, it is important to refresh the cache by explicitly calling Argos.update!(flp, x) before querying the other callbacks.

Callbacks

Now that the cache has been refreshed by calling update!, one can query the different callbacks to evaluate the objective, the constraints, and the derivatives:

Objective:

obj = Argos.objective(flp, x)
2171.26444994127

Gradient:

g = zeros(n)
 Argos.gradient!(flp, g, x)
 g
19-element Vector{Float64}:
    0.0
    0.0
 2095.734214352677
    0.0
    0.0
    0.0
    0.0
    0.0
  750.2598485160831
    0.0
    0.0
    0.0
    0.0
    0.0
 5818.525608570021
    0.0
    0.0
 1194.3861100147165
 1537.6786117174781

Constraints:

cons = zeros(m)
 Argos.constraint!(flp, cons, x)
 cons
36-element Vector{Float64}:
   3.684556232373139
   0.6918298742550475
   4.545098080676751
   0.6829091509403394
  -0.23013713479644893
   4.218817841945995
  -7.378427441377086
  -0.7592543773054352
  27.87913785982699
  -0.13118367087695687
   ⋮
 210.61757851639243
   0.1320503472123675
   6.031399379752205
   7.892669265892894
   0.35925701095506213
  22.8130246631155
  18.80285179354449
   0.4211011279705278
  18.83964759381388
Note

All the callbacks are written to modify the data (constraints, gradient) inplace, to avoid unneeded allocations. In addition, Argos.jl provides versions that allocate the return values automatically:

g = Argos.gradient(flp, x)
 c = Argos.constraint(flp, x)
36-element Vector{Float64}:
   3.684556232373139
   0.6918298742550475
   4.545098080676751
   0.6829091509403394
  -0.23013713479644893
   4.218817841945995
  -7.378427441377086
  -0.7592543773054352
  27.87913785982699
  -0.13118367087695687
   ⋮
 210.61757851639243
   0.1320503472123675
   6.031399379752205
   7.892669265892894
   0.35925701095506213
  22.8130246631155
  18.80285179354449
   0.4211011279705278
  18.83964759381388

Finally, one can reset the evaluator to its original state by using reset!:

Argos.reset!(flp)
 [stack.vmag stack.vang]
9×2 Matrix{Float64}:
  1.0  0.0
  1.0  0.0
@@ -109,4 +109,4 @@
  1.0  0.0
  1.0  0.0
  1.0  0.0
- 1.0  0.0
+ 1.0 0.0 diff --git a/dev/man/reducedspace/index.html b/dev/man/reducedspace/index.html index 73312cb..1c27791 100644 --- a/dev/man/reducedspace/index.html +++ b/dev/man/reducedspace/index.html @@ -94,7 +94,7 @@ * #iterations: 4 * Time Jacobian (s) ........: 0.0001 * Time linear solver (s) ...: 0.0001 - * Time total (s) ...........: 0.3845

+  * Time total (s) ...........: 0.3755

with a slightly different solution (as we have loosened the tolerance):

stack = red.stack
 [stack.vmag stack.vang]
9×2 Matrix{Float64}:
  1.1       0.0
  1.1       0.0478953
@@ -141,4 +141,4 @@
  -1573.41    -760.654    2476.81     -21.0085   -94.5838
    100.337    -60.9243    -21.0085  3922.1     2181.62
    105.971    -11.7018    -94.5838  2181.62    4668.9

As we will explain later, the computation of the reduced Jacobian and reduced Hessian can be streamlined on the GPU.

Deport on CUDA GPU

Instantiating a ReducedSpaceEvaluator on an NVIDIA GPU translates to:

using CUDAKernels # suppose CUDAKernels has been downloaded
red = Argos.ReducedSpaceEvaluator(datafile; device=CUDADevice(), nbatch_hessian=256)

The number of batches nbatch_hessian is the number of right-hand sides used to streamline the solution of the linear systems.

diff --git a/dev/optim/biegler/index.html b/dev/optim/biegler/index.html index a18a263..bb400c2 100644 --- a/dev/optim/biegler/index.html +++ b/dev/optim/biegler/index.html @@ -91,10 +91,10 @@ Number of constraint evaluations = 17 Number of constraint Jacobian evaluations = 15 Number of Lagrangian Hessian evaluations = 14 -Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 9.376 -Total wall-clock secs in linear solver = 0.023 +Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 9.266 +Total wall-clock secs in linear solver = 0.022 Total wall-clock secs in NLP function evaluations = 0.001 -Total wall-clock secs = 9.400 +Total wall-clock secs = 9.290 EXIT: Optimal Solution Found. -"Execution stats: Optimal Solution Found."
Info

Note that we get the exact same convergence as in the full-space.

+"Execution stats: Optimal Solution Found."
Info

Note that we get the exact same convergence as in the full-space.

diff --git a/dev/optim/fullspace/index.html b/dev/optim/fullspace/index.html index 9eccfbb..b378776 100644 --- a/dev/optim/fullspace/index.html +++ b/dev/optim/fullspace/index.html @@ -80,10 +80,10 @@ Number of constraint evaluations = 17 Number of constraint Jacobian evaluations = 15 Number of Lagrangian Hessian evaluations = 14 -Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 2.728 -Total wall-clock secs in linear solver = 0.434 +Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 2.689 +Total wall-clock secs in linear solver = 0.404 Total wall-clock secs in NLP function evaluations = 0.001 -Total wall-clock secs = 3.163 +Total wall-clock secs = 3.094 EXIT: Optimal Solution Found. "Execution stats: Optimal Solution Found."

Querying the solution

MadNLP returns a MadNLPExecutionStats object storing the solution. One can query the optimal objective as:

stats.objective
5296.6862028704

and the optimal solution:

stats.solution
41-element Vector{Float64}:
@@ -170,4 +170,4 @@
 EXIT: Maximum Number of Iterations Exceeded.
 "Execution stats: Maximum Number of Iterations Exceeded."

Most importantly, one may want to use a different sparse linear solver than UMFPACK, employed by default in MadNLP. We recommend using HSL solvers (the installation procedure is detailed here). Once HSL is installed, one can solve the OPF with:

using MadNLPHSL
 solver = MadNLP.MadNLPSolver(model; linear_solver=Ma27Solver)
-MadNLP.solve!(solver)
+MadNLP.solve!(solver) diff --git a/dev/optim/reducedspace/index.html b/dev/optim/reducedspace/index.html index 48bceb5..7a2a8ca 100644 --- a/dev/optim/reducedspace/index.html +++ b/dev/optim/reducedspace/index.html @@ -141,10 +141,10 @@ Number of constraint evaluations = 24 Number of constraint Jacobian evaluations = 23 Number of Lagrangian Hessian evaluations = 22 -Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 6.046 -Total wall-clock secs in linear solver = 0.007 -Total wall-clock secs in NLP function evaluations = 0.390 -Total wall-clock secs = 6.444 +Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 5.929 +Total wall-clock secs in linear solver = 0.008 +Total wall-clock secs in NLP function evaluations = 0.398 +Total wall-clock secs = 6.334 EXIT: Optimal Solution Found. "Execution stats: Optimal Solution Found."
Info

We recommend changing the default tolerance to be above the tolerance of the Newton-Raphson used inside ReducedSpaceEvaluator. Indeed, the power flow is solved only approximately, leading to slightly inaccurate evaluations and derivatives, impacting the convergence of the interior-point algorithm. In general, we recommend setting tol=1e-5.
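Concretely, the tolerance can be passed when building the solver. A short sketch, assuming model wraps a ReducedSpaceEvaluator as above:

using MadNLP
solver = MadNLP.MadNLPSolver(model; tol=1e-5)
MadNLP.solve!(solver)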

Info

Here, we are using Lapack on the CPU to solve the condensed KKT system at each iteration of the interior-point algorithm. However, if an NVIDIA GPU is available, we recommend using a CUDA-accelerated Lapack version, more efficient than the default Lapack. If MadNLPGPU is installed, this amounts to

using MadNLPGPU
@@ -184,4 +184,4 @@
  1.1       0.0105224
  1.08949  -0.0208788
  1.1       0.0158063
- 1.07176  -0.0805509
+ 1.07176 -0.0805509 diff --git a/dev/quickstart/cpu/index.html b/dev/quickstart/cpu/index.html index ae77f48..563a026 100644 --- a/dev/quickstart/cpu/index.html +++ b/dev/quickstart/cpu/index.html @@ -52,10 +52,10 @@ Number of constraint evaluations = 21 Number of constraint Jacobian evaluations = 20 Number of Lagrangian Hessian evaluations = 19 -Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 2.593 +Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 2.589 Total wall-clock secs in linear solver = 0.044 -Total wall-clock secs in NLP function evaluations = 4.190 -Total wall-clock secs = 6.827 +Total wall-clock secs in NLP function evaluations = 4.165 +Total wall-clock secs = 6.797 EXIT: Optimal Solution Found.

Biegler's method (linearize-then-reduce)

Tip
julia> Argos.run_opf(datafile, Argos.BieglerReduction(); lapack_algorithm=MadNLP.CHOLESKY);
This is MadNLP version v0.7.0, running with Lapack-CPU (CHOLESKY)
 
@@ -109,10 +109,10 @@
 Number of constraint evaluations                     = 21
 Number of constraint Jacobian evaluations            = 20
 Number of Lagrangian Hessian evaluations             = 19
-Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  3.289
-Total wall-clock secs in linear solver                      =  0.017
-Total wall-clock secs in NLP function evaluations           =  0.013
-Total wall-clock secs                                       =  3.319
+Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  3.171
+Total wall-clock secs in linear solver                      =  0.014
+Total wall-clock secs in NLP function evaluations           =  0.012
+Total wall-clock secs                                       =  3.198
 
 EXIT: Optimal Solution Found.

Dommel & Tinney's method (reduce-then-linearize)

Tip
julia> Argos.run_opf(datafile, Argos.DommelTinney(); tol=1e-5);
This is MadNLP version v0.7.0, running with Lapack-CPU (CHOLESKY)
 
@@ -164,9 +164,9 @@
 Number of constraint evaluations                     = 19
 Number of constraint Jacobian evaluations            = 18
 Number of Lagrangian Hessian evaluations             = 17
-Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  4.496
+Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  4.395
 Total wall-clock secs in linear solver                      =  0.008
-Total wall-clock secs in NLP function evaluations           =  1.204
-Total wall-clock secs                                       =  5.707
+Total wall-clock secs in NLP function evaluations           =  1.237
+Total wall-clock secs                                       =  5.640
 
-EXIT: Optimal Solution Found.
+EXIT: Optimal Solution Found. diff --git a/dev/quickstart/cuda/index.html b/dev/quickstart/cuda/index.html index 3f8639b..bde3854 100644 --- a/dev/quickstart/cuda/index.html +++ b/dev/quickstart/cuda/index.html @@ -6,4 +6,4 @@

Full-space method

ArgosCUDA.run_opf_gpu(datafile, Argos.FullSpace())
 

Biegler's method (linearize-then-reduce)

ArgosCUDA.run_opf_gpu(datafile, Argos.BieglerReduction(); linear_solver=LapackGPUSolver)
 

Dommel & Tinney's method (reduce-then-linearize)

ArgosCUDA.run_opf_gpu(datafile, Argos.DommelTinney(); linear_solver=LapackGPUSolver)
-
+ diff --git a/dev/references/index.html b/dev/references/index.html index fc3c303..f2f3b36 100644 --- a/dev/references/index.html +++ b/dev/references/index.html @@ -1,2 +1,2 @@ -References · Argos.jl

References

Argos has led to several publications in peer-reviewed journals.

PP2022 details how Argos evaluates the second-order reduced derivatives in parallel on the GPU. All results were generated with this artifact.

PSCC2022 uses the augmented Lagrangian algorithm implemented in Argos to solve static and real-time OPF. All results were generated with this artifact.

ARXIV2022 demonstrates the full capabilities of Argos to solve large-scale OPF on CUDA GPU, both in the full-space and in the reduced-space.
