Commit: Add support for generic number type (#3385)

* Add support for generic number type

Co-authored-by: Benoît Legat <[email protected]>

* Add tutorial for arbitrary precision

---------

Co-authored-by: Benoît Legat <[email protected]>
odow and blegat authored Jul 23, 2023
1 parent f220e9b commit d0be19b
Showing 10 changed files with 364 additions and 69 deletions.
2 changes: 2 additions & 0 deletions docs/Project.toml
@@ -1,4 +1,5 @@
[deps]
CDDLib = "3391f64e-dcde-5f30-b752-e11513730f60"
CSV = "336ed68f-0bac-5ca0-87d4-7b16caf5d00b"
Clarabel = "61c947e1-3e6d-4ee4-985a-eec8c727bd6e"
DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
@@ -29,6 +30,7 @@ Tables = "bd369af6-aec1-5ad0-b16a-f7cc5008161c"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"

[compat]
CDDLib = "=0.9.2"
CSV = "0.10"
Clarabel = "=0.5.1"
DataFrames = "1"
1 change: 1 addition & 0 deletions docs/make.jl
@@ -310,6 +310,7 @@ const _PAGES = [
"tutorials/conic/tips_and_tricks.md",
"tutorials/conic/simple_examples.md",
"tutorials/conic/dualization.md",
"tutorials/conic/arbitrary_precision.md",
"tutorials/conic/logistic_regression.md",
"tutorials/conic/experiment_design.md",
"tutorials/conic/min_ellipse.md",
28 changes: 28 additions & 0 deletions docs/src/manual/models.md
@@ -178,6 +178,34 @@ false
julia> model = Model(solver);
```

## Changing the number types

By default, the coefficients of affine and quadratic expressions are numbers
of type `Float64` or `Complex{Float64}` (see [Complex number support](@ref)).

The type `Float64` can be changed using the [`GenericModel`](@ref) constructor:

```jldoctest
julia> model = GenericModel{Rational{BigInt}}();

julia> @variable(model, x)
x

julia> @expression(model, expr, 1 // 3 * x)
1//3 x

julia> typeof(expr)
GenericAffExpr{Rational{BigInt}, GenericVariableRef{Rational{BigInt}}}
```

Using a `value_type` other than `Float64` is an advanced operation and should be
used only if the underlying solver actually solves the problem using the
provided value type.
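For example, Clarabel.jl supports `BigFloat` arithmetic. A minimal sketch of
pairing it with a `BigFloat` model (assuming Clarabel is installed; the new
arbitrary precision tutorial below walks through a complete example):

```julia
using JuMP
import Clarabel

# Both the model and the optimizer are parameterized by the same number type.
model = GenericModel{BigFloat}(Clarabel.Optimizer{BigFloat})
```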

!!! warning
[Nonlinear Modeling](@ref) is currently restricted to the `Float64` number
type.

## Print the model

By default, `show(model)` will print a summary of the problem:
156 changes: 156 additions & 0 deletions docs/src/tutorials/conic/arbitrary_precision.jl
@@ -0,0 +1,156 @@
# Copyright 2017, Iain Dunning, Joey Huchette, Miles Lubin, and contributors #src
# This Source Code Form is subject to the terms of the Mozilla Public License #src
# v.2.0. If a copy of the MPL was not distributed with this file, You can #src
# obtain one at https://mozilla.org/MPL/2.0/. #src

# # Arbitrary precision arithmetic

# The purpose of this tutorial is to explain how to use a solver which supports
# arithmetic using a number type other than `Float64`.

# This tutorial uses the following packages:

using JuMP
import CDDLib
import Clarabel

# ## Higher-precision arithmetic

# To create a model with a number type other than `Float64`, use [`GenericModel`](@ref)
# with an optimizer which supports the same number type:

model = GenericModel{BigFloat}(Clarabel.Optimizer{BigFloat})

# The syntax for adding decision variables is the same as a normal JuMP
# model, except that values are converted to `BigFloat`:

@variable(model, -1 <= x[1:2] <= sqrt(big"2"))

# Note that each `x` is now a [`GenericVariableRef{BigFloat}`](@ref), which
# means that the value of `x` in a solution will be a `BigFloat`.
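
# We can confirm the variable type directly:

typeof(x[1])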

# The lower and upper bounds of the decision variables are also `BigFloat`:

lower_bound(x[1])

#-

typeof(lower_bound(x[1]))

#-

upper_bound(x[2])

#-

typeof(upper_bound(x[2]))

# The syntax for adding constraints is the same as a normal JuMP model, except
# that coefficients are converted to `BigFloat`:

@constraint(model, c, x[1] == big"2" * x[2])

# The function is a [`GenericAffExpr`](@ref) with `BigFloat` for the
# coefficient and variable types;

constraint = constraint_object(c)
typeof(constraint.func)

# and the set is a [`MOI.EqualTo{BigFloat}`](@ref):

typeof(constraint.set)

# The syntax for adding an objective is the same as a normal JuMP model, except
# that coefficients are converted to `BigFloat`:

@objective(model, Min, 3x[1]^2 + 2x[2]^2 - x[1] - big"4" * x[2])

#-

typeof(objective_function(model))

# Here's the model we have built:

print(model)

# Let's solve and inspect the solution:

optimize!(model)
solution_summary(model)

# The value of each decision variable is a `BigFloat`:

value.(x)

# as well as other solution attributes like the objective value:

objective_value(model)

# and dual solution:

dual(c)

# This problem has an analytic solution of `x = [3//7, 3//14]`: substituting
# `x[1] == 2 * x[2]` from the constraint `c` into the objective gives
# `14 * x[2]^2 - 6 * x[2]`, which is minimized at `x[2] = 3 // 14`. Currently,
# our solution has an error of approximately `1e-9`:

value.(x) .- [3 // 7, 3 // 14]

# But by reducing the tolerances, we can obtain a more accurate solution:

set_attribute(model, "tol_gap_abs", 1e-32)
set_attribute(model, "tol_gap_rel", 1e-32)
optimize!(model)
value.(x) .- [3 // 7, 3 // 14]

# ## Rational arithmetic

# In addition to higher-precision floating point number types like `BigFloat`,
# JuMP also supports solvers with exact rational arithmetic. One example is
# CDDLib.jl, which supports the `Rational{BigInt}` number type:

model = GenericModel{Rational{BigInt}}(CDDLib.Optimizer{Rational{BigInt}})

# As before, we can create variables using rational bounds:

@variable(model, 1 // 7 <= x[1:2] <= 2 // 3)

#-

lower_bound(x[1])

#-

typeof(lower_bound(x[1]))

# As well as constraints:

@constraint(model, c1, (2 // 1) * x[1] + x[2] <= 1)

#-

@constraint(model, c2, x[1] + 3x[2] <= 9 // 4)

# and objective functions:

@objective(model, Max, sum(x))

# Here's the model we have built:

print(model)

# Let's solve and inspect the solution:

optimize!(model)
solution_summary(model)

# The optimal values are given in exact rational arithmetic. For this small
# problem we can check the answer by hand: the constraint `c1` and the upper
# bound on `x[2]` are active at the optimum, which gives `x = [1//6, 2//3]`:

value.(x)

#-

objective_value(model)

#-

value(c2)
13 changes: 8 additions & 5 deletions src/constraints.jl
@@ -582,8 +582,10 @@ representing the function and the `set` field contains the MOI set.
See also the [documentation](@ref Constraints) on JuMP's representation of
constraints for more background.
"""
struct ScalarConstraint{F<:AbstractJuMPScalar,S<:MOI.AbstractScalarSet} <:
AbstractConstraint
struct ScalarConstraint{
F<:Union{Number,AbstractJuMPScalar},
S<:MOI.AbstractScalarSet,
} <: AbstractConstraint
func::F
set::S
end
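The widened `F` type parameter is the key change here: a constraint function may
now be a plain `Number` as well as an `AbstractJuMPScalar`. A minimal sketch of
what this permits (not something user code normally constructs directly, and
assuming `MathOptInterface` is imported as `MOI`):

```julia
using JuMP
import MathOptInterface as MOI

# A constant function is now an admissible ScalarConstraint; this one pins the
# constant 1.0 to the set EqualTo(2.0), so it only illustrates the type.
con = ScalarConstraint(1.0, MOI.EqualTo(2.0))
typeof(con)  # ScalarConstraint{Float64, MathOptInterface.EqualTo{Float64}}
```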
@@ -617,7 +619,7 @@ See also the [documentation](@ref Constraints) on JuMP's representation of
constraints.
"""
struct VectorConstraint{
F<:AbstractJuMPScalar,
F<:Union{Number,AbstractJuMPScalar},
S<:MOI.AbstractVectorSet,
Shape<:AbstractShape,
} <: AbstractConstraint
@@ -626,7 +628,7 @@ struct VectorConstraint{
shape::Shape
end
function VectorConstraint(
func::Vector{<:AbstractJuMPScalar},
func::Vector{<:Union{Number,AbstractJuMPScalar}},
set::MOI.AbstractVectorSet,
)
if length(func) != MOI.dimension(set)
@@ -641,7 +643,7 @@ function VectorConstraint(
end

function VectorConstraint(
func::AbstractVector{<:AbstractJuMPScalar},
func::AbstractVector{<:Union{Number,AbstractJuMPScalar}},
set::MOI.AbstractVectorSet,
)
# collect() is not used here so that DenseAxisArray will work
@@ -696,6 +698,7 @@ function add_constraint(
con::AbstractConstraint,
name::String = "",
)
con = model_convert(model, con)
# The type of backend(model) is unknown so we directly redirect to another
# function.
check_belongs_to_model(con, model)
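The added `model_convert` call converts the numbers in `con` (coefficients and
set constants) to the model's value type before the constraint is passed on. A
sketch of the visible effect, reusing the `Rational{BigInt}` example from the
manual above (the output shown is what the widened types are expected to
produce):

```julia
julia> model = GenericModel{Rational{BigInt}}();

julia> @variable(model, x);

julia> c = @constraint(model, 2x <= 1);

julia> typeof(constraint_object(c).set)
MathOptInterface.LessThan{Rational{BigInt}}
```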