
Switch testing to use HiGHS.jl #133

Merged: vtjeng merged 1 commit into master on Feb 17, 2023
Conversation

@vtjeng (Owner) commented on Feb 17, 2023

Cbc cannot be properly silenced (jump-dev/Cbc.jl#168); using HiGHS avoids the need for the workaround in #131 of using optimize_silent!.

Diff excluding #131 for easier comparison: 243ec69...vtjeng/silence-output-w-HiGHS.

This increases test times slightly, but avoiding the log spam without additional complexity is worth it.
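With HiGHS, silencing works through JuMP's standard `set_silent` call, so tests can use plain `optimize!`. A minimal sketch (the toy model here is hypothetical, just to illustrate the call sequence):

```julia
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
set_silent(model)             # suppresses solver output; works reliably with HiGHS
@variable(model, 0 <= x <= 1)
@objective(model, Max, x)
optimize!(model)              # no log spam, no optimize_silent! wrapper needed
```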

@vtjeng force-pushed the vtjeng/silence-output-w-HiGHS branch from 9d240aa to 90245b0 on February 17, 2023 07:06
@vtjeng vtjeng enabled auto-merge (squash) February 17, 2023 07:24
@vtjeng vtjeng merged commit 9383d59 into master Feb 17, 2023
@vtjeng vtjeng deleted the vtjeng/silence-output-w-HiGHS branch February 17, 2023 07:25
@odow commented on Feb 17, 2023

Nice

> This increases test times slightly

Do solve times noticeably increase? Or is this some other difference?

@vtjeng (Owner, Author) commented on Feb 18, 2023

We verified that solve times do increase noticeably, particularly for https://github.com/vtjeng/MIPVerify.jl/blob/c73137e4e8ef2366633768de854b9198ed9576df/test/batch_processing_helpers/integration.jl

Our comparison runs:

Each `julia-actions/julia-runtest@v1` block contains a timing table like the following at the end:

────────────────────────────────────────────────────────────────────────────────
                                        Time                    Allocations      
                               ───────────────────────   ────────────────────────
       Tot / % measured:             415s /  99.3%           19.8GiB /  98.8%    

 Section               ncalls     time    %tot     avg     alloc    %tot      avg
 ────────────────────────────────────────────────────────────────────────────────
 batch_processing_h...      1     144s   35.1%    144s   4.19GiB   21.4%  4.19GiB
   integration.jl           1     140s   34.0%    140s   3.42GiB   17.5%  3.42GiB
   unit.jl                  1    4.29s    1.0%   4.29s    791MiB    4.0%   791MiB
 [...]
 ────────────────────────────────────────────────────────────────────────────────

This is measured via:

macro timed_testset(name::String, block)
    # copied from https://github.com/KristofferC/Tensors.jl/blob/master/test/runtests.jl#L8
    return quote
        @timeit "$($(esc(name)))" begin
            @testset "$($(esc(name)))" begin
                $(esc(block))
            end
        end
    end
end
and a final call to TestHelpers.print_timer().
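For reference, a self-contained sketch of how this macro is used (assumes the Test and TimerOutputs packages; the test name and assertion are placeholders):

```julia
using Test, TimerOutputs

# Same macro as above: wraps a @testset in a @timeit block so each
# testset's wall time and allocations appear in the timing table.
macro timed_testset(name::String, block)
    return quote
        @timeit "$($(esc(name)))" begin
            @testset "$($(esc(name)))" begin
                $(esc(block))
            end
        end
    end
end

@timed_testset "example" begin
    @test 1 + 1 == 2
end

print_timer()   # prints a table like the one above, with an "example" row
```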

While there is some variability in I/O (we download a dataset as part of the tests), the differences are large enough to be meaningful.

NOTE: Each "solve" consists of multiple calls to optimize! to determine tight bounds on the inputs to nodes within the neural network (see code), plus a single optimize! call at the end. I haven't spent time profiling where the bulk of the time is spent.
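The bounds-tightening pattern described in the note can be sketched as follows (a hypothetical helper, not the package's actual implementation):

```julia
using JuMP

# For an intermediate expression `expr` in a partially built model, solve
# two auxiliary problems to find the tightest lower and upper bounds.
# Each call costs two full solves, which is why a "solve" involves many
# optimize! calls before the final one.
function tight_bounds(model::Model, expr)
    @objective(model, Min, expr)
    optimize!(model)
    lo = objective_value(model)

    @objective(model, Max, expr)
    optimize!(model)
    hi = objective_value(model)

    return lo, hi
end
```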

Time in batch_processing_h...

Narrowing it down to one test file, https://github.com/vtjeng/MIPVerify.jl/blob/master/test/batch_processing_helpers.jl shows a big difference; the primary change is the time to process https://github.com/vtjeng/MIPVerify.jl/blob/master/test/batch_processing_helpers/integration.jl:

| Job Name | Cbc.jl / s | HiGHS.jl / s |
| --- | --- | --- |
| Julia 1.6 - macos-latest | 39.8 | 133 |
| Julia 1.6 - ubuntu-latest | 34.1 | 144 |
| Julia 1.6 - windows-latest | 34.0 | 115 |
| Julia 1 - macos-latest | 67.0 | 150 |
| Julia 1 - ubuntu-latest | 51.8 | 126 |
| Julia 1 - windows-latest | 52.6 | 132 |

Additional Stats

Job Time

| Job Name | Cbc.jl / s | HiGHS.jl / s |
| --- | --- | --- |
| Julia 1.6 - macos-latest | 675 | 752 |
| Julia 1.6 - ubuntu-latest | 501 | 726 |
| Julia 1.6 - windows-latest | 839 | 763 |
| Julia 1 - macos-latest | 661 | 807 |
| Julia 1 - ubuntu-latest | 512 | 596 |
| Julia 1 - windows-latest | 692 | 1089 |

Time in Test

| Job Name | Cbc.jl / s | HiGHS.jl / s |
| --- | --- | --- |
| Julia 1.6 - macos-latest | 300 | 417 |
| Julia 1.6 - ubuntu-latest | 259 | 415 |
| Julia 1.6 - windows-latest | 222 | 344 |
| Julia 1 - macos-latest | 305 | 441 |
| Julia 1 - ubuntu-latest | 278 | 349 |
| Julia 1 - windows-latest | 270 | 348 |

@odow commented on Feb 19, 2023

Interesting. Can you dump an MPS file that has a significantly different runtime between Cbc and HiGHS?

My guess is that these sorts of models are not part of MIPLIB, and so they don't show up in the benchmarking that the HiGHS team is using to guide performance.
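For anyone following along: in JuMP, dumping a model to an MPS file for this kind of standalone comparison can be done with `write_to_file` (a minimal sketch; `model` stands for an already-built JuMP model):

```julia
using JuMP

# The MPS format is inferred from the ".mps" extension.
write_to_file(model, "model.mps")

# ...and read back later for a standalone reproduction:
model2 = read_from_file("model.mps")
```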

@odow commented on Feb 19, 2023

Regardless, HiGHS is simpler and maintained, so it's worth using over Cbc irrespective of performance.

@vtjeng (Owner, Author) commented on Feb 24, 2023

> Regardless, HiGHS is simpler and maintained, so it's worth using over Cbc irrespective of performance.

Yes, it doesn't increase the absolute test time significantly (both give me enough time to make coffee after running the command). Furthermore, I'm happy to not have to write all the wrapper code to silence Cbc.

@vtjeng (Owner, Author) commented on Feb 24, 2023

> Interesting. Can you dump an MPS file that has a significantly different runtime between Cbc and HiGHS?
>
> My guess is that these sorts of models are not part of MIPLIB, and so they don't show up in the benchmarking that the HiGHS team is using to guide performance.

Here's an MPS file showing a significant difference: https://gist.github.com/vtjeng/991f17f1ad375def5951185ce89ab79a (Cbc v1.0.3, HiGHS v1.4.3, JuMP v1.8.1, Julia 1.7.1)

On my machine:

julia> model = read_from_file("nn_wk17a_sample9.mps")

A JuMP Model
Minimization problem with:
Variables: 2629
Objective function type: AffExpr
`AffExpr`-in-`MathOptInterface.EqualTo{Float64}`: 784 constraints
`AffExpr`-in-`MathOptInterface.GreaterThan{Float64}`: 2628 constraints
`AffExpr`-in-`MathOptInterface.LessThan{Float64}`: 277 constraints
`VariableRef`-in-`MathOptInterface.Interval{Float64}`: 2490 constraints
`VariableRef`-in-`MathOptInterface.ZeroOne`: 138 constraints
Model mode: AUTOMATIC
CachingOptimizer state: NO_OPTIMIZER
Solver name: No optimizer attached.

julia> set_optimizer(model, HiGHS.Optimizer)
julia> set_silent(model)
julia> optimize!(model)
julia> model_2 = read_from_file("nn_wk17a_sample9_Cbc.mps")

[...]

julia> optimize!(model_2)

julia> solution_summary(model)
* Solver : HiGHS

* Status
  Result count       : 1
  Termination status : OPTIMAL
  Message from the solver:
  "kHighsModelStatusOptimal"

* Candidate solution (result #1)
  Primal status      : FEASIBLE_POINT
  Dual status        : NO_SOLUTION
  Objective value    : 9.40059e-02
  Objective bound    : 9.40000e-02
  Relative gap       : 6.25160e-05

* Work counters
  Solve time (sec)   : 5.59248e+01
  Simplex iterations : -1
  Barrier iterations : -1
  Node count         : 17


julia> solution_summary(model_2)
* Solver : COIN Branch-and-Cut (Cbc)

* Status
  Result count       : 1
  Termination status : OPTIMAL
  Message from the solver:
  "Cbc_status          = finished - check isProvenOptimal or isProvenInfeasible to see if solution found (or check value of best solution)
Cbc_secondaryStatus = search completed with solution
"

* Candidate solution (result #1)
  Primal status      : FEASIBLE_POINT
  Dual status        : NO_SOLUTION
  Objective value    : 9.40012e-02
  Objective bound    : 9.40012e-02
  Relative gap       : 0.00000e+00

* Work counters
  Solve time (sec)   : 5.59380e+00
  Node count         : 98

That's roughly an order-of-magnitude difference in solve times (~5.6s for Cbc vs. ~55.9s for HiGHS).

One possibility I was wondering about is whether this is a numerical-tolerance issue. This package works with neural networks, and the constraints for my problem are generated by having the solver minimize/maximize the values of intermediate nodes within the network, so the constraint coefficients on variables look like this:

    x857      c231_1    -0.0007599326613403276
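One way to probe the tolerance hypothesis would be to re-run the solve with a tightened MIP gap and presolve forced on (a sketch; `mip_rel_gap` and `presolve` are HiGHS option names, and the filename matches the gist above):

```julia
using JuMP, HiGHS

model = read_from_file("nn_wk17a_sample9.mps")
set_optimizer(model, HiGHS.Optimizer)
set_optimizer_attribute(model, "mip_rel_gap", 0.0)  # match Cbc's 0% final gap
set_optimizer_attribute(model, "presolve", "on")
optimize!(model)
```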
