Constraint evaluation behavior for unconstrained problems #464

Open
tmigot opened this issue Jun 1, 2024 · 3 comments · May be fixed by #465
Comments

@tmigot
Member

tmigot commented Jun 1, 2024

This is a corner case where we should probably decide on the expected behavior.

We have several ways to access the constraint functions: cons!/cons, cons_nln and cons_lin, and objcons. How should these behave when applied to an unconstrained problem? I see three options:

  • Business as usual: evaluate cons (hoping it doesn't break if the NLPModel has not implemented this function, but that is a normal error in this case) and increase the counters.
  • Increase the counters (because the user called the function), but try to skip the evaluation:
function cons!(nlp::AbstractNLPModel, x::AbstractVector, cx::AbstractVector)
  @lencheck nlp.meta.nvar x
  @lencheck nlp.meta.ncon cx
  increment!(nlp, :neval_cons)
  nlp.meta.nlin > 0 && cons_lin!(nlp, x, view(cx, nlp.meta.lin))
  nlp.meta.nnln > 0 && cons_nln!(nlp, x, view(cx, nlp.meta.nln))
  return cx
end
  • Ignore the call to cons completely when the problem is unconstrained (neither count nor evaluate):
function cons!(nlp::AbstractNLPModel, x::AbstractVector, cx::AbstractVector)
  @lencheck nlp.meta.nvar x
  @lencheck nlp.meta.ncon cx
  nlp.meta.ncon > 0 && increment!(nlp, :neval_cons)
  nlp.meta.nlin > 0 && cons_lin!(nlp, x, view(cx, nlp.meta.lin))
  nlp.meta.nnln > 0 && cons_nln!(nlp, x, view(cx, nlp.meta.nln))
  return cx
end
  • Another option?

A related question is: How should objcons react to this situation?

Connected to JuliaSmoothOptimizers/NLPModelsTest.jl#26 and JuliaSmoothOptimizers/CUTEst.jl#327
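
For reference, here is a minimal sketch of the corner case (assuming ADNLPModels is available; the behavior of the last two calls is exactly what is in question):

using ADNLPModels, NLPModels

nlp = ADNLPModel(x -> sum(x .^ 2), ones(2))  # unconstrained model: ncon == 0

obj(nlp, nlp.meta.x0)    # fine
cons(nlp, nlp.meta.x0)   # corner case: error, empty vector, or something else?
neval_cons(nlp)          # should this counter have been incremented?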

@dpo
Member

dpo commented Jun 1, 2024

If a solver tries to solve an unconstrained problem and, in doing so, evaluates the constraints, there is a serious issue with that solver. We should not encourage that kind of behavior. I think cons!() should return an error in that case.
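
For illustration, a rough sketch of what such an erroring cons! could look like (the placement of the check and the error message are placeholders, not a decided API):

function cons!(nlp::AbstractNLPModel, x::AbstractVector, cx::AbstractVector)
  # placeholder error; the actual exception type is still to be decided
  nlp.meta.ncon == 0 && error("cons! is not defined for unconstrained problems")
  @lencheck nlp.meta.nvar x
  @lencheck nlp.meta.ncon cx
  increment!(nlp, :neval_cons)
  nlp.meta.nlin > 0 && cons_lin!(nlp, x, view(cx, nlp.meta.lin))
  nlp.meta.nnln > 0 && cons_nln!(nlp, x, view(cx, nlp.meta.nln))
  return cx
end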

For objcons, the solution I see is easy: we should have objcons <=> obj for unconstrained problems.
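
One possible reading of "objcons <=> obj", sketched on objcons! (just an interpretation, not a decided design):

function objcons!(nlp::AbstractNLPModel, x::AbstractVector, c::AbstractVector)
  @lencheck nlp.meta.nvar x
  @lencheck nlp.meta.ncon c
  # evaluate the constraints only if there are any; otherwise behave like obj
  nlp.meta.ncon > 0 && cons!(nlp, x, c)
  return obj(nlp, x), c
end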

@tmigot
Member Author

tmigot commented Jun 9, 2024

I agree with the first point: let's add a new error when the constraints are called on an unconstrained problem, and error for the linear and nonlinear constraints too.
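
For illustration, a hypothetical sketch of such a dedicated error (the type name and message are made up here):

struct UnconstrainedProblemError <: Exception
  fname::Symbol   # the API function that was called, e.g. :cons!
end

Base.showerror(io::IO, e::UnconstrainedProblemError) =
  print(io, "$(e.fname) is not available for unconstrained problems")

# cons!, cons_lin! and cons_nln! would then start with something like
#   nlp.meta.ncon == 0 && throw(UnconstrainedProblemError(:cons!))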

I have a more mixed opinion on objcons. Shouldn't this return an error too? Because it is obj and cons; in my mind this function just tries to optimize both calls depending on the model, but essentially it is "similar" to calling both functions. What I am trying to say is that it feels misleading to call objcons and ignore cons.

If we keep the version that ignores cons, then we need to update the docstring, which right now says:

Evaluate ``f(x)`` and ``c(x)`` at `x`.

https://github.com/JuliaSmoothOptimizers/NLPModels.jl/blob/bfd4ceafec9ad6f58cbf11a4a88f2e61767ce701/src/nlp/api.jl#L117C1-L117C39

@dpo
Member

dpo commented Jun 14, 2024

Yes, I think objcons should also error if the problem is unconstrained.
