tril_to_full! function in QR and LU algorithms error on GPU #181

Closed
dlcole3 opened this issue Jun 28, 2022 · 3 comments · Fixed by #190
dlcole3 commented Jun 28, 2022

Currently, I get an error when running with MadNLPLapackGPU.QR or MadNLPLapackGPU.LU. The tril_to_full!() function is only defined for objects of type Matrix, so it does not work with CuArray types on the GPU. An example script that reproduces the error is shown below.

using MadNLP, ADNLPModels, MadNLPGPU, CUDA, NLPModels

function MadNLP.jac_dense!(nls, x, jac) 
    NLPModels.increment!(nls, :neval_jac)
    copyto!(jac, zeros(size(jac)))
end

function MadNLP.hess_dense!(nls, x, w1l, hess; obj_weight = 1.0)
    NLPModels.increment!(nls, :neval_hess)
    copyto!(hess, [0 0; -20 0])
end

F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
x0 = [-1.2; 1.0]
nls = ADNLSModel(F, x0, 2)

madnlp_options = Dict{Symbol, Any}(
    :kkt_system => MadNLP.DENSE_CONDENSED_KKT_SYSTEM,
    :linear_solver => MadNLPLapackGPU,
    :print_level => MadNLP.DEBUG,
)

linear_solver_options = Dict{Symbol, Any}(
    :lapackgpu_algorithm => MadNLPLapackGPU.QR,
)

TKKTGPU = MadNLP.DenseCondensedKKTSystem{Float64, CuVector{Float64}, CuMatrix{Float64}}
opt = MadNLP.Options(; madnlp_options...)
ips = MadNLP.InteriorPointSolver{TKKTGPU}(nls, opt; option_linear_solver=copy(linear_solver_options))
MadNLP.optimize!(ips)
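One possible direction for the tril_to_full! error itself (an editorial sketch, not from this thread: the function name comes from the issue title, but the exact MadNLP signature and internals are assumptions) is to dispatch on AbstractMatrix and use array-level operations instead of scalar indexing, which CuArray forbids by default:

```julia
using LinearAlgebra

# Hedged sketch: mirror the strict lower triangle into the upper triangle
# using tril/transpose, which CUDA.jl supports on CuMatrix, so the same
# method covers both Matrix and CuArray.
function tril_to_full!(A::AbstractMatrix)
    A .= tril(A) .+ transpose(tril(A, -1))
    return A
end
```

On a CPU Matrix this reproduces the symmetric fill (e.g. [1 0; 3 4] becomes [1 3; 3 4]); whether it is acceptable for MadNLP's performance requirements (it allocates two temporaries) is a separate question.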
sshin23 commented Jul 1, 2022

Even madnlp(nls) doesn't work, and it seems that the issue is caused by

get_nnzj(nls) != get_nnzj(nls.meta)

@dpo @abelsiqueira is this expected behavior? If a solver wants to solve an NLS problem as a generic NLP, does the solver need to query nnzj and nnzh from nls.meta rather than from nls?

dpo commented Jul 1, 2022

@sshin23 get_nnzj(nls) is the same as get_nnzj(nls.nls_meta), which should return the number of nonzeros in the Jacobian of the residual:
https://github.com/JuliaSmoothOptimizers/NLPModels.jl/blob/main/src/nls/tools.jl#L11

By contrast, get_nnzj(nls.meta) would return the number of nonzeros in the Jacobian of the constraints (not counting bound constraints), if your NLSModel had any. In addition, nnzj was recently split into lin_nnzj and nln_nnzj (linear/nonlinear constraints).
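The distinction can be checked directly on the model from the original script (a sketch; the actual counts depend on how ADNLPModels builds the sparsity pattern, so none are asserted here):

```julia
using ADNLPModels, NLPModels

F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]
nls = ADNLSModel(F, [-1.2; 1.0], 2)

# Dispatches to nls.nls_meta: nonzeros in the Jacobian of the residual F.
NLPModels.get_nnzj(nls)

# Nonzeros in the Jacobian of the constraints; this model has none,
# so the two calls disagree, which is what tripped up MadNLP above.
NLPModels.get_nnzj(nls.meta)
```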

sshin23 commented Jul 1, 2022

Thanks, @dpo

Then MadNLP should always query nnz information from nlp.meta. @dlcole3 feel free to create a PR with the fix and add this case to MadNLPTest.
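In code, the suggested fix amounts to querying the meta object explicitly (a hedged sketch; nlp stands for whatever model MadNLP receives, and the surrounding MadNLP internals are not shown):

```julia
using NLPModels

# Querying nlp.meta gives the constraint-Jacobian and Lagrangian-Hessian
# counts for the problem seen as a generic NLP, even when nlp is an NLS
# model whose get_nnzj(nlp) would report residual-Jacobian counts instead.
nnzj = NLPModels.get_nnzj(nlp.meta)
nnzh = NLPModels.get_nnzh(nlp.meta)
```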
