`LinearAlgebra.norm(x)` falls back to generic implementation for `x::Transpose` and `x::Adjoint` #1782
Comments
Is there a reason Base doesn't have `norm` dispatch to the parent array for `Adjoint`/`Transpose`?
I'll look into it, but it's possible there is some trickiness involved. With that being said, there is already a similar definition, so maybe this should be a PR here.
A more general fix has been merged into base now 👍🏻. I'm not sure what, if anything, you would like to do about this issue in the meantime. The patch proposed above could, for example, be defined only if the Julia version predates that fix.
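(For illustration, such a version gate could look like the sketch below; `v"1.10"` is a placeholder cutoff, not the actual version, which was still unknown at this point in the thread:)

```julia
using LinearAlgebra
using CUDA: DenseCuArray

# Hypothetical gate: only define the workaround on Julia versions
# that predate the upstream fix (placeholder cutoff version).
@static if VERSION < v"1.10"
    LinearAlgebra.norm(x::Transpose{<:Any,<:DenseCuArray}, p::Real=2) = norm(parent(x), p)
    LinearAlgebra.norm(x::Adjoint{<:Any,<:DenseCuArray}, p::Real=2) = norm(parent(x), p)
end
```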
Yeah, that'd be a good workaround. It would probably belong in GPUArrays though, where we define these generic methods.
Ahh I see, yes that makes more sense; best to be as generic as possible. I'm not sure which Julia version will be the cutoff, but I can ask.
This seems to work now.
Describe the bug
`LinearAlgebra.norm(x)` falls back to the generic implementation for `x::Transpose` and `x::Adjoint`, as the following method is only defined on `DenseCuArray`s:

CUDA.jl/lib/cublas/linalg.jl, lines 106 to 112 in a4ddc54
To reproduce
The Minimal Working Example (MWE) for this bug:
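The original snippet is collapsed in the rendered issue; a minimal sketch consistent with the report (the array size and element type here are assumptions) would be:

```julia
using CUDA, LinearAlgebra

x = CUDA.rand(Float32, 4, 4)  # any DenseCuArray

norm(x)             # dispatches to the CUBLAS-backed method
norm(transpose(x))  # x::Transpose falls back to the generic implementation
norm(x')            # x::Adjoint does too
```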
Manifest.toml
Expected behavior
The norm of the parent array should be returned, since `norm(x, p) == norm(parent(x), p)` for `x::Transpose` or `x::Adjoint` with real or complex elements (for an `Adjoint`, `abs(conj(z)) == abs(z)`, so conjugation doesn't change the norm). For example, defining the following methods fixes the issue:
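The proposed methods are collapsed in the rendered issue; based on the surrounding description, they presumably forward to the parent array, along these lines (the exact signatures are an assumption):

```julia
using LinearAlgebra
using CUDA: DenseCuArray

# Forward to the parent array, which hits the CUBLAS-backed method;
# the norm is invariant under transposition and conjugation for every p.
LinearAlgebra.norm(x::Transpose{<:Any,<:DenseCuArray}, p::Real=2) = norm(parent(x), p)
LinearAlgebra.norm(x::Adjoint{<:Any,<:DenseCuArray}, p::Real=2) = norm(parent(x), p)
```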
Version info
Details on Julia:
Details on CUDA:
Additional context
I would be happy to make a quick PR with the above patch if it seems reasonable.