Replace magic constant test for linear solve on triangular matrices #5726

Merged 3 commits on Feb 8, 2014
3 changes: 3 additions & 0 deletions NEWS.md
@@ -115,6 +115,8 @@ Library improvements

* Mutating linear algebra functions no longer promote ([#5526]).

* `condskeel` for Skeel condition numbers ([#5726]).

* Sparse linear algebra

* Faster sparse `kron` ([#4958]).
@@ -233,6 +235,7 @@ Deprecated or removed
[#5475]: https://github.com/JuliaLang/julia/pull/5475
[#5526]: https://github.com/JuliaLang/julia/pull/5526
[#5538]: https://github.com/JuliaLang/julia/pull/5538
[#5726]: https://github.com/JuliaLang/julia/pull/5726

Julia v0.2.0 Release Notes
==========================
1 change: 1 addition & 0 deletions base/exports.jl
@@ -590,6 +590,7 @@ export
cholpfact!,
cholpfact,
cond,
condskeel,
cross,
ctranspose,
det,
1 change: 1 addition & 0 deletions base/linalg.jl
@@ -45,6 +45,7 @@ export
cholpfact,
cholpfact!,
cond,
condskeel,
copy!,
cross,
ctranspose,
6 changes: 6 additions & 0 deletions base/linalg/generic.jl
@@ -174,6 +174,12 @@ end
cond(x::Number) = x == 0 ? Inf : 1.0
cond(x::Number, p) = cond(x)

#Skeel condition numbers
condskeel(A::AbstractMatrix, p::Real=Inf) = norm(abs(inv(A))*abs(A), p)
condskeel{T<:Integer}(A::AbstractMatrix{T}, p::Real=Inf) = norm(abs(inv(float(A)))*abs(A), p)
condskeel(A::AbstractMatrix, x::AbstractVector, p::Real=Inf) = norm(abs(inv(A))*abs(A)*abs(x), p)
condskeel{T<:Integer}(A::AbstractMatrix{T}, x::AbstractVector, p::Real=Inf) = norm(abs(inv(float(A)))*abs(A)*abs(x), p)
Review comment from a Member:

@jiahao, why were the Integer versions needed, since inv(A) already converts to float?

Note also that abs(inv(A))*abs(A)*abs(x) should be abs(inv(A))*(abs(A)*abs(x)) in order to perform matvec and not matmul operations.

See also the discussion in #17302.
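
A minimal sketch of the parenthesization suggested above, using the same elementwise `abs`/`inv` calls as the diff (the name `condskeel_vec` is hypothetical, chosen here only to avoid clashing with the exported `condskeel`):

```julia
# Right-associating keeps the final products matrix-vector:
# abs(A)*abs(x) is an n-vector, so abs(inv(A))*(abs(A)*abs(x)) needs two
# O(n^2) mat-vec multiplies, whereas the left-associated
# abs(inv(A))*abs(A)*abs(x) first forms the full n-by-n product
# abs(inv(A))*abs(A) with an O(n^3) mat-mat multiply.
condskeel_vec(A::AbstractMatrix, x::AbstractVector, p::Real=Inf) =
    norm(abs(inv(A))*(abs(A)*abs(x)), p)
```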

Review comment from a Member:

cc @Sacha0


function issym(A::AbstractMatrix)
m, n = size(A)
m==n || return false
16 changes: 13 additions & 3 deletions doc/stdlib/linalg.rst
@@ -17,7 +17,7 @@ Linear algebra functions in Julia are largely implemented by calling functions f
.. function:: \\(A, B)
:noindex:

Matrix division using a polyalgorithm. For input matrices ``A`` and ``B``, the result ``X`` is such that ``A*X == B`` when ``A`` is square. The solver that is used depends upon the structure of ``A``. A direct solver is used for upper- or lower triangular ``A``. For Hermitian ``A`` (equivalent to symmetric ``A`` for non-complex ``A``) the BunchKaufman factorization is used. Otherwise an LU factorization is used. For rectangular ``A`` the result is the minimum-norm least squares solution computed by reducing ``A`` to bidiagonal form and solving the bidiagonal least squares problem. For sparse, square ``A`` the LU factorization (from UMFPACK) is used.
Matrix division using a polyalgorithm. For input matrices ``A`` and ``B``, the result ``X`` is such that ``A*X == B`` when ``A`` is square. The solver that is used depends upon the structure of ``A``. A direct solver is used for upper- or lower triangular ``A``. For Hermitian ``A`` (equivalent to symmetric ``A`` for non-complex ``A``) the ``BunchKaufman`` factorization is used. Otherwise an LU factorization is used. For rectangular ``A`` the result is the minimum-norm least squares solution computed by reducing ``A`` to bidiagonal form and solving the bidiagonal least squares problem. For sparse, square ``A`` the LU factorization (from UMFPACK) is used.
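
As a quick illustration of the triangular branch of that polyalgorithm (the values are illustrative only, not part of the PR):

```julia
A = [2.0 1.0;
     0.0 3.0]   # upper triangular, so `\` dispatches to a direct back-substitution
b = [3.0, 6.0]
x = A \ b       # [0.5, 2.0]
A*x == b        # true; every quantity here is exactly representable
```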

.. function:: dot(x, y)

@@ -272,7 +272,7 @@ Linear algebra functions in Julia are largely implemented by calling functions f

Compute the ``p``-norm of a vector or the operator norm of a matrix ``A``, defaulting to the ``p=2``-norm.

For vectors, ``p`` can assume any numeric value (even though not all values produce a mathematically valid vector norm). In particular, `norm(A, Inf)`` returns the largest value in ``abs(A)``, whereas ``norm(A, -Inf)`` returns the smallest.
For vectors, ``p`` can assume any numeric value (even though not all values produce a mathematically valid vector norm). In particular, ``norm(A, Inf)`` returns the largest value in ``abs(A)``, whereas ``norm(A, -Inf)`` returns the smallest.

For matrices, valid values of ``p`` are ``1``, ``2``, or ``Inf``. Use :func:`normfro` to compute the Frobenius norm.
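
A small sketch of the vector case described above (standard `Base` behaviour, not part of this diff):

```julia
v = [1.0, -3.0, 2.0]
norm(v)        # default p=2: sqrt(14.0)
norm(v, Inf)   # largest value in abs(v): 3.0
norm(v, -Inf)  # smallest value in abs(v): 1.0
```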

@@ -282,7 +282,17 @@ Linear algebra functions in Julia are largely implemented by calling functions f

.. function:: cond(M, [p])

Matrix condition number, computed using the ``p``-norm. Valid values for ``p`` are ``1``, ``2`` (default), or ``Inf``.
Condition number of the matrix ``M``, computed using the operator ``p``-norm. Valid values for ``p`` are ``1``, ``2`` (default), or ``Inf``.

.. function:: condskeel(M, [x, p])

.. math::
\kappa_S(M, p) & = & \left\Vert \left\vert M \right\vert \left\vert M^{-1} \right\vert \right\Vert_p \\
\kappa_S(M, x, p) & = & \left\Vert \left\vert M \right\vert \left\vert M^{-1} \right\vert \left\vert x \right\vert \right\Vert_p

Skeel condition number :math:`\kappa_S` of the matrix ``M``, optionally with respect to the vector ``x``, computed using the operator ``p``-norm. ``p`` defaults to ``Inf`` if not provided. Valid values for ``p`` are ``1``, ``2``, or ``Inf``.

This quantity is also known in the literature as the Bauer condition number, relative condition number, or componentwise relative condition number.
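
A brief usage sketch of the new function (the matrix is illustrative; the numbers follow directly from the definitions above):

```julia
A = [1.0 0.0;
     0.0 1.0e-6]          # badly scaled diagonal matrix
cond(A, Inf)              # ≈ 1.0e6: the classical condition number reflects the scaling
condskeel(A)              # 1.0: abs(inv(A))*abs(A) is the identity, so row scaling drops out
condskeel(A, [1.0, 1.0])  # 1.0: componentwise condition number with respect to x = [1, 1]
```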

.. function:: trace(M)

32 changes: 29 additions & 3 deletions test/linalg.jl
@@ -191,13 +191,39 @@ debug && println("Solve square general system of equations")
x = a \ b
@test_approx_eq_eps a*x b 80ε

debug && println("Solve upper trianguler system")
debug && println("Solve upper triangular system")
x = triu(a) \ b
@test_approx_eq_eps triu(a)*x b 20000ε

#Test forward error [JIN 5705] if this is not a BigFloat
γ = n*ε/(1-n*ε)
if eltya != BigFloat
    bigA = big(triu(a))
    x̂ = bigA \ b
    for i=1:size(b, 2)
        @test norm(x̂[:,i]-x[:,i], Inf)/norm(x[:,i], Inf) <= abs(condskeel(bigA, x[:,i])*γ/(1-condskeel(bigA)*γ))
    end
end
#Test backward error [JIN 5705]
for i=1:size(b, 2)
    @test norm(abs(b[:,i] - triu(a)*x[:,i]), Inf) <= γ * norm(triu(a), Inf) * norm(x[:,i], Inf)
end

debug && println("Solve lower triangular system")
x = tril(a)\b
@test_approx_eq_eps tril(a)*x b 10000ε

#Test forward error [JIN 5705] if this is not a BigFloat
γ = n*ε/(1-n*ε)
if eltya != BigFloat
    bigA = big(tril(a))
    x̂ = bigA \ b
    for i=1:size(b, 2)
        @test norm(x̂[:,i]-x[:,i], Inf)/norm(x[:,i], Inf) <= abs(condskeel(bigA, x[:,i])*γ/(1-condskeel(bigA)*γ))
    end
end
#Test backward error [JIN 5705]
for i=1:size(b, 2)
    @test norm(abs(b[:,i] - tril(a)*x[:,i]), Inf) <= γ * norm(tril(a), Inf) * norm(x[:,i], Inf)
end
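
Restated in math form, the two inequalities that replace the old magic tolerances above are (following the code's names: `x` is the computed solution, `x̂` the BigFloat reference, :math:`\kappa_S` the Skeel condition number, and γ = nε/(1 − nε); this is only a restatement of the assertions, not an independent derivation):

```latex
\frac{\lVert \hat{x} - x \rVert_\infty}{\lVert x \rVert_\infty}
  \;\le\; \frac{\kappa_S(A, x)\,\gamma}{1 - \kappa_S(A)\,\gamma}
  \quad\text{(forward error)},
\qquad
\lVert b - A x \rVert_\infty \;\le\; \gamma\,\lVert A \rVert_\infty\,\lVert x \rVert_\infty
  \quad\text{(backward error)}.
```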

debug && println("Test null")
if eltya != BigFloat && eltyb != BigFloat # Revisit when implemented in julia