
Implement generic operator type #265

Merged
david-pl merged 14 commits into master from generic-operator on May 15, 2020
Conversation

@david-pl (Member) commented Apr 8, 2020

This replaces DenseOperator and SparseOperator with a generic Operator type that has a corresponding .data field. The important point is that this new type allows arbitrary types implementing Julia's AbstractArray interface in its .data field (the same goes for SuperOperator types). For usage with timeevolution, the data type needs to provide a 5-arg mul! implementation. Similarly, StateVector types now allow <:AbstractVector data. This is my answer to #236, but it also has some other consequences, such as:

  1. Lazy adjoints:
    The adjoint of an Operator is simply an Operator with a data field <: Adjoint, which is lazy. Things should therefore be slightly more memory efficient. Note that the same does not apply to StateVector types (Bra is not the lazy adjoint of Ket). The way they are implemented makes it a bit tricky to use lazy adjoints, and I'm not sure whether it makes a lot of sense. If we decide to do this, it should be the subject of a different PR. (A short sketch of points 1 and 2 follows this list.)

  2. Sparse density operators:
    timeevolution.master now merely requires the state rho to be of type Operator{<:Basis,<:Basis}, so the data here can be sparse. Note, however, that this will be quite slow due to the in-place updating of sparse matrices.

  3. Support for CUDA (CuArrays):
    Since CuArray<:AbstractArray one can now use them within the scope of QO.jl like so:

# Assumes b, H, ψ₀ and tspan come from a standard CPU setup, and `cu` from CuArrays
H_cu = Operator(b, cu(ComplexF32.(H.data)))
psi0_cu = Ket(b, cu(ComplexF32.(ψ₀.data)))
tspan_cu = Float32.(tspan, RoundDown)
tout_cu_se, ψt_cu = timeevolution.schroedinger(tspan_cu, psi0_cu, H_cu)

However, this "feature" should be considered experimental (I haven't had the opportunity to test it properly yet). Another open question here is a short-hand function for the creation of such Operators and States, e.g. a dispatch on cu. But I don't know how to do that without adding CuArrays as a dependency, which I would like to avoid.

  4. LinearMaps and LazyArrays can be used:
    Similar to the already implemented FFTOperator and lazy operators (LazySum, LazyProduct, LazyTensor), one can now also use LinearMap and LazyArray types in the Operator data field. At some point down the line, we may want to consider replacing our lazy types with these.
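
To make points 1 and 2 above concrete, here is a minimal sketch (assuming the two-argument (basis, data) constructors keep their pre-PR behaviour; this is illustrative, not taken from the PR's test suite):

using QuantumOptics, LinearAlgebra, SparseArrays

b = FockBasis(10)
a = destroy(b)

# Point 1: the adjoint is lazy, it just wraps the data in an Adjoint
dagger(a).data isa Adjoint   # true

# Arbitrary AbstractArray data works, e.g. a Diagonal number operator
n_op = Operator(b, Diagonal(ComplexF64.(0:10)))

# Point 2: a density operator with sparse data in the master equation (slow, but possible)
rho0 = SparseOperator(b, sparse(dm(fockstate(b, 0)).data))
tout, rhot = timeevolution.master([0.0, 1.0], rho0, dagger(a)*a, [a])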

Another nice thing is that after this PR is merged we can implement the efficient iterative steady-state solver (cf. #252) without compromising on any features.

Note that this will probably break dependent packages, since any dispatch on ::SparseOperator and ::DenseOperator will error. However, there are still functions called SparseOperator and DenseOperator which return an Operator with a corresponding data field, similar to before. Also, there are short-hand constants for Operator types with sparse and dense data, called SparseOpType and DenseOpType, respectively (again, the same goes for SparseSuperOperator and DenseSuperOperator).
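
As a quick sketch of the new dispatch pattern (again assuming the (basis, data) constructor signatures stay as before; the names here are just for illustration):

using QuantumOptics, SparseArrays

b = FockBasis(5)  # 6-dimensional

# SparseOperator/DenseOperator are now functions returning an Operator with the corresponding data
op_dense  = DenseOperator(b, rand(ComplexF64, 6, 6))
op_sparse = SparseOperator(b, sparse(rand(ComplexF64, 6, 6)))

# Dispatch on ::DenseOperator / ::SparseOperator errors after this PR; use the aliases instead
describe(x::DenseOpType)  = "Operator with dense data"
describe(x::SparseOpType) = "Operator with sparse data"

describe(op_dense)   # "Operator with dense data"
describe(op_sparse)  # "Operator with sparse data"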

There are still more tests to be done before merging this. Also, the examples and the documentation need to be updated.

@PhilipVinc (Contributor)

For converting to CuArrays, I think the correct thing to do is to depend on Adapt (a lightweight dependency) and define Adapt.adapt_storage.
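
A rough sketch of that approach (the field names basis_l, basis_r, basis and data are assumed here; for wrapper structs like Operator the recursion hook is typically Adapt.adapt_structure, with adapt_storage covering the leaf arrays):

using QuantumOptics, Adapt

# Recurse into the wrappers: adapt only the data field, keep the bases as-is
Adapt.adapt_structure(to, x::Operator) = Operator(x.basis_l, x.basis_r, Adapt.adapt(to, x.data))
Adapt.adapt_structure(to, x::Ket) = Ket(x.basis, Adapt.adapt(to, x.data))

# With CuArrays loaded on the user's side, cu(op) / adapt(CuArray, op) should then move the
# data to the GPU without QuantumOptics itself depending on CuArrays.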

@PhilipVinc (Contributor)

I was skimming through your code.
Why do you still use your own gemv! for sparse-dense multiplication? Can't we simply dispatch to Julia's 5-arg mul!?

@david-pl (Member, Author) commented Apr 9, 2020

The reason is efficiency. It's easy to dispatch to Julia's mul!: you just remove the specific methods for SparseOpType in operators_sparse.jl. But comparing the two, for example in a simple Jaynes-Cummings model:

using QuantumOptics
using BenchmarkTools
using LinearAlgebra

N = 50
b_cavity = FockBasis(N-1)
b_atom = SpinBasis(1//2)
b = b_cavity ⊗ b_atom
a = destroy(b_cavity) ⊗ one(b_atom)
s = one(b_cavity) ⊗ sigmam(b_atom)

H = a'*a + s'*s + a'*s + a*s' + a + a'
rho = dm(fockstate(b_cavity, 0) ⊗ spindown(b_atom))
drho = copy(rho)

you get:

julia> @benchmark QuantumOpticsBase.mul!($drho,$H,$rho)
BenchmarkTools.Trial: 
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     43.048 μs (0.00% GC)
  median time:      50.777 μs (0.00% GC)
  mean time:        51.283 μs (0.00% GC)
  maximum time:     128.125 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1

julia> @benchmark mul!($drho.data,$H.data,$rho.data)
BenchmarkTools.Trial: 
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     52.919 μs (0.00% GC)
  median time:      55.577 μs (0.00% GC)
  mean time:        56.674 μs (0.00% GC)
  maximum time:     171.759 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1

I still need to check for very large matrices, but leaving things like this for now at least doesn't introduce any performance regressions.
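
For reference, the generic alternative being discussed, dispatching straight to Julia's 5-arg mul!, would roughly look like the following fallback (a sketch; the actual methods live in QuantumOpticsBase and may differ in detail):

using QuantumOptics, LinearAlgebra

# Delegate to the data types' own 5-arg mul! instead of the hand-written
# sparse-dense gemv!/gemm! kernels; this is what removing the specialized
# SparseOpType methods in operators_sparse.jl would fall back to.
function LinearAlgebra.mul!(result::Operator, a::Operator, b::Operator, alpha::Number, beta::Number)
    mul!(result.data, a.data, b.data, alpha, beta)
    return result
end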

@codecov bot commented May 15, 2020

Codecov Report

Merging #265 into master will decrease coverage by 0.34%.
The diff coverage is 97.40%.


@@            Coverage Diff             @@
##           master     #265      +/-   ##
==========================================
- Coverage   98.49%   98.15%   -0.35%     
==========================================
  Files          16       16              
  Lines        1131     1139       +8     
==========================================
+ Hits         1114     1118       +4     
- Misses         17       21       +4     
Impacted Files Coverage Δ
src/master.jl 96.87% <91.83%> (-3.13%) ⬇️
src/stochastic_master.jl 96.87% <95.00%> (ø)
src/bloch_redfield_master.jl 95.50% <100.00%> (+0.05%) ⬆️
src/mcwf.jl 100.00% <100.00%> (ø)
src/phasespace.jl 100.00% <100.00%> (ø)
src/schroedinger.jl 88.00% <100.00%> (ø)
src/semiclassical.jl 96.42% <100.00%> (+0.04%) ⬆️
src/spectralanalysis.jl 100.00% <100.00%> (ø)
src/steadystate.jl 90.00% <100.00%> (ø)
src/stochastic_base.jl 100.00% <100.00%> (ø)
... and 5 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update e5188a1...9f3f190.

@coveralls

Coverage Status

Coverage decreased (-0.4%) to 98.649% when pulling 9f3f190 on generic-operator into e5188a1 on master.

1 similar comment

@david-pl david-pl merged commit ac0d2fd into master May 15, 2020
@david-pl david-pl deleted the generic-operator branch July 15, 2021 08:18