Implement generic operator type #265
Conversation
For converting to CuArrays, I think the correct thing to do is to depend on Adapt (a lightweight dependency) and define `Adapt.adapt_storage`.

I was skimming through your code.
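As a rough sketch of that Adapt suggestion (the wrapper type and field names here follow the PR description, not the actual source, so treat them as assumptions), rebuilding an operator around adapted storage could look like:

```julia
using Adapt

# Hypothetical wrapper mirroring the PR's generic Operator layout:
# basis information plus an AbstractArray `.data` field.
struct MyOperator{BL,BR,T<:AbstractArray}
    basis_l::BL
    basis_r::BR
    data::T
end

# Rebuild the wrapper around adapted storage; Adapt recurses into `data`,
# so e.g. `adapt(CuArray, op)` would return a MyOperator holding a CuArray
# without QO.jl ever depending on CuArrays itself.
Adapt.adapt_structure(to, op::MyOperator) =
    MyOperator(op.basis_l, op.basis_r, adapt(to, op.data))
```

This keeps the GPU dependency on the user's side, which matches the goal of avoiding CuArrays as a hard dependency.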
The reason is efficiency. It's easy to dispatch to Julia's `mul!`. With

```julia
using QuantumOptics
using BenchmarkTools
using LinearAlgebra

N = 50
b_cavity = FockBasis(N-1)
b_atom = SpinBasis(1//2)
b = b_cavity ⊗ b_atom
a = destroy(b_cavity) ⊗ one(b_atom)
s = one(b_cavity) ⊗ sigmam(b_atom)
H = a'*a + s'*s + a'*s + a*s' + a + a'
rho = dm(fockstate(b_cavity, 0) ⊗ spindown(b_atom))
drho = copy(rho)
```

you get:

```julia
julia> @benchmark QuantumOpticsBase.mul!($drho,$H,$rho)
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     43.048 μs (0.00% GC)
  median time:      50.777 μs (0.00% GC)
  mean time:        51.283 μs (0.00% GC)
  maximum time:     128.125 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1

julia> @benchmark mul!($drho.data,$H.data,$rho.data)
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     52.919 μs (0.00% GC)
  median time:      55.577 μs (0.00% GC)
  mean time:        56.674 μs (0.00% GC)
  maximum time:     171.759 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1
```

I still need to check for very large matrices, but leaving things like this for now at least doesn't introduce any performance regressions.
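The forwarding pattern being benchmarked can be sketched as follows. This is an illustration, not the PR's actual code: `WrappedOp` is a hypothetical stand-in for the generic `Operator` (the real type also carries basis information, omitted here):

```julia
using LinearAlgebra

# Minimal stand-in for the PR's generic Operator: any AbstractArray as `.data`.
struct WrappedOp{T<:AbstractArray}
    data::T
end

# Forward the 5-arg mul! to the data field; any data type implementing
# Julia's 5-arg mul! (dense, sparse, CuArray, LinearMap, ...) works unchanged.
function LinearAlgebra.mul!(result::WrappedOp, a::WrappedOp, b::WrappedOp,
                            alpha::Number, beta::Number)
    mul!(result.data, a.data, b.data, alpha, beta)
    return result
end

A = WrappedOp(rand(4, 4)); B = WrappedOp(rand(4, 4)); C = WrappedOp(zeros(4, 4))
mul!(C, A, B, 1.0, 0.0)   # C.data now holds A.data * B.data
```

Because the wrapper's `mul!` is a thin dispatch layer, its overhead is negligible relative to the underlying matrix multiply, consistent with the timings above.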
Codecov Report

```
@@            Coverage Diff             @@
##           master     #265      +/-   ##
==========================================
- Coverage   98.49%   98.15%   -0.35%
==========================================
  Files          16       16
  Lines        1131     1139       +8
==========================================
+ Hits         1114     1118       +4
- Misses         17       21       +4
```

Continue to review full report at Codecov.
This replaces `DenseOperator` and `SparseOperator` by a generic `Operator` type with a corresponding `.data` field. The important point, though, is that this new type allows arbitrary types which implement Julia's `AbstractArray` interface in its `.data` field (the same goes for `SuperOperator` types). For usage with `timeevolution`, a 5-arg `mul!` implementation of the data type is required. Similarly, `StateVector` types now allow `<:AbstractVector` data. This is my answer to #236, but it also has some other consequences, such as:

**Lazy adjoints:**
The adjoint of an `Operator` is simply an `Operator` with a data field `<:Adjoint`, which is lazy. Things should therefore be slightly more memory efficient. Note that the same does not apply to `StateVector` types (`Bra` is not the lazy adjoint of `Ket`). The way they are implemented makes it a bit tricky to use lazy adjoints, and I'm not sure whether it makes a lot of sense. If we decide to do this, it should be the subject of a different PR.

**Sparse density operators:**
`timeevolution.master` now merely requires the state `rho` to be of type `Operator{<:Basis,<:Basis}`, so the data here can be sparse. Note, however, that this will be quite slow due to the in-place updating of sparse matrices.

**Support for CUDA (CuArrays):**
Since `CuArray <: AbstractArray`, one can now use them within the scope of QO.jl. However, this "feature" should be considered experimental (I haven't had the opportunity to properly test it yet). Another open question here would be a short-hand function for the creation of such operators and states, e.g. a dispatch on `cu`. But I don't know how to do that without adding CuArrays as a dependency, which I would like to avoid.

Similar to the already implemented `FFTOperator` and lazy operators (`LazySum`, `LazyProduct`, `LazyTensor`), one can now also use `LinearMap` and `LazyArray` types in the `Operator` data field. At some point down the line, we may want to consider replacing our lazy types by these.

Another nice thing is that after this PR is merged, we can implement the efficient iterative steady-state solver (cf. #252) without compromising on any features.
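A hedged sketch of the CUDA usage described above (the `Operator(basis_l, basis_r, data)` constructor form and the use of Adapt here are assumptions for illustration, not tested code from this PR):

```julia
using QuantumOptics, CUDA, Adapt

b = FockBasis(49)
a = destroy(b)

# Move the operator's data to the GPU. Since CuArray <: AbstractArray,
# the resulting object is a perfectly ordinary generic Operator;
# time evolution then only needs a 5-arg mul! for CuArray, which CUDA provides.
a_gpu = Operator(a.basis_l, a.basis_r, adapt(CuArray, Matrix(a.data)))

ψ = fockstate(b, 0)
ψ_gpu = Ket(ψ.basis, adapt(CuArray, ψ.data))  # StateVector with CuArray data
```

The same pattern should apply to `LinearMap` or `LazyArray` data: wrap it in an `Operator` and rely on its `mul!` implementation.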
Note that this will probably break depending packages, since any dispatch on `::SparseOperator` and `::DenseOperator` will error. However, there are still functions called `SparseOperator` and `DenseOperator` which return an `Operator` with a corresponding data field, similar to before. Also, there are short-hand constants for `Operator` types with sparse and dense data, called `SparseOpType` and `DenseOpType`, respectively (again, the same goes for `SparseSuperOperator` and `DenseSuperOperator`).

There are still more tests to be done before merging this. Also, the examples and the documentation need to be updated.