
A new "outer_similar" #22218

Open
ChrisRackauckas opened this issue Jun 5, 2017 · 11 comments
Labels: arrays

Comments

@ChrisRackauckas
Member

ChrisRackauckas commented Jun 5, 2017

While trying to get generic codes working for AbstractArrays, I realized that there's a pretty crucial abstraction missing: the ability to create the same array type with a different element type.

There is a distinction to be made here. copy(A) creates a copy, so its type is the same, but you cannot change the element type. Using the constructors directly requires different syntax for each AbstractArray, for obvious reasons.

Now, the obvious way to go seems to be similar, but similar is not guaranteed to return the same outer array type. For one thing, it returns a similar mutable array type, so similar(::SArray) gives an MArray. In addition, similar(::SharedArray) gives an Array, for a reason that doesn't make much sense to me, but maybe that's because I am using similar differently:

#5893
#13731

Even here @tkelman noted:

I agree with the opinion that similar is underspecified and has conflicting uses, and choosing the type it should return is all too often a subjective judgement call or dependent on how many arguments it's called with. For example the case of what to return for similar(::SharedArray) has gone back and forth, ref #5893 and #12964, and I think either choice has left multiple people unsettled and disagreeing. That's a pretty good sign that something is off with the abstraction.

So I am proposing a new function like similar which guarantees that the returned array matches the input's outer type. While outer_similar(A) would essentially be a copy, the main usages, outer_similar(A,T) to change the element type and outer_similar(A,T,indices...) to change the size, would be very useful in generic code regardless of the array types used.
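A hypothetical sketch of what the proposal might look like (the name and API are this proposal's, not an existing Base function). For plain Arrays it can simply fall back to similar, which already preserves the outer type there; other array packages would opt in with their own methods:

```julia
# Hypothetical `outer_similar` sketch: same outer array type, new eltype/size.
# For Array, `similar` already does the right thing, so we just forward to it.
outer_similar(A::AbstractArray) = similar(A)
outer_similar(A::AbstractArray, ::Type{T}) where {T} = similar(A, T)
outer_similar(A::AbstractArray, ::Type{T}, dims::Integer...) where {T} = similar(A, T, dims...)

A = rand(3, 3)
B = outer_similar(A, Int, 2, 2)   # a Matrix{Int} of size (2, 2)
```

Array types like SArray or SharedArray would then add methods that return their own type, which is exactly the guarantee similar does not currently make.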

Example

This comes up in many areas; ones where I have run into it lately include DiffEq, numerical linear algebra, rootfinding, and neural nets. Essentially, the library code can be written entirely in terms of broadcasted and matrix operations. These scientific computing algorithms don't actually need indexing at all outside of the user's function: they just need to do "array-wide" operations. So as long as they can allocate arrays of the right type for their A .= B.*C style of algorithms, they will generally work on all of these array types (even "weird" arrays like GPUArrays which don't even have indexing defined).

Assume C is the user's input array whose type you want to match. Without the proposed outer_similar, there is no good way to define a pre-allocated A that is known to match the array type of the user input, since similar can change it. In simple algorithms one can get by with copy. But if you want the algorithm to be unit-safe, you would want to define B to match the user's input array but be unitless. In that case you would want B = similar(C,T) where T is the unitless element type of A, and this is where we run into the issue that similar isn't guaranteed to return the same type: if you did A = copy(C); B = similar(C,T), you could end up with two SharedArrays and an Array, or the like.

For StaticArrays this is also an issue. If you make an initial B which you update later, it will silently become an MArray instead of an SArray and hurt performance (though in that case we would need to use A = B.*C, but that's a different issue). And if we want this broadcast to be automatically distributed or parallel, this kind of type change causes auto-parallelism of generic algorithms to fail.
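A minimal sketch of the pre-allocation pattern in question, using plain Arrays (where nothing goes wrong) to show the shape of the code; `T` stands in for the unitless element type. The point is that `similar` carries no guarantee that `typeof(B)` matches `typeof(C)` once `C` is a SharedArray, SArray, etc.:

```julia
# Generic cache allocation: A matches C exactly, B matches in size and
# outer type but with a different element type -- the latter guarantee is
# exactly what `similar` does NOT promise for every array type.
function alloc_cache(C::AbstractArray, ::Type{T}) where {T}
    A = copy(C)         # same type and values as C
    B = similar(C, T)   # same size, eltype T; outer type not guaranteed
    A, B
end

A, B = alloc_cache([1.0, 2.0], Float32)
```

With `C::Vector{Float64}` this happens to give `Vector{Float64}` and `Vector{Float32}`, but the same call with a SharedArray input can mix SharedArrays and Arrays.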

This makes me happy because Julia is so so so close to having auto-parallelism of generic algorithms work, as shown by tests on DiffEq (SciML/DifferentialEquations.jl#176).

Current Workaround

There is a workaround in some cases: you can pre-allocate and change types using broadcast. Want to make a unitless version? B = C./C. Need to promote the types? B = C .+ D. But this is an overly expensive way to get around the fact that similar is not the right abstraction for this job.
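The workaround in Base-only terms (the motivating case is unitful arrays; plain Float32/Float64 arrays stand in here):

```julia
C  = Float32[1.0, 2.0]
D  = [1.0, 2.0]          # Float64
Bu = C ./ C              # "unitless" version: keeps C's outer array type
Bp = C .+ D              # element type promoted by broadcast (Float64 here)
```

Both spellings do a full array's worth of arithmetic just to obtain an array of the right type, which is the "overly expensive" part.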

I am open to ideas for a different name than outer_similar. It's bad, but I thought writing down this argument was better than waiting for a good name.

@JeffBezanson
Member

Doesn't this use case require a mutable array? If so, then the fact that similar returns something mutable even if it's a different kind of array is just what's needed here. I think the wrong decision was made for SharedArray; the original purpose of similar was absolutely to return an array of the same type if possible. (Re: immutability, linking to #19992). Are there other kinds of mutable arrays where similar does the wrong thing for this purpose?

@mbauman
Member

mbauman commented Jun 5, 2017

Another spelling is B = convert(AbstractArray{NewT}, C). This isn't guaranteed to not alias, though, so Base.LinAlg uses copy_oftype for this purpose.
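A simplified sketch of the copy_oftype idea (the actual Base.LinAlg definition may differ): a copy that also guarantees no aliasing with the input, regardless of whether the element type changes.

```julia
# If the requested eltype equals the existing one, just copy;
# otherwise convert, which allocates a fresh array when eltypes differ.
copy_oftype(A::AbstractArray{T}, ::Type{T}) where {T} = copy(A)
copy_oftype(A::AbstractArray, ::Type{S}) where {S} = convert(AbstractArray{S}, A)

x = [1, 2, 3]
y = copy_oftype(x, Float64)   # Vector{Float64}, does not alias x
z = copy_oftype(x, Int)       # copy(x): same type, no aliasing
```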

@mbauman added the arrays label Jun 5, 2017
@ChrisRackauckas
Member Author

Doesn't this use case require a mutable array?

No. It's perfectly fine to iteratively update the values of a type regardless of the implementation detail of mutable or immutable. Simple case: initial value u0 and solving u_n = a*u_{n-1} + 1. We can do

mutable struct MyType{T<:Number,T2<:AbstractArray}
  a::T
  u::T2
end

You can do something like:

function evolve!(mt::MyType{T,T2}) where T where T2<:AbstractArray{T3} where T3<:Union{Number,SArray} # Immutable dispatch, since there's no immutable trait
  mt.u = mt.a .* mt.u .+ 1
end

function evolve!(mt::MyType)
  mt.u .= mt.a .* mt.u .+ 1
end

so that way all work with the reasonable interpretation:

mt = MyType(2.0,2.0)
mt = MyType(2.0,[2.0])
mt = MyType(2.0,@SVector([2.0]))

This is essentially a minimal example for diffeq, rootfinding, iterative numerical linear algebra solvers, optimization, and neural nets, BTW. Most packages don't have the extra code path for immutables since it can lead to a lot of code duplication, but it can be very useful when Numbers and small arrays make sense in applications (example: Black–Scholes SDE) (though another issue may fix this).

Now that example works with "standard numbers", but if you do this with unitful numbers it can fail due to 1 not matching units. And if you want that 1 to be some type of stored array, for example u_n = a*u_{n-1} + u_{n-2} under some interpretation where units make sense, you need to make sure that the extra value is pre-set with the correct units, whether or not it's a number, SharedArray, GPUArray, etc.
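A self-contained illustration of the units failure (a toy wrapper type here; real code would use something like Unitful.jl). The literal 1 in `a .* u .+ 1` carries no units, so there is no valid way to add it to a unitful value:

```julia
# Toy "meters" wrapper; only unit-consistent operations are defined.
struct Meters{T<:Number} <: Number
    val::T
end
Base.:*(a::Number, m::Meters) = Meters(a * m.val)   # scalar * length = length
Base.:+(m::Meters, n::Meters) = Meters(m.val + n.val)

a, u = 2.0, Meters(2.0)
a*u + Meters(1.0)   # fine: both sides carry units
# a*u + 1           # errors: no promotion between Meters and the unitless 1
```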

@JeffBezanson
Member

I don't quite see how a similar-like function fits in there. If similar returned an immutable array, what would you then do with it?

@ChrisRackauckas
Member Author

ChrisRackauckas commented Jun 5, 2017

If similar returned an immutable array, what would you then do with it?

For the u_n = a*u_{n-1} + u_{n-2} case:

mutable struct MyType{T<:Number,T2<:AbstractArray,T3}
  a::T
  u::T2
  u2::T3
end

Assume you want to start with u_n = 0 for n < 0. You would want to iterate using evolve! and update the saved u2 array, via either u2 .= u or u2 = u depending on mutability, with a unit fix in there. But the extra values are just an implementation detail; you still want the user to provide only u0 and a. So how do you get the constructors

mt = MyType(2.0,2.0)
mt = MyType(2.0,[2.0])
mt = MyType(2.0,@SVector([2.0]))

to work? If you use u2 = similar(u,T) (in the array case), then u2 will unnecessarily end up an MVector in a setup where everything should be immutable.

As mentioned earlier, the generic way to handle the units here would be to initialize with u2=u./a, but it seems heavy-handed to do a full array operation just to get the correct type.

@ChrisRackauckas
Member Author

Another spelling is B = convert(AbstractArray{NewT}, C). This isn't guaranteed to not alias, though, so Base.LinAlg uses copy_oftype for this purpose.

Yes, copy_oftype is what I am looking for, and it seems to be used for a very similar purpose.

@JeffBezanson
Member

JeffBezanson commented Jun 5, 2017

Ah, so it's a placeholder value that will never be read? I guess you could leave the field undefined at first. That's a bit unsatisfying, but creating values intended not to be used seems pretty marginal to me as well.

It's not totally clear to me whether you want a copy, or just a new array with undefined or zeroed contents. If a copy, then T.(a) (where T is the new element type) might work.
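For a plain Array, the two options being distinguished look like this:

```julia
a = [1, 2, 3]
b = Float64.(a)          # a copy of a with the new element type
c = similar(a, Float64)  # same shape and eltype, but undefined contents
```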

@ChrisRackauckas
Member Author

It's not totally clear to me if you want a copy, or just a new array with undefined or zero'd contents.

That completely depends on the application. But it's still all the same: an abstraction for "get me the same kind of array, but change the element type to T".

It's not totally clear to me if you want a copy, or just a new array with undefined or zero'd contents. If a copy, then T.(a) (where T is the new element type) might work.

That might actually be a good solution.

@ChrisRackauckas
Member Author

Nevermind:

using GPUArrays
u0 = GPUArray(rand(Float32, 32, 32))
Float64.(u0)

errors, while similar(u0,Float64) does just fine.

@timholy
Member

timholy commented Jun 5, 2017

For the case of SharedArray, perhaps the main reason for returning an Array is illustrated here (with julia -p 2):

julia> sima(dims) = (a = Array{Float64}(dims); sum(a))
sima (generic function with 1 method)

julia> sims(dims) = (a = SharedArray{Float64}(dims); sum(a))
sims (generic function with 1 method)

julia> using BenchmarkTools

julia> @benchmark sima((3,3)) seconds=1 gcsample=true
BenchmarkTools.Trial: 
  memory estimate:  160 bytes
  allocs estimate:  1
  --------------
  minimum time:     38.926 ns (0.00% GC)
  median time:      39.664 ns (0.00% GC)
  mean time:        40.159 ns (0.00% GC)
  maximum time:     41.671 ns (0.00% GC)
  --------------
  samples:          5
  evals/sample:     434

julia> @benchmark sims((3,3)) seconds=1 gcsample=true
BenchmarkTools.Trial: 
  memory estimate:  13.34 KiB
  allocs estimate:  349
  --------------
  minimum time:     899.431 μs (0.00% GC)
  median time:      937.509 μs (0.00% GC)
  mean time:        948.774 μs (0.00% GC)
  maximum time:     1.021 ms (0.00% GC)
  --------------
  samples:          4
  evals/sample:     1

We create a lot of arrays for temporary purposes, and that performance gap is so extraordinary that it makes a certain amount of sense to say "if you want a shared array, you'd better create it explicitly."

I also agree that the alternative makes sense; with our current API I am not sure there's a good answer here.

@JeffBezanson
Member

Is there any good reason for Float64.(a) to give an error for some type of array? It seems to me that should be the spelling for "copy this array to a similar array with a different element type". It can even be overloaded specifically if necessary.
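One way the "overloaded specifically" part can work is through broadcast customization: a wrapper array type can make Float64.(a) return the same wrapper. This issue predates Julia 1.0, so the following is a sketch in the modern broadcasting interface, with TrackedArray as a hypothetical example type:

```julia
using Base.Broadcast: Broadcasted, ArrayStyle

# Hypothetical wrapper type illustrating the idea.
struct TrackedArray{T,N,A<:AbstractArray{T,N}} <: AbstractArray{T,N}
    data::A
end
Base.size(t::TrackedArray) = size(t.data)
Base.getindex(t::TrackedArray, i::Int...) = t.data[i...]
Base.setindex!(t::TrackedArray, v, i::Int...) = (t.data[i...] = v)

# Route broadcasts over TrackedArrays to a custom style, and allocate
# the result as a TrackedArray, so `Float64.(t)` preserves the wrapper.
Base.BroadcastStyle(::Type{<:TrackedArray}) = ArrayStyle{TrackedArray}()
Base.similar(bc::Broadcasted{ArrayStyle{TrackedArray}}, ::Type{T}) where {T} =
    TrackedArray(similar(Array{T}, axes(bc)))

t  = TrackedArray([1, 2, 3])
t2 = Float64.(t)   # still a TrackedArray, now with eltype Float64
```

With such an overload, T.(a) becomes a spelling of "copy this array to the same kind of array with a different element type", which is the semantics this issue is asking for.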
