Hooks for XArray operations #1938

Open · hameerabbasi opened this issue Feb 23, 2018 · 53 comments

Labels: API design · design question · topic-arrays (related to flexible array support)

Comments

@hameerabbasi commented Feb 23, 2018

In the hope of cleaner dask and sparse support (pydata/sparse#1), I wanted to suggest hooks for XArray operations.

Something like the following:

try:
    import dask.array as da
    xarray.hooks.register('nansum', da.Array, da.nansum)
    ...
except ImportError:
    pass


try:
    import sparse
    xarray.hooks.register('nansum', sparse.SparseArray, sparse.nansum)
    ...
except ImportError:
    pass

Functions would work something like the following (a minimal sketch of this lookup follows the list):

  • Check the type of the first/primary argument.
  • Look up the registered function for that type.
  • Call it, falling back to NumPy if nothing is registered.
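
A minimal sketch of the lookup described above (hypothetical names throughout; xarray.hooks does not exist today):

import numpy as np

# Hypothetical registry mapping (function name, array type) -> implementation.
_REGISTRY = {}

def register(name, array_type, func):
    _REGISTRY[(name, array_type)] = func

def get(name, arr):
    # Check the type of the first/primary argument, walking the MRO so
    # that subclasses match registrations made for their parent classes.
    for typ in type(arr).__mro__:
        func = _REGISTRY.get((name, typ))
        if func is not None:
            return func
    # Fall back to NumPy if nothing is registered for this type.
    return getattr(np, name)

A call site would then look like get('nansum', arr)(arr, axis=0).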

I would argue that this should be in Numpy, but it's a huge project to put it there.

@fujiisoup (Member)

Thanks for leading the development of sparse.
I'm looking forward to seeing it in xarray :)

Currently, our logic for supporting dask.array and numpy.ndarray is hard-coded everywhere.
For example, nansum has several computation paths:
dask.Array, np.ndarray with bottleneck, bare np.ndarray, and an in-house implementation for object-type arrays.
The easiest way to support sparse might be to hard-code yet another path for it,
but that is not very flexible.

Do we need to be capable of supporting other objects for future extension?
If so, we may need to start with (heavy) refactoring.

@shoyer,
could you give any suggestions?
I am personally interested in helping with this, but we may need to decide on a direction first.

@hameerabbasi (Author) commented Feb 23, 2018

Then I would suggest something like the following for hooks (omitting imports):

# Registered in order of priority
xarray.interfaces.register('DaskArray', lambda ar: isinstance(ar, da.Array))
xarray.hooks.register('nansum', 'DaskArray', da.nansum)

xarray.interfaces.register('SparseArray', lambda ar: isinstance(ar, sparse.SparseArray))
xarray.hooks.register('nansum', 'SparseArray', sparse.nansum)

And then, in code, call the appropriate nansum instead of np.nansum:

nansum = xarray.hooks.get(arr, 'nansum')

If you need help, I'd be willing to give it. :-) But I'm not an xarray user, so I don't really know the use cases or the codebase.

@shoyer (Member) commented Feb 23, 2018

Do we need to be capable of supporting other objects for future extension?
If so, we may need to start from (heavy) refactoring.

For two array backends, it didn't make sense to write an abstraction layer for this, in part because it wasn't clear what we needed. But for three examples, it probably does -- that's the point where shared use cases become clear. Undoubtedly, there will be other cases in the future where users will want to extend xarray to handle new array types (arrays with units come to mind).

For implementing these overloads/functions, there are various possible solutions. Our current ad-hoc system is similar to what @hameerabbasi suggests -- we check the type of the first argument and use that to dispatch to an appropriate function. This has the advantage of being easy to implement for a known set of types, but a single dispatch order is not very extensible -- it's impossible to anticipate every third-party class. Recently, NumPy has moved away from this (e.g., with __array_ufunc__).

One appealing option is to make use of @mrocklin's multipledispatch library, which was originally developed for Blaze and is still in active use. Possible concerns:

  1. Performance. Import times need to be fast, and I know this is something that multipledispatch can sometimes struggle with. My guess is that this wouldn't be a problem for us, since we can rely on other dispatch mechanisms for most operations (including __array_ufunc__ and Python's builtin arithmetic overrides).
  2. Dispatch for stack/concatenate: How do we handle dispatching for functions that take a list of arrays? e.g., if a list of arrays contains any dask arrays, we need to use dask. Ideally, we would resolve the type of an object like [np.array(...), np.array(...), ..., da.Array(...)] to a mixed type like List[Union[np.ndarray, da.Array]], for which an override could be implemented.
  3. Dispatch for the first argument(s) only: This is a minor point, but some functions don't need to be dispatched on all of their arguments, e.g., sum() only really needs to dispatch on the array types but can pass other arguments like axis directly on. I suppose we could simply annotate extra positional arguments with object, but this will get annoying for multiple optional arguments, which would all need separate implementations (if I understand multipledispatch correctly).
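
For the easy case, a sketch of what single dispatch on the array argument could look like with multipledispatch (assuming, as confirmed later in this thread, that keyword arguments are ignored for dispatch purposes):

import numpy as np
from multipledispatch import dispatch

@dispatch(np.ndarray)
def nansum(a, **kwargs):
    # kwargs like axis are passed straight through; multipledispatch
    # dispatches only on the positional argument's type.
    return np.nansum(a, **kwargs)

try:
    import dask.array as da

    @dispatch(da.Array)
    def nansum(a, **kwargs):
        return da.nansum(a, **kwargs)
except ImportError:
    pass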

@mrocklin (Contributor)

Import times on multipledispatch have improved thanks to work by @llllllllll. They could probably be further improved if people wanted to invest modest intellectual effort here. Costs scale with the number of type signatures on each operation. In Blaze this was very high, well into the hundreds; in our case it would be more modest, I think around 2-10. (Also, a historical note: multipledispatch predates my involvement in Blaze.)

When possible it would be useful to upstream these concerns to NumPy, even if we have to move faster than NumPy is able to support.

@shoyer (Member) commented Feb 23, 2018

Dispatch for stack/concatenate is definitely on the radar for NumPy development, but I don't know when it's actually going to happen. The likely interface is something like __array_ufunc__: a special method like __array_concatenate__ is called on each element in the list, until one does not return NotImplemented. This is a different style of overloads than multipledispatch, one that is slightly simpler to implement but possibly slower and with fewer guarantees of correctness.

We only need this for a couple of operations, so in any case we can probably implement our own ad-hoc dispatch system for np.stack and np.concatenate, either along the lines of multipledispatch or NumPy's __array_ufunc__.

On further contemplation, overloading based on union types with a system like multipledispatch does seem tricky. It's not clear to me that there's even a well defined type for inputs to concatenate that should be dispatched to dask vs. numpy, for example. We want to let dask handle any cases where at least one input is a dask array, but a type like List[Union[np.ndarray, da.Array]] actually matches a list of all numpy arrays too -- unless we require an exact match for the type.
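
For reference, a sketch of that NotImplemented-style protocol, using a hypothetical __array_concatenate__ special method (no such protocol exists in NumPy today):

import numpy as np

def duck_concatenate(arrays, axis=0):
    # Give each argument's type a chance to handle the operation, in
    # order, mirroring the __array_ufunc__ style of overrides.
    for arr in arrays:
        method = getattr(type(arr), '__array_concatenate__', None)
        if method is None:
            continue
        result = method(arr, arrays, axis=axis)
        if result is not NotImplemented:
            return result
    # Fall back to NumPy if no type claimed the operation.
    return np.concatenate([np.asarray(a) for a in arrays], axis=axis)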

@llllllllll

In Blaze we have variadic sequences for multiple dispatch, and the List[Union] case is something we have run into. We have a type called VarArgs which takes a variadic sequence of type-arguments and represents a sequence of unions over the arguments; for example, VarArgs[pd.Series, pd.DataFrame] is a sequence of unknown length which is known to hold either series or dataframes. With some mild metaprogramming we made it so that VarArgs[pd.Series] is a subclass of VarArgs[pd.Series, pd.DataFrame], or in general, more specific sequences are subclasses of more general sequences. This means that you can resolve the ambiguity by registering a dispatch for VarArgs[np.ndarray] and VarArgs[np.ndarray, da.Array], and you know that the second function can only be called if the sequence holds at least one dask array.

Here is an example of what that looks like for merge, which is concat(axis=1): https://github.com/blaze/blaze/blob/master/blaze/compute/pandas.py#L691
This is the definition of VarArgs: https://github.com/blaze/blaze/blob/master/blaze/compute/varargs.py

@shoyer (Member) commented Feb 23, 2018

@llllllllll very cool! Is there a special trick needed to use this? I tried:

# first: pip install https://github.com/blaze/blaze/archive/master.tar.gz
import blaze.compute
from blaze.compute.varargs import VarArgs
from multipledispatch import dispatch

@dispatch(VarArgs[float])
def f(args):
  print('floats')

@dispatch(VarArgs[str])
def f(args):
  print('strings')

@dispatch(VarArgs[str, float])
def f(args):
  print('mixed')

This gives me an error when I try to use it:

>>> f(['foo'])
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/multipledispatch/dispatcher.py in __call__(self, *args, **kwargs)
    154         try:
--> 155             func = self._cache[types]
    156         except KeyError:

KeyError: (<class 'list'>,)

During handling of the above exception, another exception occurred:

NotImplementedError                       Traceback (most recent call last)
<ipython-input-5-19f52a9a1dd6> in <module>()
----> 1 f(['foo'])

/usr/local/lib/python3.6/dist-packages/multipledispatch/dispatcher.py in __call__(self, *args, **kwargs)
    159                 raise NotImplementedError(
    160                         'Could not find signature for %s: <%s>' %
--> 161                         (self.name, str_signature(types)))
    162             self._cache[types] = func
    163         try:

NotImplementedError: Could not find signature for f: <list>

@llllllllll commented Feb 23, 2018

VarArgs itself is actually a type, so you need to create instances which wrap the list argument, for example:

In [1]: from blaze.compute.varargs import VarArgs

In [2]: from multipledispatch import dispatch

In [3]: @dispatch(VarArgs[float])
   ...: def f(args):
   ...:     print('floats')
   ...:     

In [4]: @dispatch(VarArgs[str])
   ...: def f(args):
   ...:     print('strings')
   ...:     

In [5]: @dispatch(VarArgs[str, float])
   ...: def f(args):
   ...:     print('mixed')
   ...:     

In [6]: f(VarArgs(['foo']))
strings

In [7]: f(VarArgs([1.0]))
floats

In [8]: f(VarArgs([1.0, 'foo']))
mixed

In [9]: VarArgs([1.0, 'foo'])
Out[9]: VarArgs[float, str]([1.0, 'foo'])

You could hide this behind a top-level function that wraps the input for the user, or register a dispatch for list which boxes and recurses into itself.

@hameerabbasi (Author)

Can't some wild metaprogramming make it so that [1.0, 'foo'] itself is an instance of VarArgs[float, str] (or be converted)?

@llllllllll commented Feb 23, 2018

We could make a particular list an instance of a particular TypedVarArgs; however, multiple dispatch uses the type() of arguments as well as issubclass to do dispatching. Multiple dispatch depends on being able to partially order types to make dispatching more efficient. The constructor of VarArgs scans for the types of the elements and constructs an instance of a new (but memoized) subclass of VarArgs which encodes the element types so that issubclass works as expected. The problem is that type([1.0, 'foo']) returns just list which erases all information about the elements.
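
The erasure is easy to see (the VarArgs behavior here matches the session above):

>>> type([1.0, 'foo'])     # element types are erased
<class 'list'>
>>> VarArgs([1.0, 'foo'])  # element types encoded in a memoized subclass
VarArgs[float, str]([1.0, 'foo'])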

@llllllllll

The wrapping dispatch would just look like:

@dispatch(list)
def f(args):
    return f(VarArgs(args))

@hameerabbasi (Author)

How about something like checking inside a list: if something is top priority, call a; if second priority, call b; etc.

@shoyer (Member) commented Feb 23, 2018

Yes, I just tested out the wrapping dispatch. It works and is quite clean.

@shoyer (Member) commented Feb 23, 2018

As for my last concern, "Dispatch for the first argument(s) only": it looks like the simple answer is that multipledispatch already dispatches only on positional arguments. So as long as we're strict about using keyword arguments for extra parameters like axis (which is good style anyway), we only need a single overload per array type for single-dispatch functions like sum().

It looks like this resolves almost all of my concerns about using multiple dispatch.

One thing that would be nice is if VarArgs were actually distributed as part of multipledispatch rather than needing to be copied separately into xarray. That would make it easier for third parties to extend our operations, by simply importing VarArgs from multipledispatch rather than from somewhere internal to xarray.

@shoyer (Member) commented Feb 23, 2018

How about something like checking inside a list if something is top priority, then call a, if second priority, call b, etc.

Usually, this is not a good idea. The problem is that it's impossible to know a global priority order across unrelated packages. It's usually better to declare valid type matches explicitly.

NumPy tried this with __array_priority__, but in practice these priority numbers are basically meaningless for all comparisons other than comparisons to the priority of NumPy arrays.

@llllllllll

I wouldn't mind submitting this upstream, but I will defer to @mrocklin.

@mrocklin (Contributor)

I would want to see how magical it was. @llllllllll's calibration of "mild metaprogramming" may differ slightly from my own :)

Eventually, if multipledispatch becomes a dependency of xarray, we should consider changing the decision-making process away from being just me, though. Relatedly, SymPy also just adopted it (by vendoring) as a dependency.

@shoyer (Member) commented Feb 24, 2018

@mrocklin this is roughly what we would want in multipledispatch:
https://github.com/blaze/blaze/blob/master/blaze/compute/varargs.py#L20-L90

This involves metaclasses, which frankly do blow my mind a little bit. Probably the magic could be tuned down a little bit, but metaclasses are necessary at least for implementing __getitem__ syntax to create classes (and provide a few other niceties here like custom reprs and subclass checks).

@hameerabbasi (Author)

Another benefit to this would be that if XArray didn't want to support a particular library in its own code, the library itself could add the hooks.

@mrocklin (Contributor)

cc @jcrist, who has historically been interested in how we solve this problem within dask.array

@hameerabbasi (Author)

This might even help us out in Sparse for dispatch with scipy.sparse.spmatrix, numpy.ndarray, etc.

@hameerabbasi (Author)

Is there a way to handle kwargs (not with types, but ignoring them)?

@shoyer (Member) commented Feb 24, 2018

Is there a way to handle kwargs (not with types, but ignoring them)?

Yes, multipledispatch already ignores all keyword arguments for purposes of dispatching.

@hameerabbasi (Author)

@llllllllll How hard would it be to make this work for star-args? I realize you could just add an extra wrapper but it'd be nice if you didn't have to.

@hameerabbasi (Author) commented Feb 24, 2018

Something like @starargswrapper that would just cast to list, and call the VarArgs version.

Actually it'd be nice to have something like @dispatch(int, str, StarArgs[int]).

@shoyer (Member) commented Feb 25, 2018

I spent some time thinking about this today. The cleanest answer is probably support for standard typing annotations in multipledispatch, at least for List. This is already being pursued for multipledispatch in mrocklin/multipledispatch#69.

@llllllllll

Given the issues raised on that PR, as well as the profiling results shown here, I think that PR will need some serious work before it could be merged.

@llllllllll

@hameerabbasi This really doesn't work with *args due to how multiple dispatch itself works. What we have done in blaze is make top-level functions that accept *args which directly call dispatched functions passing the tuple.
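
A sketch of that pattern (here the dispatched function simply takes the argument tuple; in Blaze it would take a VarArgs):

import numpy as np
from multipledispatch import dispatch

@dispatch(tuple)
def _stack(arrays, axis=0):
    # Dispatched implementation: receives the argument tuple directly.
    return np.stack(arrays, axis=axis)

def stack(*arrays, axis=0):
    # Public entry point: accepts *args and forwards the tuple.
    return _stack(arrays, axis=axis)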

@shoyer (Member) commented Feb 25, 2018 via email

@hameerabbasi (Author)

Which really is totally fine -- this is all a stop gap measure until NumPy itself supports this sort of duck typing.

You're assuming here that most users of XArray would be using a recent version of Numpy... which is a totally fine assumption IMO. We make the same one for sparse.

However, consider that some people may be using something like conda, which (because of complex dependencies and all) may end up delaying updates (both for Numpy and XArray).

I guess, however, that if people really wanted the updates they could use pip.

so I'm not sure it's worth enshrining in multipledispatch either

I would say a little clean-up with some extra decorators for exactly this purpose may be in order; that way, individual wrapping functions aren't needed.

@mrocklin (Contributor)

In pydata/sparse#1 (comment), @shoyer mentions that some work could likely progress in XArray before deciding on VarArgs in multipledispatch. If the XArray maintainers have time, it might be valuable to lay out how that would look so that other devs can try it out.

@shoyer (Member) commented Apr 19, 2018

I'm thinking it could make sense to build this minimal library for "duck typed arrays" with multipledispatch outside of xarray. That would make it easier for library builders to use and extend it. Anyone interested in getting started on that?

@hameerabbasi (Author)

By minimal library, I'm assuming you mean something of the sort discussed about abstract arrays? What functionality would such a library have?

@shoyer (Member) commented Apr 19, 2018

Basically, the library would define functions like concatenate (everything in the linked sparse issue) using multipledispatch, with implementations for numpy, dask, sparse, etc.

@shoyer (Member) commented Apr 19, 2018

This library would have hard dependencies only on numpy and multipledispatch, and would expose a multipledispatch namespace so extending it doesn't have to happen in the library itself.
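
A sketch of what that core could look like (hypothetical module; nansum stands in for the full set of functions):

# duckarray.py -- hypothetical minimal core
import numpy as np
from multipledispatch import Dispatcher

nansum = Dispatcher('nansum')

@nansum.register(np.ndarray)
def _nansum_numpy(a, **kwargs):
    # NumPy and multipledispatch are the only hard dependencies;
    # NumPy provides the default implementation.
    return np.nansum(a, **kwargs)

A third-party library would then extend the same namespace without any change to the core itself, e.g. @duckarray.nansum.register(sparse.SparseArray) over a wrapper around sparse.nansum.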

@mrocklin (Contributor) commented Apr 19, 2018 via email

@shoyer (Member) commented Apr 20, 2018

I like duckarray a little better without the underscore.

Should we go ahead and start pydata/duckarray? Or is it better to incubate in somebody's personal repo?

@hameerabbasi (Author) commented Apr 20, 2018

I've created one, as per your e-mail: https://github.com/hameerabbasi/arrayish

The name is inspired by a recent discussion about this on the Numpy mailing list.

@mrocklin (Contributor) commented Apr 20, 2018 via email

@mrocklin (Contributor) commented Apr 20, 2018 via email

@hameerabbasi (Author)

I've written it up and already released version 0.0.1 on PyPI, except concatenate and stack (which need TypedSequence). I can still change the name, but I'd rather not.

Also, import duckarray as da conflicts with import dask.array as da.

@mrocklin (Contributor)

Thanks for taking the initiative here, @hameerabbasi! It's good to see something up already.

Here is a link to the discussion that I think @hameerabbasi is referring to: http://numpy-discussion.10968.n7.nabble.com/new-NEP-np-AbstractArray-and-np-asabstractarray-tt45282.html#none

I haven't read through that entirely yet; was "arrayish" decided on by the community, or is the term still up for discussion?

@hameerabbasi (Author)

Let's move this discussion over to hameerabbasi/arrayish#1. But, in summary, I got the impression that the community in general is unhappy with the name "duck arrays".

@rabernat (Contributor)

I am sitting in the SciPy talk about CuPy. It would be great if someone could give us an update on how this issue stands before tomorrow's xarray sprint.

Someone may want to try plugging CuPy arrays into xarray. But this issue doesn't really resolve the best way to do that. As far as I can tell, @hameerabbasi's "arrayish" project was deprecated in favor of uarray / unumpy.

What is the best path forward as of today, July 12, 2019?

@rabernat added the topic-arrays (related to flexible array support) label on Jul 12, 2019
@hameerabbasi (Author)

uarray/unumpy is shaping up nicely. 😄

@rabernat (Contributor)

@hameerabbasi - are you at SciPy by any chance?

@mrocklin (Contributor)

@jacobtomlinson got things sorta-working with NEP-18 and CuPy in an afternoon in Iris (with a strong emphasis on "kinda").

On the CuPy side you're fine. If you're on NumPy 1.16 you'll need to enable the __array_function__ interface with the following environment variable:

export NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=1

If you're using NumPy 1.17 then this is on by default.

I think that most of the work here is on the Xarray side. We'll need to remove things like explicit type checks.
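
A quick way to check that the dispatch works (assuming a CUDA-capable machine and a CuPy version that implements __array_function__):

import numpy as np
import cupy

x = cupy.arange(6).reshape(2, 3)
y = np.sum(x, axis=0)  # dispatches to CuPy via __array_function__
print(type(y))         # a CuPy ndarray, not np.ndarray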

@hameerabbasi (Author)

@rabernat I can attend remotely.

@shoyer (Member) commented Jul 12, 2019

We're at the point where this could be hacked together pretty quickly:

  1. We need to remove the explicit casting to NumPy arrays (a la #2956, "Picking up #1118: Do not convert subclasses of ndarray unless required"). Checking for an __array_function__ attribute is probably a good heuristic for duck arrays (it's what dask is using); a sketch of this check follows the list.
  2. Internally, we need to use NumPy functions directly (if __array_function__ is enabled) instead of our current Dask/NumPy versions. Fortunately, pretty much all this logic lives in one place, in xarray.core.duck_array_ops.
  3. We'll need to think a little bit about indexing in particular. Right now we have special indexing wrappers for NumPy arrays and Dask arrays; we would need to decide how to handle arbitrary array objects (probably by indexing them like NumPy arrays?). Basic indexing should work either way, but indexing with arrays can be a little tricky since few duck-array types support NumPy's full semantics (which are pretty complex).
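
The duck-array check from step 1 could be as simple as this sketch (hypothetical helper name):

def is_duck_array(value):
    # NEP 18: anything implementing __array_function__ advertises that
    # it can handle NumPy's public API, so treat it as a duck array.
    return hasattr(value, '__array_function__')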
