
Run on Cubed #908

Open
tomwhite opened this issue Sep 22, 2022 · 5 comments
Labels
dispatching Issues related to how we send method calls to different backends

Comments

@tomwhite
Collaborator

This is an umbrella issue to track the work needed to run sgkit on Cubed.

This is possible because Cubed exposes the Python array API standard as well as common Dask functions and methods like map_blocks and Array.compute. There is also ongoing work to integrate Cubed in Xarray, as part of exploring alternative parallel execution frameworks in Xarray.
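For anyone who wants to try it, here's a minimal sketch of the kind of code that runs on Cubed but reads like Dask plus the array API. The cubed.array_api namespace and the chunks= keyword are taken from my reading of the Cubed docs and may differ between versions:

```python
import cubed.array_api as xp

# Create a small chunked array (chunks= is a Cubed-specific keyword).
a = xp.asarray([[1, 2], [3, 4]], chunks=(2, 2))

# Standard array API calls...
b = xp.astype(a, xp.float64)
c = xp.sum(b, axis=0)

# ...plus the familiar Dask-style compute() to materialise the result.
print(c.compute())
```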

@tomwhite added the dispatching label on Sep 22, 2022
@tomwhite
Collaborator Author

I've managed to get some basic aggregation tests in test_aggregation.py passing with the changes here: tomwhite@83ff400. This is not to be merged as it's just a demonstration at the moment. Most of the changes are due to the array API being stricter about types, so some explicit casts are needed.
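As an illustration of the kind of cast involved (this is not the actual sgkit change, just the shape of it): under the array API's promotion rules, combining an integer array with a Python float isn't defined, so you have to cast explicitly:

```python
import numpy as np
import cubed.array_api as xp

counts = xp.asarray(np.array([0, 1, 2], dtype=np.uint8))

# counts / 2.0 is not defined for an integer array under the strict
# promotion rules, so cast to a float dtype first.
freqs = xp.astype(counts, xp.float64) / 2.0
```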

They rely on some changes in xarray too: pydata/xarray#7067.

@tomwhite
Collaborator Author

Also, this example shows that Cubed works with Numba (locally at least), which answers @hammer's question here: https://github.com/pystatgen/sgkit/issues/885#issuecomment-1209288596.

@tomwhite
Collaborator Author

Since I opened this issue almost two years ago, Xarray has added a chunk manager abstraction (https://docs.xarray.dev/en/stable/internals/chunked-arrays.html), which makes it a lot easier to switch from Dask to Cubed as the backend computation engine without changing the code that expresses the computation. The nice thing about this approach is that we can use Dask, Cubed, or any other distributed array engine that Xarray might support in the future (such as Arkouda).
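As a rough sketch of what switching looks like (assuming the cubed-xarray entrypoint package is installed; the chunked_array_type keyword is from the Xarray docs linked above):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"call_genotype": (("variants", "samples"), np.zeros((10, 4), dtype=np.int8))}
)

# Same dataset, same chunking call; only the chunk manager changes.
ds_dask = ds.chunk({"variants": 5})                               # default: Dask
ds_cubed = ds.chunk({"variants": 5}, chunked_array_type="cubed")  # Cubed
```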

I've started to explore what this might look like in https://github.com/tomwhite/sgkit/tree/xarray-apply-ufunc, but the two main ideas are:

  1. move from dask.array.map_blocks to xarray.apply_ufunc for applying functions in parallel,
  2. have a way to run the test suite using either Dask or Cubed so we don't have to do all the changes at once.

The code in the branch does this for count_call_alleles. As you can see in this commit (3833982), another minor benefit of using xarray.apply_ufunc is that we can use named dimensions like ploidy and alleles rather than dimension indices like 2.
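To make the idea concrete, here is a hypothetical sketch of the pattern (count_alleles_block and the sizes are stand-ins, not sgkit's actual implementation):

```python
import numpy as np
import xarray as xr

def count_alleles_block(genotypes, n_alleles):
    # genotypes arrives with the core "ploidy" dimension last; count how many
    # times each allele index appears in each call.
    out = np.zeros(genotypes.shape[:-1] + (n_alleles,), dtype=np.uint8)
    for allele in range(n_alleles):
        out[..., allele] = (genotypes == allele).sum(axis=-1)
    return out

ds = xr.Dataset(
    {"call_genotype": (("variants", "samples", "ploidy"),
                       np.random.randint(0, 2, size=(6, 4, 2)))}
).chunk({"variants": 3})

counts = xr.apply_ufunc(
    count_alleles_block,
    ds["call_genotype"],
    kwargs={"n_alleles": 2},
    input_core_dims=[["ploidy"]],        # named dimension, not an index like 2
    output_core_dims=[["alleles"]],
    dask="parallelized",                 # dispatched via the chunk manager
    dask_gufunc_kwargs={"output_sizes": {"alleles": 2}},
    output_dtypes=[np.uint8],
)
```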

This commit (da8657e) adds a new pytest command-line option, --use-cubed, to run the tests on Cubed.
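For reference, the shape of that option in conftest.py is roughly as follows (the chunked_array_type fixture name is hypothetical; the real change is in the commit above):

```python
# conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--use-cubed",
        action="store_true",
        default=False,
        help="run the test suite with Cubed instead of Dask",
    )

@pytest.fixture(scope="session")
def chunked_array_type(request):
    # Tests request this fixture to decide which backend to chunk with.
    return "cubed" if request.config.getoption("--use-cubed") else "dask"
```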

I would be interested in any thoughts on this direction @jeromekelleher, @hammer, @timothymillar, @benjeffery, @ravwojdyla, @eric-czech.

I'd like to set up a CI workflow that adds --use-cubed and runs just the tests for count_call_alleles to start with, before expanding to cover more of sgkit's aggregation functions.

@tomwhite
Collaborator Author

Here's a successful run for the count_call_alleles tests on Cubed: https://github.com/tomwhite/sgkit/actions/runs/10455603818/job/28950946965

@jeromekelleher
Collaborator

This sounds like an excellent approach +1
