
Sharding Prototype #40

Closed
wants to merge 4 commits

Conversation

jstriebel

This PR is for an early prototype of sharding support in zarrita, as described in the original issue zarr-developers/zarr-python#877. It serves mainly to discuss the overall implementation approach for sharding. This PR is not meant to be merged.

This prototype

  • allows specifying shards as the number of chunks that should be contained in a shard (e.g. using arr.zeros((20, 3), chunks=(3, 3), shards=(2, 2), …)); see the usage sketch after this list.
    One shard corresponds to one storage key, but can contain multiple chunks:
    [sharding diagram]
  • ensures that this setting is persisted in the array.json config and loaded when opening an array again, adding two entries:
    • "shard_format": "indexed" specifies the binary format of the shards and allows extending sharding with other formats later
    • "shards": [2, 2] specifies how many chunks are contained in a shard.
  • adds an IndexedShardedStore class that is used to wrap the chunk store when sharding is enabled. This store handles the grouping of multiple chunks into one shard and transparently reads and writes them via the inner store in the binary format specified below. The original store API does not need to be adapted; it simply stores shards instead of chunks, which the IndexedShardedStore translates back into chunks.
  • adds a small script sharding_test.py for demonstration purposes; it is not meant to be merged but serves to illustrate the changes.

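For illustration, here is a minimal usage sketch based on the call quoted in the first bullet. The module-level zeros helper, the store setup, and the trailing keyword arguments are my assumptions, since the PR description elides them; they are not necessarily the exact zarrita API.

```python
import zarrita

# Hypothetical store setup; the PR description abbreviates this part.
store = zarrita.FileSystemStore("file://./example.zarr")

# The call from the PR description: a 20x3 array with 3x3 chunks,
# where each shard groups 2x2 chunks under a single storage key.
arr = zarrita.zeros((20, 3), chunks=(3, 3), shards=(2, 2), store=store)

# The persisted array.json then contains, among the usual metadata,
# the two additional entries described above:
#   "shard_format": "indexed"
#   "shards": [2, 2]
```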
The currently implemented file format is still up for discussion. It implements the "Format 2" that @jbms describes in zarr-developers/zarr-python#876 (comment).

Chunks are written successively into a shard (unused space between them is allowed), followed by an index referencing them.
The index holds an offset/length pair of little-endian uint64 values per chunk; the chunk order in the index is row-major (C) order.
For example, for (2, 2) chunks per shard the index looks like:

| chunk (0, 0)    | chunk (0, 1)    | chunk (1, 0)    | chunk (1, 1)    |
| offset | length | offset | length | offset | length | offset | length |
| uint64 | uint64 | uint64 | uint64 | uint64 | uint64 | uint64 | uint64 |

Empty chunks are denoted by setting both offset and length to 2^64 - 1. The index always has the full shape of all possible chunks per shard, even if some of them lie outside of the array size.
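To make the index layout concrete, here is a sketch of how a reader could decode the trailing index of a shard. The function names and the chunks_per_shard parameter are mine, not code from this PR; offsets are assumed to be relative to the start of the shard.

```python
import numpy as np

MAX_UINT64 = 2**64 - 1  # sentinel marking an empty chunk (both offset and length)

def decode_shard_index(shard_bytes: bytes, chunks_per_shard=(2, 2)):
    """Decode the trailing offset/length index of a shard.

    Returns an array of shape chunks_per_shard + (2,), where [..., 0] is the
    offset and [..., 1] the length of each chunk, in row-major (C) order.
    """
    n_chunks = int(np.prod(chunks_per_shard))
    index_bytes = shard_bytes[-(n_chunks * 16):]      # 2 x uint64 per chunk
    index = np.frombuffer(index_bytes, dtype="<u8")   # little-endian uint64
    return index.reshape(chunks_per_shard + (2,))

def read_chunk(shard_bytes: bytes, chunk_coord, chunks_per_shard=(2, 2)):
    """Return the encoded bytes of one chunk inside a shard, or None if empty."""
    offset, length = decode_shard_index(shard_bytes, chunks_per_shard)[chunk_coord]
    if offset == MAX_UINT64 and length == MAX_UINT64:
        return None  # chunk was never written
    return shard_bytes[int(offset):int(offset) + int(length)]
```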

For the default order of the actual chunk content in a shard, I'd propose C/F/Morton order, but this can easily be changed and customized, since any order can be read.
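For completeness, the writer side of the same format, again only a sketch of the layout described above with names of my own choosing: chunks are concatenated (C order here, though any order would be readable) and the offset/length index is appended.

```python
import itertools
import numpy as np

MAX_UINT64 = 2**64 - 1

def encode_shard(chunks: dict, chunks_per_shard=(2, 2)) -> bytes:
    """Pack encoded chunks plus a trailing offset/length index into one shard.

    `chunks` maps chunk coordinates within the shard, e.g. (0, 1), to the
    already-encoded chunk bytes; missing coordinates become empty entries.
    """
    index = np.full(chunks_per_shard + (2,), MAX_UINT64, dtype="<u8")
    payload = bytearray()
    # Iterate chunk coordinates in row-major (C) order.
    for coord in itertools.product(*(range(n) for n in chunks_per_shard)):
        data = chunks.get(coord)
        if data is None:
            continue  # leave the sentinel (2**64 - 1, 2**64 - 1) in the index
        offset = len(payload)
        index[coord] = (offset, len(data))
        payload += data
    return bytes(payload) + index.tobytes()
```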

@alimanfoo
Owner

Hi @jstriebel, just to say it's great to see this in action.

It might be good to raise an issue in the zarr-specs repo to discuss how to incorporate this capability into the v3 spec, if one doesn't already exist.

@jstriebel
Author

Hi @alimanfoo, thanks for the hint! I just added zarr-developers/zarr-specs#127 for this discussion, hope this is the right place and approach.

@jstriebel
Author

A similar proposal is now formalized as a Zarr Enhancement Proposal, ZEP 2. The basic sharding concept and the binary representation are the same; however, it is now formalized as a storage transformer for Zarr v3.

Further feedback is very welcome!

I'll close this PR to consolidate the open sharding threads; please feel free to continue any discussions or add feedback on the PRs mentioned above.

@jstriebel jstriebel closed this Aug 22, 2022