---
layout: default
title: ZEP0007
description: Strings
parent: draft ZEPs
nav_order: 7
---

# ZEP — string and variable-length binary data types

---

```
Author: Isaac Virshup (isaac.virshup@helmholtz-muenchen.de)

Status: Draft

Type: Specification

Created:

Discussion: (link to zarr-developers post for discussion)

Resolution: (required for Accepted | Rejected | Withdrawn)
```

## Abstract

This ZEP defines string and variable-length binary data types. It describes the binary format for the chunks of these arrays.

## Motivation and Scope

> This section describes the need for the proposed change. It should describe the existing problem, who it affects, what it is trying to solve, and why.
> This section should explicitly address the scope of and key requirements for the proposed change.

Strings are a commonly used data type in many analytic fields. They are frequently used as labels of array dimensions or for categorical variables.

That zarr v3 does not support a string encoding is a loss of features (even ones defined by convention) from zarr v2. Without support for strings, zarr v3 does not have the features required by downstream projects like netcdf, xarray, and anndata.

The encoding for strings in this specification is `utf-8`, as that is the most commonly used encoding.

### Potential additions

* `LargeString`, `LargeBinary` data types where the offsets are 64 bit
    * We could also consider making these types parametric on their offset, where the data_type of the offset buffer is specified in the [configuration](https://zarr-specs.readthedocs.io/en/latest/v3/core/v3.0.html#id9) field
* Harder: `List[T]`, `LargeList[T]` (where `T` is the inner dtype, e.g. `List[Int32]`) data types
    * probably leave to a separate ZEP

:::info

**List type**

One could imagine having:

```
List[BufferDtype, ValueDtype]
```

where `String` is a specialized case of `List[Int32, UInt8]`. The buffer dtype and value type could be expressed as `configuration` values of the `data_type` (a hypothetical sketch follows below this box).

However, it may be easier to just specialize on the two most common cases, and leave this for further discussion. Cases which make this more complex:

* `List[List[data_type]]`
* `List[extension_type]`

These cases make it complex enough that I think this is a non-goal.
:::
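To make the parametric idea concrete, here is a purely hypothetical sketch of how such a type might be spelled in array metadata, written as a Python dict for brevity. The `list` name and the `offset_data_type` / `value_data_type` configuration keys are illustrative only and are not part of this proposal.

```python
# Hypothetical, NOT part of this proposal: a parametric list data type whose
# offset and value dtypes are carried in the `configuration` object.
parametric_list = {
    "data_type": {
        "name": "list",
        "configuration": {
            "offset_data_type": "int32",   # int64 would give a LargeList
            "value_data_type": "uint8",
        },
    }
}
# Under such a scheme, the proposed "string" type is shorthand for a list of
# uint8 values with int32 offsets whose data buffer is interpreted as utf-8.
```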
### Non goals

This proposal explicitly excludes complex nested array structures (e.g. awkward array, sparse arrays).

While these have similar structures (offsets + data buffers), they are more complex and more domain specific. Text and variable-length byte data are extremely common, and are supported by similar formats such as `n5`, `hdf5`, `tiledb`, and `arrow`.

Additionally, it can be useful to access the component arrays directly in these more complex structures, e.g. retrieving the structure of a sparse array, or accessing a single field from a ragged collection of structs. This would suggest an approach where the offset and data arrays are not bundled into a single buffer, but stored separately.

This would need each chunk of a zarr Array to be composed of multiple buffers in a store – and is not compatible with the current zarr data model.

## Usage and Impact

> This section describes how users of Zarr will use the new features, spec changes or a new process described in this ZEP. It should be comprised mainly of code examples that wouldn't be possible
> without acceptance and implementation of this ZEP, as well as the impact the proposed changes would have on the ecosystem. This section should be written from
> the perspective of the users of Zarr, and the benefit it will provide them; as such, it should include implementation details only if necessary to explain the
> functionality.

```python
g = zarr.group()
g.create_array("string_array", data=["the", "quick", "brown", "fox"])
g["string_array"][:]
# array(['the', 'quick', 'brown', 'fox'], dtype=object)  # The return type is flexible
```

### String use cases

* Text data is quite common; it can be labels for dimensions or semantic data
    * Text is variable length

### Binary

* Variable-length binary data is a very flexible encoding
    * This allows zarr v2 use cases, like serialized objects (e.g. the [pickle codec](https://numcodecs.readthedocs.io/en/stable/pickles.html))

## Backward Compatibility

> This section describes how the ZEP breaks backward compatibility.
>
> Its purpose is to provide a high-level summary to users who are not interested in detailed technical discussion, but may have opinions around, e.g., usage and
> impact.

This does not break any backwards compatibility with zarr v3.

One concern may be that it will break particular implementations of zarr v3, since it introduces a new concept of variable-length types. But this is an extension, so it can be adopted gradually.

There may be issues with codecs that accept or return multidimensional arrays, since the in-memory format of string arrays may not be consistent across languages.

## Detailed description

> This section should provide a detailed description of the proposed change. It should include examples of how the new functionality would be used, intended
> use-cases and pseudo-code illustrating its use.

The proposed formats borrow heavily from `arrow`'s variable-length string and binary data types.

This ZEP interprets the definition of `data_type` as a schema for each chunk after all codecs have been applied.
This means `data_types` could be thought of as the "final codec".

The data types being proposed here are:

```json
{
    "data_type": {"name": "string"}
}
```

```json
{
    "data_type": {"name": "binary"}
}
```

These have a similar buffer layout. Each chunk is composed of:

* `chunk_size` + 1 int32 offsets (int64 for `LargeString`, `LargeBinary` types)
* zero padding to the nearest 64 bytes (for alignment of the data buffer)
* the data buffer

:::info
**Padding**

[Reasoning for padding the buffers can be found in the documentation for the physical memory layout of arrow arrays](https://arrow.apache.org/docs/format/Columnar.html#buffer-alignment-and-padding).
:::

* `fallback` for `string` would be `"binary"`

### How to decode a chunk

```python
from functools import reduce
from operator import mul

chunk_buffer: memoryview = ...
chunk_shape: tuple[int, ...] = ...
chunk_size: int = reduce(mul, chunk_shape)

# buffer start points are aligned to 64 bytes
def round_up_to_multiple(x, base=64):
    n, rem = divmod(x, base)
    if rem > 0:
        n += 1
    return n * base

offset_end_idx = (chunk_size + 1) * 4
data_start_idx = round_up_to_multiple(offset_end_idx)

offset_buffer = chunk_buffer[:offset_end_idx]
data_buffer = chunk_buffer[data_start_idx:]
```
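A useful property of this layout: once the offsets are decoded, any single element can be sliced out of the chunk without scanning the data buffer. A minimal sketch, reusing `offset_buffer` and `data_buffer` from the block above (error handling omitted):

```python
import numpy as np

offsets = np.frombuffer(offset_buffer, dtype=np.int32)

def get_element(i: int, *, is_string: bool = True):
    # Element i occupies data_buffer[offsets[i]:offsets[i + 1]]
    raw = bytes(data_buffer[offsets[i]:offsets[i + 1]])
    return raw.decode("utf-8") if is_string else raw
```

This per-element random access (within a chunk, or a memory-mapped chunk) is exactly what the termination-character layout discussed under Alternatives gives up.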
Interpreting these chunks as a full array is a little more complicated, since it depends on the kind of array we are returning.

Numpy is going to be the worst case, since it doesn't do variable-length data:

```python
import numpy as np

offsets = np.frombuffer(offset_buffer, dtype=np.int32)
result = np.empty(chunk_size, dtype=object)
for i in range(len(offsets) - 1):
    # For binary:
    result[i] = bytes(data_buffer[offsets[i]:offsets[i + 1]])
    # For string:
    result[i] = str(data_buffer[offsets[i]:offsets[i + 1]], "utf-8")
```

However, this is zero copy using `pyarrow`:

```python
import pyarrow as pa

# For binary
pa.Array.from_buffers(
    pa.binary(),
    chunk_size,
    [None, pa.py_buffer(offset_buffer), pa.py_buffer(data_buffer)],
)
# For string
pa.Array.from_buffers(
    pa.string(),
    chunk_size,
    [None, pa.py_buffer(offset_buffer), pa.py_buffer(data_buffer)],
)
```

And also for [awkward array](https://awkward-array.org/doc/main/):

```python
import awkward as ak
import numpy as np

offset_array = np.frombuffer(chunk_buffer[:offset_end_idx], dtype=np.int32)
data_array = np.frombuffer(data_buffer, dtype=np.uint8)

# For binary
ak.Array(ak.contents.ListOffsetArray(
    ak.index.Index(offset_array),
    ak.contents.NumpyArray(data_array, parameters={"__array__": "byte"}),
    parameters={"__array__": "bytestring"},
))

# For strings
ak.Array(ak.contents.ListOffsetArray(
    ak.index.Index(offset_array),
    ak.contents.NumpyArray(data_array, parameters={"__array__": "char"}),
    parameters={"__array__": "string"},
))
```

### Writing

#### from numpy:

```python
import numpy as np

# For bytes
encoded = list(array)
# For strings (offsets count bytes, not characters)
encoded = [x.encode("utf-8") for x in array]

offsets = np.zeros(array.size + 1, dtype=np.int32)
offsets[1:] = np.cumsum([len(x) for x in encoded])
data_buffer = b"".join(encoded)

offset_buffer = offsets.view(np.uint8)
data_start_idx = round_up_to_multiple(len(offset_buffer))

chunk_buffer = np.zeros(data_start_idx + len(data_buffer), dtype=np.uint8)
chunk_buffer[:len(offset_buffer)] = offset_buffer
chunk_buffer[data_start_idx:] = np.frombuffer(data_buffer, dtype=np.uint8)
```
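The write path can also be sketched starting from a `pyarrow` array, whose buffers already have the required layout. This is a minimal sketch, not normative: it assumes the array has a zero offset and no nulls, and reuses `round_up_to_multiple` from the decoding section.

```python
import numpy as np
import pyarrow as pa

arr = pa.array(["the", "quick", "brown", "fox"], type=pa.string())
assert arr.offset == 0 and arr.null_count == 0  # simplifying assumptions

_validity, offsets_buf, data_buf = arr.buffers()
offsets = np.frombuffer(offsets_buf, dtype=np.int32, count=len(arr) + 1)
offset_bytes = offsets.tobytes()
data_bytes = data_buf.to_pybytes()[:offsets[-1]]

data_start_idx = round_up_to_multiple(len(offset_bytes))
chunk_buffer = np.zeros(data_start_idx + len(data_bytes), dtype=np.uint8)
chunk_buffer[:len(offset_bytes)] = np.frombuffer(offset_bytes, dtype=np.uint8)
chunk_buffer[data_start_idx:] = np.frombuffer(data_bytes, dtype=np.uint8)
```

A real implementation would additionally need to handle sliced arrays (non-zero `offset`) and arrays containing nulls, which are ignored here.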
## Related Work

> This section should list relevant and/or similar technologies, possibly in other libraries. It does not need to be comprehensive, just list the major examples
> of prior and relevant art.

* Arrow string type ([layout](https://arrow.apache.org/docs/format/Columnar.html#variable-size-list-layout))
* Awkward array strings

## Implementation

> This section lists the major steps required to implement the ZEP. Where possible, it should be noted where one step is dependent on another, and which steps may
> be optionally omitted. Where it makes sense, each step should include a link to related pull requests as the implementation progresses.
> Any pull requests or development branches containing work on this ZEP should be linked to from here. (A ZEP does not need to be implemented in a single pull request if
> it makes sense to implement it in discrete phases).

* This is dependent on ZEP1
* I don't see major barriers to an initial implementation
* There will be barriers to an efficient implementation in zarr-python
    * E.g. efficiently returning a pyarrow or awkward array does not fit with the current model
    * However, it should be possible by accessing decoded chunks directly

## Alternatives

> If there were any alternative solutions to solving the same problem, they should be discussed here, along with a justification for the chosen approach.

### Different buffer layout: some termination character

*Pros*

* All information needed for a string is contiguous in memory
* Current implementation (maybe limited to zarr-python?)
* Since offsets are not cumulative, fewer restrictions on the size of the buffer

*Cons*

* No random access within the chunk buffer (or a memory-mapped chunk)
* Difficult to recover from a corrupted buffer
* No zero-copy interpretation (AFAIK)

### Multiple arrays (e.g. `awkward.to_buffers`)

*Pros*

* Potential for on-disk random access via two requests

*Cons*

* Does not fit the zarr data_type model. A chunk is no longer a single buffer.

### Utf8View-like

Still investigating this, but it is a new type being introduced to arrow for representing strings.

* https://lists.apache.org/thread/w88tpz76ox8h3rxkjl4so6rg3f1rv7wt
* https://lists.apache.org/thread/49qzofswg1r5z7zh39pjvd1m2ggz2kdq
* PR: https://github.com/apache/arrow/pull/35628

> PR #0 [3] adds Utf8View and BinaryView types to the C++ implementation and to the IPC format. For the purposes of IPC raw pointers are not used. Instead, each view contains a pair of 32 bit unsigned integers which encode the index of a character buffer (string view arrays may consist of a variable number of such buffers) and the offset of a view's data within that buffer respectively. Benefits of this substitution include: - This makes explicit the guarantee that lifetime of all character data is equal to that of the array which views it, which is critical for confident consumption across an interface boundary. - As with other types in the arrow format, such arrays are serializable and venue agnostic; directly usable in shared memory without modification. - Indices and offsets are easily validated.

The offset array is now larger per record and there are a variable number of data buffers (with 0 being possible). From the PR:

```
* Short strings, length <= 12
  | Bytes 0-3  | Bytes 4-15                            |
  |------------|---------------------------------------|
  | length     | data (padded with 0)                  |

* Long strings, length > 12
  | Bytes 0-3  | Bytes 4-7  | Bytes 8-11 | Bytes 12-15 |
  |------------|------------|------------|-------------|
  | length     | prefix     | buf. index | offset      |
```

The big benefits of this approach are:

* Out-of-order construction
    * Since you can refer to new buffers, have different offsets
    * Simple example: use four threads per chunk, each thread creates its own buffer
* Using length + offset
    * You don't have to know the sizes of all strings up front
    * Some operations are now in place
* Comparisons / sorting
    * These benefit from the inlined prefix

#### Why not

* Arrow supports this format alongside the format specified here, so this could be supported in addition.
* The benefits largely apply to in-memory operations, e.g. operations within a chunk.
* It's significantly more complicated to implement, especially efficiently.

### Unsigned integers for offsets

In theory, one could use unsigned integers for offsets. However, I would propose we use signed integers.

* Frequently, one needs to do arithmetic on the buffer offsets. It's much safer to do this with signed numbers (see the short illustration below this list).
* Arrow explicitly uses signed integers for the offsets, and a big advantage of this proposal is binary compatibility with Arrow.
* Most data should work with 31 bits (e.g. chunk sizes of ~2 gigabytes). If it doesn't, a `LargeString` / `LargeBinary` type with int64 offsets can be used.
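To make the first point concrete, a small illustration (not part of the proposal) of computing element lengths from the offsets:

```python
import numpy as np

offsets = np.array([0, 3, 8, 13, 16], dtype=np.int32)
lengths = np.diff(offsets)        # array([3, 5, 5, 3], dtype=int32)

# With unsigned offsets, an off-by-one in the subtraction silently wraps
# around instead of producing a negative value that can be detected:
unsigned = offsets.astype(np.uint32)
unsigned[:-1] - unsigned[1:]      # array([4294967293, 4294967291, 4294967291, 4294967293], dtype=uint32)
```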
## Discussion

> This section should have links related to any discussion regarding the ZEP. It could be GitHub issues and/or discussions. (The links to discussions in the past,
> if any, go in this section.)

* https://github.com/zarr-developers/zarr-specs/issues/83

## References and Footnotes

> Each ZEP must either be explicitly labelled as placed in the public domain (see this ZEP as an example) or licensed under the
> [Open Publication License](https://www.opencontent.org/openpub/).

## Copyright

This document has been placed in the public domain.