[FEATURE]: Read from/write to several NetCDF4 groups with a single file open/close operation #6174
Comments
If you've read through all of #4118 you will have seen that there is a prototype package providing a nested data structure which can handle groups:

```python
from datatree import DataTree

dt = DataTree.from_dict(ds_dict)
dt.to_netcdf('filepath.nc')
```

(Here, if you want groups within groups, the keys in the dictionary should be specified like filepaths, e.g. `'group1/subgroup1'`.)

Reading the file back in works similarly:

```python
dt = open_datatree('filepath.nc')
```

To extract all the groups as individual datasets, you can recreate the dictionary of datasets like this:

```python
ds_dict = {node.pathstr: node.ds for node in dt.subtree}
```

Is your solution noticeably faster? We (@jhamman and I) haven't really thought about the speed of DataTree I/O yet, preferring to just make something simple which works for now. The current I/O code for DataTree is here. Despite that project only being a prototype, it is still probably the best solution to your problem that we currently have (at least the neatest). If you are interested in trying it out and reporting any problems, that would be greatly appreciated!

EDIT: The idea discussed here might also be of interest to you.
Thanks for your quick response, Tom! I'm sure that DataTree is a really neat solution for most people working with hierarchically structured data. In my case, we are talking about a very unusual application of the NetCDF4 groups feature: we store literally thousands of very small NetCDF datasets in a single file. A file containing 3000 datasets is typically not larger than 100 MB. With that setup, the I/O performance is critical. Opening and closing the file on each group read/write is very, very bad. On our cluster this means that writing that 100 MB file takes 10 hours with your DataTree implementation, and 30 minutes with my helper functions. For reading, the effect is smaller, but still noticeable. So, my request is really about the I/O performance, and I don't need a full-fledged hierarchical data management API in xarray for that.
Ah - thanks for the clarification as to the context @tovogt! That's fair enough.

So are you asking if:

EDIT: Tagging @alexamici / @aurghs for their backends expertise + interest in DataTree
When I first posted this issue, I thought the best solution would be to just implement my proposed helper functions as part of the official xarray API. I don't think our project would add DataTree as a new dependency just for this, as long as we have a very easy and viable solution of our own. But now I have a new idea.

First, I noticed that the NetCDF4 backend acquires its file handle through a `CachingFileManager` (xarray/backends/netCDF4_.py, lines 379 to 381 in 0ffb0f4). That means the manager already ensures that the same file handle is re-used in subsequent `to_netcdf` operations on the same file, unless it's closed in the meantime. Closing is managed here (lines 1072 to 1094 in 0ffb0f4).

It's a bit opaque when closing is actually triggered in practice - especially if you only look at the current docstrings. I found that, in fact, setting `compute=False` in `to_netcdf` will prevent the closing until you explicitly call `compute` on the returned object:

```python
for name, ds in zip(ds_names, ds_list):
    delayed = ds.to_netcdf(path, group=name, compute=False)
    delayed.compute()
```

If this were communicated more transparently in the docstrings, it would bring us a big step closer to the solution of this issue 🙂

Apart from that, there is only one problem left: getting a full list of all groups contained in a NetCDF4 file, so that we can read them all in. In DataTree, you fall back to using the NetCDF4 (or h5netcdf) API directly for that purpose:
FYI the plan with DataTree is to eventually integrate the work upstream into xarray, so no new dependency would be required at that point. That might take a while however.
That's good at least! Do you have any suggestions for where the docs should be improved? PRs are of course always welcome too 😁
I agree, and would be open to a function like this (even if DataTree eventually renders it redundant). It's definitely an omission on our part that xarray still doesn't provide an easy way to do this - I've found myself wanting to easily see all the groups multiple times. However, my understanding is that it's slightly tricky to implement, though suggestions/corrections are welcome!
Is it really that difficult to get a list of groups, though? I've been testing a backend engine that merges many groups into one dataset (dims/coords/variables renamed slightly to avoid duplicate names until they've been interpolated together). Getting the groups is about the first thing you have to do; the code looks something like this:

```python
>>> f = h5py.File('foo.hdf5', 'w')
>>> f.name
'/'
>>> list(f.keys())
[]
```

https://docs.h5py.org/en/stable/high/group.html

Sure, it can be quite tiresome to navigate the backend engines and third-party modules in xarray to add this. But most of them use h5py or something quite similar at their core, so it shouldn't be THAT bad. For example, one could add another method that retrieves the groups in a quick and easy way here: xarray/xarray/backends/common.py, lines 356 to 360 in c541237
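For nested files, h5py can walk the whole hierarchy with `visititems`. A minimal sketch of that approach (`list_h5_groups` is an illustrative name, not an existing xarray or h5py function):

```python
import h5py

def list_h5_groups(path):
    """Collect the paths of all groups in an HDF5/NetCDF4 file."""
    groups = ["/"]

    def collect(name, obj):
        # visititems calls this for every object in the file; keep only groups
        if isinstance(obj, h5py.Group):
            groups.append("/" + name)

    with h5py.File(path, "r") as f:
        f.visititems(collect)
    return groups
```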
It's not at all tricky to implement the listing of groups in a NetCDF4 file, at least not for the "netcdf4" engine. The code for that is in my OP above:

```python
def _xr_nc4_groups_from_store(store):
    """List all groups contained in the given NetCDF4 data store

    Parameters
    ----------
    store : xarray.backends.NetCDF4DataStore

    Returns
    -------
    list of str
    """
    def iter_groups(ds, prefix=""):
        groups = [""]
        for group_name, group_ds in ds.groups.items():
            groups.extend([f"{prefix}{group_name}{subgroup}"
                           for subgroup in iter_groups(group_ds, prefix="/")])
        return groups

    with store._manager.acquire_context(False) as root:
        return iter_groups(root)
```
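For illustration, calling it could look like this (`NetCDF4DataStore.open` is the standard way to obtain such a store):

```python
import xarray as xr

store = xr.backends.NetCDF4DataStore.open("filepath.nc", mode="r")
try:
    groups = _xr_nc4_groups_from_store(store)
finally:
    store.close()
```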
Here is my PR for the docstring improvements: #6187
Have you seen `save_mfdataset`? In principle, it was designed for exactly this sort of thing.
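For reference, a sketch of what that call might look like for this use case (note that `mode="w"` is rejected when several datasets share one path, so append mode is needed):

```python
import xarray as xr

# one file, one group per dataset; save_mfdataset raises
# "cannot use mode='w' when writing multiple datasets to the same path",
# so append mode is required here
xr.save_mfdataset(
    ds_list,
    paths=[path] * len(ds_list),
    groups=ds_names,
    mode="a",
)
```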
Thanks for the hint! Unfortunately, it already says in the docstring that "it is no different than calling to_netcdf repeatedly". And I explained in my OP that this would cause repeated file open/close operations - which is the whole point of this issue. Furthermore, when using `mode="w"`, `save_mfdataset` refuses to write multiple datasets to the same path.

But when using `mode="a"`, every dataset is still written with its own open/close cycle. However, it might still be the way to go API-wise. So, when talking about the solution of this issue, we could aim at fixing `save_mfdataset` so that it opens and closes each file only once.
FYI I think your
Is it possible for multiple writers to safely write to different groups of a netCDF file at once? This could be done with zarr (definitely with icechunk), in which case this error could be relaxed.
This seems reasonable.
Is your feature request related to a problem?
I know that there is a big discussion going on in #4118 about organizing hierarchies of datasets within xarray's data structures. But this issue is supposed to address only a comparably simple aspect of this.
Suppose that you have a list `ds_list` of `xarray.Dataset` objects with different dimensions etc., and you want to store them all in one NetCDF4 file by using the `group` feature introduced in NetCDF4. The group name of each dataset is stored in `ds_names`. Obviously, you can do something like this (see the sketch below). However, this is really slow when you have many (hundreds or thousands of) small datasets, because the file is opened and closed in every iteration.
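A minimal sketch of that loop, assuming the netcdf4 engine (the first iteration uses `mode="w"` to create the file, later iterations append to it):

```python
for i, (name, ds) in enumerate(zip(ds_names, ds_list)):
    # every to_netcdf call opens and closes the target file again
    ds.to_netcdf(path, group=name, mode="w" if i == 0 else "a")
```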
Describe the solution you'd like
I would like to have a function `xr.to_netcdf` that writes a list (or a dictionary) of datasets to a single NetCDF4 file with a single open/close operation. Ideally, there should also be a way to read many datasets at once from a single NetCDF4 file using `xr.open_dataset`.

Describe alternatives you've considered

Currently, I'm using the following read/write functions to achieve the same:
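A rough sketch of that idea, built on xarray's netCDF4 backend so the file handle is opened only once (`write_netcdf4_groups` and `read_netcdf4_groups` are illustrative names, and `dump_to_store` is a semi-private xarray function, so treat this as a sketch rather than a supported API):

```python
import netCDF4
import xarray as xr
from xarray.backends import NetCDF4DataStore
from xarray.backends.api import dump_to_store

def write_netcdf4_groups(path, ds_dict):
    # open the target file once, then dump every dataset into its own group
    # (assumes in-memory datasets; dask-backed writes would need extra care)
    with netCDF4.Dataset(path, mode="w") as nc:
        for group, ds in ds_dict.items():
            store = NetCDF4DataStore(nc, group=group)
            dump_to_store(ds, store)

def read_netcdf4_groups(path, group_names):
    # open the source file once and load every group eagerly,
    # before the shared file handle is closed again
    with netCDF4.Dataset(path, mode="r") as nc:
        return {
            group: xr.open_dataset(NetCDF4DataStore(nc, group=group)).load()
            for group in group_names
        }
```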
Additional context
No response