Fixes to the resample docs #3400

Merged
merged 7 commits from fix-resample-docs into pydata:master on Oct 16, 2019

Conversation

@keewis (Collaborator) commented Oct 15, 2019

  • Passes black . && mypy . && flake8
  • Fully documented, including whats-new.rst for all changes and api.rst for new API

The docs of resample fail to mention that resampling only works with datetime-like coords; the only place one could infer this from is the examples (the pandas docs are much more explicit). I'm not sure whether it would be good to also point this out in the function description. Thoughts?
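
To make the point concrete, here is a minimal sketch (toy data, not part of the PR) of what the docs leave implicit: resample only accepts a datetime-like dimension coordinate.

```python
import numpy as np
import pandas as pd
import xarray as xr

# works: "time" is a datetime-like dimension coordinate
da = xr.DataArray(
    np.arange(12),
    dims="time",
    coords={"time": pd.date_range("2019-01-01", periods=12, freq="MS")},
)
quarterly = da.resample(time="QS").mean()

# with a plain integer coordinate the same call fails,
# because "x" is not datetime-like
da_plain = xr.DataArray(np.arange(12), dims="x", coords={"x": np.arange(12)})
# da_plain.resample(x="3")  # raises
```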

@max-sixty (Collaborator)

Great, thanks @keewis

I'm not sure whether it would be good to also point this out in the function description. Thoughts?

Yes, I think that would be a welcome addition, if you want to add that too?

@keewis (Collaborator, Author) commented Oct 15, 2019

I was thinking about something like this:

Handles both downsampling and upsampling. The resampled dimension must be a datetime-like coordinate. If any intervals contain no values from the original object, they will be given the value NaN.

Is there something that works like this, but with non-datetime dimensions?
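
For reference, a small sketch (toy data, my own illustration rather than anything in the PR) of the behaviour the proposed wording describes: when upsampling, intervals that contain no original values come back as NaN.

```python
import pandas as pd
import xarray as xr

da = xr.DataArray(
    [0.0, 1.0, 2.0],
    dims="time",
    coords={"time": pd.date_range("2019-01-01", periods=3, freq="D")},
)

# downsampling: aggregate daily values into 2-day bins
coarse = da.resample(time="2D").mean()

# upsampling: 6-hourly bins with no original value are filled with NaN
fine = da.resample(time="6H").mean()
```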

@dcherian (Contributor)

There is no label-aware equivalent; coarsen is the closest analogue.
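
For anyone landing here later, a rough sketch (toy data, not from this PR) of how coarsen covers the non-datetime case with fixed-size windows rather than labels:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(12.0), dims="x", coords={"x": np.arange(12)})

# block-average every 3 points along "x"; no datetime coordinate needed
coarse = da.coarsen(x=3).mean()
```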

@keewis (Collaborator, Author) commented Oct 15, 2019

I think reindex or interp / interpolate_na might help, but replicating this takes more work. I was hoping to link to the docs of a hypothetical other method that does the same for non-datetime coords.
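
A sketch of what that could look like with reindex / interp on a non-datetime coordinate (hypothetical toy data; the extra work mentioned above, such as choosing the new labels and how to fill them, is left out):

```python
import numpy as np
import xarray as xr

da = xr.DataArray([0.0, 10.0, 20.0], dims="x", coords={"x": [0.0, 1.0, 2.0]})
new_x = np.linspace(0.0, 2.0, 9)

# reindex: new labels with no matching value become NaN, like empty resample bins
upsampled = da.reindex(x=new_x)

# interp: fill the new labels by interpolation instead (requires scipy)
interpolated = da.interp(x=new_x)
```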

@dcherian (Contributor)

@crusaderky The minimum versions CI check is failing:

##[section]Starting: minimum versions policy
==============================================================================
Task         : Bash
Description  : Run a Bash script on macOS, Linux, or Windows
Version      : 3.151.3
Author       : Microsoft Corporation
Help         : https://docs.microsoft.com/azure/devops/pipelines/tasks/utility/bash
==============================================================================
Generating script.
========================== Starting Command Output ===========================
[command]/bin/bash --noprofile --norc /home/vsts/work/_temp/6c2d03fe-5ca8-4c5b-ac96-dfc834f9dd00.sh
Collecting package metadata (current_repodata.json): ...working... done
Solving environment: ...working... done

## Package Plan ##

  environment location: /usr/share/miniconda

  added / updated specs:
    - pyyaml


The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    ca-certificates-2019.8.28  |                0         132 KB
    certifi-2019.9.11          |           py37_0         154 KB
    conda-4.7.12               |           py37_0         3.0 MB
    openssl-1.1.1d             |       h7b6447c_2         2.5 MB
    pyyaml-5.1.2               |   py37h7b6447c_0         179 KB
    ------------------------------------------------------------
                                           Total:         6.0 MB

The following NEW packages will be INSTALLED:

  pyyaml             pkgs/main/linux-64::pyyaml-5.1.2-py37h7b6447c_0

The following packages will be UPDATED:

  ca-certificates                               2019.5.15-0 --> 2019.8.28-0
  certifi                                  2019.6.16-py37_0 --> 2019.9.11-py37_0
  conda                                       4.7.10-py37_0 --> 4.7.12-py37_0
  openssl                                 1.1.1c-h7b6447c_1 --> 1.1.1d-h7b6447c_2



Downloading and Extracting Packages

openssl-1.1.1d       | 2.5 MB    |            |   0% 
openssl-1.1.1d       | 2.5 MB    | ########## | 100% 

ca-certificates-2019 | 132 KB    |            |   0% 
ca-certificates-2019 | 132 KB    | ########## | 100% 

certifi-2019.9.11    | 154 KB    |            |   0% 
certifi-2019.9.11    | 154 KB    | ########## | 100% 

conda-4.7.12         | 3.0 MB    |            |   0% 
conda-4.7.12         | 3.0 MB    | ########## | 100% 

pyyaml-5.1.2         | 179 KB    |            |   0% 
pyyaml-5.1.2         | 179 KB    | ########## | 100% 
Preparing transaction: ...working... done
Verifying transaction: ...working... done
Executing transaction: ...working... done
Analyzing python...
Analyzing numpy...
Analyzing pandas...
Package       Required          Policy            Status
------------- ----------------- ----------------- ------
python        3.6  (2016-12-23) 3.6  (2016-12-23) =
numpy         1.14 (2018-01-09) 1.14 (2018-01-09) =
pandas        0.24 (2019-01-25) 0.24 (2019-01-25) =

# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<

    Traceback (most recent call last):
      File "/usr/share/miniconda/lib/python3.7/site-packages/conda/exceptions.py", line 1074, in __call__
        return func(*args, **kwargs)
      File "/usr/share/miniconda/lib/python3.7/site-packages/conda/cli/main.py", line 84, in _main
        exit_code = do_call(args, p)
      File "/usr/share/miniconda/lib/python3.7/site-packages/conda/cli/conda_argparse.py", line 82, in do_call
        exit_code = getattr(module, func_name)(args, parser)
      File "/usr/share/miniconda/lib/python3.7/site-packages/conda/cli/main_search.py", line 73, in execute
        matches = sorted(SubdirData.query_all(spec, channel_urls, subdirs),
      File "/usr/share/miniconda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 97, in query_all
        result = tuple(concat(executor.map(subdir_query, channel_urls)))
      File "/usr/share/miniconda/lib/python3.7/concurrent/futures/_base.py", line 586, in result_iterator
        yield fs.pop().result()
      File "/usr/share/miniconda/lib/python3.7/concurrent/futures/_base.py", line 432, in result
        return self.__get_result()
      File "/usr/share/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
        raise self._exception
      File "/usr/share/miniconda/lib/python3.7/concurrent/futures/thread.py", line 57, in run
        result = self.fn(*self.args, **self.kwargs)
      File "/usr/share/miniconda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 90, in <lambda>
        package_ref_or_match_spec))
      File "/usr/share/miniconda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 110, in query
        if param.match(prec):
      File "/usr/share/miniconda/lib/python3.7/site-packages/conda/models/match_spec.py", line 237, in match
        if not self._match_individual(rec, field_name, v):
      File "/usr/share/miniconda/lib/python3.7/site-packages/conda/models/match_spec.py", line 242, in _match_individual
        val = getattr(record, field_name)
    AttributeError: 'str' object has no attribute 'name'

`$ /usr/share/miniconda/bin/conda search numpy --info -c defaults -c conda-forge`

  environment variables:
BUILD_REPOSITORY_LOCALPATH=/home/vsts/work/1/s
                 CIO_TEST=<not set>
                    CONDA=/usr/share/miniconda
               CONDA_ROOT=/usr/share/miniconda
                     PATH=/usr/share/miniconda/bin:/usr/share/rust/.cargo/bin:/usr/local/sbin:/u
                          sr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
       REQUESTS_CA_BUNDLE=<not set>
            SSL_CERT_FILE=<not set>

     active environment : None
       user config file : /home/vsts/.condarc
 populated config files : 
          conda version : 4.7.12
    conda-build version : not installed
         python version : 3.7.3.final.0
       virtual packages : 
       base environment : /usr/share/miniconda  (writable)
           channel URLs : https://repo.anaconda.com/pkgs/main/linux-64
                          https://repo.anaconda.com/pkgs/main/noarch
                          https://repo.anaconda.com/pkgs/r/linux-64
                          https://repo.anaconda.com/pkgs/r/noarch
                          https://conda.anaconda.org/conda-forge/linux-64
                          https://conda.anaconda.org/conda-forge/noarch
          package cache : /usr/share/miniconda/pkgs
                          /home/vsts/.conda/pkgs
       envs directories : /usr/share/miniconda/envs
                          /home/vsts/.conda/envs
               platform : linux-64
             user-agent : conda/4.7.12 requests/2.22.0 CPython/3.7.3 Linux/4.15.0-1059-azure ubuntu/16.04.6 glibc/2.23
                UID:GID : 1001:117
             netrc file : None
           offline mode : False


An unexpected error has occurred. Conda has prepared the above report.

Analyzing python...
Analyzing boto3...
Analyzing bottleneck...
Analyzing cartopy...
Analyzing cdms2...
Analyzing cfgrib...
Analyzing cftime...
Analyzing dask...
Analyzing distributed...
Analyzing h5netcdf...
Analyzing h5py...
Analyzing hdf5...
Analyzing iris...
Analyzing lxml...
Analyzing matplotlib...
Analyzing nc-time-axis...
Analyzing netcdf4...
Analyzing numba...
Analyzing numpy...
Analyzing pandas...
Analyzing pseudonetcdf...
Analyzing pydap...
Analyzing pynio...
Analyzing rasterio...
Analyzing scipy...
Analyzing seaborn...
Analyzing toolz...
Analyzing zarr...
Traceback (most recent call last):
  File "ci/min_deps_check.py", line 187, in <module>
    main()
  File "ci/min_deps_check.py", line 175, in main
    rows = [f.result() for f in futures]
  File "ci/min_deps_check.py", line 175, in <listcomp>
    rows = [f.result() for f in futures]
  File "/usr/share/miniconda/lib/python3.7/concurrent/futures/_base.py", line 432, in result
    return self.__get_result()
  File "/usr/share/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
  File "/usr/share/miniconda/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "ci/min_deps_check.py", line 127, in process_pkg
    versions = query_conda(pkg)
  File "ci/min_deps_check.py", line 72, in query_conda
    ["conda", "search", pkg, "--info", "-c", "defaults", "-c", "conda-forge"]
  File "/usr/share/miniconda/lib/python3.7/subprocess.py", line 395, in check_output
    **kwargs).stdout
  File "/usr/share/miniconda/lib/python3.7/subprocess.py", line 487, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['conda', 'search', 'numpy', '--info', '-c', 'defaults', '-c', 'conda-forge']' returned non-zero exit status 1.
##[error]Bash exited with code '1'.
##[section]Finishing: minimum versions policy

@dcherian (Contributor) left a comment

Thanks @keewis

@keewis (Collaborator, Author) commented Oct 16, 2019

I'm not sure why, but this seems to have been a transient error: with the new commit it passes.

@dcherian (Contributor)

Weird. The docstring looks good to me. Feel free to give yourself credit in whats-new; I'll merge after that's done.

@keewis (Collaborator, Author) commented Oct 16, 2019

done

@dcherian (Contributor)

Thanks!

@dcherian dcherian merged commit 1f81338 into pydata:master Oct 16, 2019
@keewis keewis deleted the fix-resample-docs branch October 16, 2019 18:58
@crusaderky (Contributor)

@dcherian that's not my script's fault; the failure comes from the bash command `conda search numpy --info -c defaults -c conda-forge`.
I've never seen that error before... let me know if it happens again.
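
For context, the failing call corresponds to something like the snippet below in ci/min_deps_check.py; this is a sketch reconstructed from the traceback above, not the actual file contents.

```python
import subprocess

def query_conda(pkg: str) -> str:
    # shell out to conda, as shown in the traceback; a non-zero exit status
    # surfaces as subprocess.CalledProcessError in the CI job
    return subprocess.check_output(
        ["conda", "search", pkg, "--info", "-c", "defaults", "-c", "conda-forge"],
        text=True,
    )
```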

@max-sixty (Collaborator)

Thanks @keewis!

dcherian added a commit to dcherian/xarray that referenced this pull request Oct 21, 2019
* upstream/master:
  Whatsnew for pydata#3419 (pydata#3422)
  Revert changes made in pydata#3358 (pydata#3411)
  Python3.6 idioms (pydata#3419)
  Temporarily mark pseudonetcdf-3.1 as incompatible (pydata#3420)
  Fix and add test for groupby_bins() isnan TypeError. (pydata#3405)
  Update where docstring to make return value type more clear (pydata#3408)
  tests for arrays with units (pydata#3238)
  Fixes to the resample docs (pydata#3400)
dcherian added two more commits to dcherian/xarray that referenced this pull request Oct 21, 2019, each merging the same upstream/master commits listed above.

dcherian added a commit to dcherian/xarray that referenced this pull request Oct 22, 2019
commit cfe87e0
Merge: 1c751a6 1f81338
Author: Deepak Cherian <[email protected]>
Date:   Thu Oct 17 01:13:49 2019 +0000

    Merge branch 'master' into fix/groupby-nan

commit 1c751a6
Author: dcherian <[email protected]>
Date:   Wed Oct 16 19:09:19 2019 -0600

    whats-new

commit 71df146
Author: dcherian <[email protected]>
Date:   Wed Oct 16 19:03:22 2019 -0600

    Add NaTs

commit 1f81338
Author: keewis <[email protected]>
Date:   Wed Oct 16 20:54:27 2019 +0200

    Fixes to the resample docs (pydata#3400)

    * add a missing newline to make sphinx detect the code block

    * update the link to the pandas documentation

    * explicitly state that this only works with datetime dimensions

    * also put the datetime dim requirement into the function description

    * add Series.resample and DataFrame.resample as reference

    * add the changes to whats-new.rst

    * move references to the bottom of the docstring

commit 5bf94a8
Author: dcherian <[email protected]>
Date:   Mon Oct 14 13:25:32 2019 -0600

    Drop nans in grouped variable.
dcherian added a commit that referenced this pull request Oct 22, 2019
commit b0c336f  Whatsnew for #3419 (#3422)
commit 2984415  Revert changes made in #3358 (#3411)
commit 3c462b9  Python3.6 idioms (#3419)
commit 9886e3c  Temporarily mark pseudonetcdf-3.1 as incompatible (#3420)
commit 0f7ab0e  Fix and add test for groupby_bins() isnan TypeError. (#3405)
commit 6cd50cc  Update where docstring to make return value type more clear (#3408)
commit 55b1ac0  tests for arrays with units (#3238)
commit 1f81338  Fixes to the resample docs (#3400)
commit 3f9069b  Revert to dev version
commit 62943e2  Release v0.14.0
commit 30472ec  updates for 0.14 release [black only]
commit 4519843  updates for 0.14 release
commit 4f5ca73  Make concat more forgiving with variables that are being merged. (#3364)
commit ae1d8c7  Fix documentation typos (#3396)
commit 863e490  OrderedDict --> dict, some python3.5 cleanup too (#3389)
commit 6851e3e  Annotate LRUCache (#3395)
commit 4c05d38  BUG: overrides to a dimension coordinate do not get aligned (#3393)
commit 3f29551  map_blocks (#3276)
commit 291cb80  Add groupby.dims & Fix groupby reduce for DataArray (#3338)
commit 3f0049f  Speed up isel and __getitem__ (#3375)
commit 132733a  Fix concat bug when concatenating unlabeled dimensions. (#3362)
commit 6fb272c  Rolling minimum dependency versions policy (#3358)
commit 3e2a754  added geocube and rioxarray to related projects (#3383)
commit 4254b4a  Lint (#3373)
commit 283b4fe  Docs/more fixes (#2934)
commit dfdeef7  Explicitly keep track of indexes with merging (#3234)
commit 86fb71d  groupby repr (#3344)
commit dd2b803  Remove setting of universal wheels (#3367)
commit 21705e6  Revisit # noqa annotations (#3359)
commit fb575eb  Fix codecov.io upload on Windows (#3360)
commit 1ab2279  Add how do I ... section (#3357)
commit bd1069b  Add glossary to documentation (#3352)
commit b51683f  Documentation improvements (#3328)
commit f3c7da6  Remove `complex.nc` from built docs (#3353)
commit 6ece6a1  Fix DataArray.to_netcdf type annotation (#3325)
commit 16fdac9  CI test suites with pinned minimum dependencies (#3346)
commit ea101f5  Bugfix/plot accept coord dim (#3345)
commit 85c9a40  CI environments overhaul (#3340)

dcherian added a commit to dcherian/xarray that referenced this pull request Oct 22, 2019, merging the same upstream/master commits listed above.
dcherian added a commit to dcherian/xarray that referenced this pull request Oct 24, 2019, containing the same commits (cfe87e0, 1c751a6, 71df146, 1f81338, 5bf94a8) as the Oct 22 commit above.