Merge branch 'main' into groupby-shuffle
* main:
  Revise (pydata#9366)
  Fix rechunking to a frequency with empty bins. (pydata#9364)
  whats-new entry for dropping python 3.9 (pydata#9359)
  drop support for `python=3.9` (pydata#8937)
  Revise (pydata#9357)
  try to fix scheduled hypothesis test (pydata#9358)
dcherian committed Aug 14, 2024
2 parents fafb937 + abd627a commit 939db9a
Showing 83 changed files with 310 additions and 327 deletions.
8 changes: 4 additions & 4 deletions .github/workflows/ci-additional.yaml
@@ -139,15 +139,15 @@ jobs:
fail_ci_if_error: false

mypy39:
name: Mypy 3.9
name: Mypy 3.10
runs-on: "ubuntu-latest"
needs: detect-ci-trigger
defaults:
run:
shell: bash -l {0}
env:
CONDA_ENV_FILE: ci/requirements/environment.yml
PYTHON_VERSION: "3.9"
PYTHON_VERSION: "3.10"

steps:
- uses: actions/checkout@v4
@@ -254,7 +254,7 @@ jobs:
fail_ci_if_error: false

pyright39:
name: Pyright 3.9
name: Pyright 3.10
runs-on: "ubuntu-latest"
needs: detect-ci-trigger
if: |
@@ -267,7 +267,7 @@ jobs:
shell: bash -l {0}
env:
CONDA_ENV_FILE: ci/requirements/environment.yml
PYTHON_VERSION: "3.9"
PYTHON_VERSION: "3.10"

steps:
- uses: actions/checkout@v4
6 changes: 3 additions & 3 deletions .github/workflows/ci.yaml
@@ -47,15 +47,15 @@ jobs:
matrix:
os: ["ubuntu-latest", "macos-latest", "windows-latest"]
# Bookend python versions
python-version: ["3.9", "3.12"]
python-version: ["3.10", "3.12"]
env: [""]
include:
# Minimum python version:
- env: "bare-minimum"
python-version: "3.9"
python-version: "3.10"
os: ubuntu-latest
- env: "min-all-deps"
python-version: "3.9"
python-version: "3.10"
os: ubuntu-latest
# Latest python version:
- env: "all-but-dask"
2 changes: 1 addition & 1 deletion .github/workflows/hypothesis.yaml
@@ -20,7 +20,7 @@ jobs:
runs-on: ubuntu-latest
if: |
github.repository == 'pydata/xarray'
&& (github.event_name == 'push' || github.event_name == 'pull_request')
&& (github.event_name == 'push' || github.event_name == 'pull_request' || github.event_name == 'schedule')
outputs:
triggered: ${{ steps.detect-trigger.outputs.trigger-found }}
steps:
2 changes: 1 addition & 1 deletion ci/requirements/bare-minimum.yml
@@ -3,7 +3,7 @@ channels:
- conda-forge
- nodefaults
dependencies:
- python=3.9
- python=3.10
- coveralls
- pip
- pytest
2 changes: 1 addition & 1 deletion ci/requirements/min-all-deps.yml
@@ -7,7 +7,7 @@ dependencies:
# Run ci/min_deps_check.py to verify that this file respects the policy.
# When upgrading python, numpy, or pandas, must also change
# doc/user-guide/installing.rst, doc/user-guide/plotting.rst and setup.py.
- python=3.9
- python=3.10
- array-api-strict=1.0 # dependency for testing the array api compat
- boto3=1.26
- bottleneck=1.3
2 changes: 1 addition & 1 deletion doc/getting-started-guide/installing.rst
@@ -6,7 +6,7 @@ Installation
Required dependencies
---------------------

- Python (3.9 or later)
- Python (3.10 or later)
- `numpy <https://www.numpy.org/>`__ (1.23 or later)
- `packaging <https://packaging.pypa.io/en/latest/#>`__ (23.1 or later)
- `pandas <https://pandas.pydata.org/>`__ (2.0 or later)
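Since the floor above moves from Python 3.9 to 3.10, a quick interpreter check can confirm an environment qualifies before upgrading. A minimal sketch using only the standard library; the ``(3, 10)`` constant simply mirrors the requirement listed above:

import sys

MIN_PYTHON = (3, 10)  # minimum taken from the requirement above

if sys.version_info < MIN_PYTHON:
    raise RuntimeError(
        f"this xarray release needs Python {'.'.join(map(str, MIN_PYTHON))} or newer, "
        f"found {sys.version.split()[0]}"
    )
print("Python version OK:", sys.version.split()[0])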
2 changes: 1 addition & 1 deletion doc/user-guide/computation.rst
@@ -482,7 +482,7 @@ every 2 points along ``x`` dimension,
da.coarsen(time=7, x=2).mean()
:py:meth:`~xarray.DataArray.coarsen` raises an ``ValueError`` if the data
:py:meth:`~xarray.DataArray.coarsen` raises a ``ValueError`` if the data
length is not a multiple of the corresponding window size.
You can choose ``boundary='trim'`` or ``boundary='pad'`` options for trimming
the excess entries or padding ``nan`` to insufficient entries,
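The excerpt above describes the error case and the two escape hatches; a short runnable illustration of ``boundary='trim'`` and ``boundary='pad'`` on data whose length is not a multiple of the window:

import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(30.0), dims="time")

# 30 is not a multiple of the window (7), so the default boundary="exact" raises ValueError.
# "trim" drops the leftover entries; "pad" fills the last window with NaN before reducing.
print(da.coarsen(time=7, boundary="trim").mean().values)  # 4 full-window means
print(da.coarsen(time=7, boundary="pad").mean().values)   # 5 means; the last covers only 2 values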
2 changes: 1 addition & 1 deletion doc/user-guide/groupby.rst
@@ -276,7 +276,7 @@ is identical to
ds.groupby(x=UniqueGrouper())
; and
and

.. code-block:: python
3 changes: 3 additions & 0 deletions doc/whats-new.rst
@@ -26,6 +26,7 @@ New Features

Breaking changes
~~~~~~~~~~~~~~~~
- Support for ``python 3.9`` has been dropped (:pull:`8937`)


Deprecations
@@ -35,6 +36,8 @@ Deprecations
Bug fixes
~~~~~~~~~

- Fix bug with rechunking to a frequency when some periods contain no data (:issue:`9360`).
By `Deepak Cherian <https://github.com/dcherian>`_.
- Fix bug causing `DataTree.from_dict` to be sensitive to insertion order (:issue:`9276`, :pull:`9292`).
By `Tom Nicholas <https://github.com/TomNicholas>`_.
- Fix resampling error with monthly, quarterly, or yearly frequencies with
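The rechunk-to-frequency fix listed under Bug fixes is easiest to reproduce with a gap in the data, which is what creates an empty bin. A hedged sketch: it assumes the frequency-chunking API from recent releases (``xarray.groupers.TimeResampler`` passed to ``.chunk``) and a working dask installation.

import numpy as np
import pandas as pd
import xarray as xr
from xarray.groupers import TimeResampler  # assumed import location

# Daily data for one year with June removed entirely -> one empty monthly bin.
time = pd.date_range("2001-01-01", "2001-12-31", freq="D")
time = time[time.month != 6]
ds = xr.Dataset({"foo": ("time", np.arange(time.size))}, coords={"time": time})

# One dask chunk per calendar month; the empty June bin is the case the fix targets.
chunked = ds.chunk(time=TimeResampler("MS"))
print(chunked.chunksizes["time"])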
6 changes: 3 additions & 3 deletions pyproject.toml
@@ -9,7 +9,6 @@ classifiers = [
"Intended Audience :: Science/Research",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
@@ -20,7 +19,7 @@ dynamic = ["version"]
license = {text = "Apache-2.0"}
name = "xarray"
readme = "README.md"
requires-python = ">=3.9"
requires-python = ">=3.10"

dependencies = [
"numpy>=1.23",
@@ -242,7 +241,7 @@ extend-exclude = [
"doc",
"_typed_ops.pyi",
]
target-version = "py39"
target-version = "py310"

[tool.ruff.lint]
# E402: module level import not at top of file
@@ -255,6 +254,7 @@ ignore = [
"E402",
"E501",
"E731",
"UP007"
]
select = [
"F", # Pyflakes
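``UP007`` is Ruff's pyupgrade rule that rewrites ``typing.Union[...]`` annotations into the PEP 604 ``X | Y`` form; listing it under ``ignore`` keeps Ruff from forcing that rewrite. For reference, the two spellings involved (illustrative functions, not xarray code):

from typing import Optional, Union

# Pre-PEP 604 spelling (the style UP007 flags):
def parse_old(value: Union[int, str], default: Optional[int] = None) -> Optional[int]:
    return int(value) if str(value).isdigit() else default

# PEP 604 spelling, valid at runtime on Python 3.10+:
def parse_new(value: int | str, default: int | None = None) -> int | None:
    return int(value) if str(value).isdigit() else default

print(parse_old("42"), parse_new("x", default=-1))  # 42 -1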
16 changes: 11 additions & 5 deletions xarray/backends/api.py
@@ -1,14 +1,20 @@
from __future__ import annotations

import os
from collections.abc import Hashable, Iterable, Mapping, MutableMapping, Sequence
from collections.abc import (
Callable,
Hashable,
Iterable,
Mapping,
MutableMapping,
Sequence,
)
from functools import partial
from io import BytesIO
from numbers import Number
from typing import (
TYPE_CHECKING,
Any,
Callable,
Final,
Literal,
Union,
@@ -358,7 +364,7 @@ def _dataset_from_backend_dataset(
from_array_kwargs,
**extra_tokens,
):
if not isinstance(chunks, (int, dict)) and chunks not in {None, "auto"}:
if not isinstance(chunks, int | dict) and chunks not in {None, "auto"}:
raise ValueError(
f"chunks must be an int, dict, 'auto', or None. Instead found {chunks}."
)
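The ``(int, dict)`` to ``int | dict`` rewrites in this file (and in the backends below) rely on Python 3.10 behaviour: PEP 604 ``X | Y`` unions are accepted directly by ``isinstance``, so the tuple form is no longer needed. A standalone sketch of the pattern, not xarray code:

def check_chunks(chunks):
    # Python 3.10+: isinstance() accepts a union built with "|",
    # replacing the older tuple form isinstance(chunks, (int, dict)).
    if not isinstance(chunks, int | dict) and chunks not in {None, "auto"}:
        raise ValueError(f"chunks must be an int, dict, 'auto', or None, got {chunks!r}")
    return chunks

print(check_chunks("auto"), check_chunks({"x": 4}), check_chunks(None))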
@@ -385,7 +391,7 @@ def _dataset_from_backend_dataset(
if "source" not in ds.encoding:
path = getattr(filename_or_obj, "path", filename_or_obj)

if isinstance(path, (str, os.PathLike)):
if isinstance(path, str | os.PathLike):
ds.encoding["source"] = _normalize_path(path)

return ds
@@ -1042,7 +1048,7 @@ def open_mfdataset(
raise OSError("no files to open")

if combine == "nested":
if isinstance(concat_dim, (str, DataArray)) or concat_dim is None:
if isinstance(concat_dim, str | DataArray) or concat_dim is None:
concat_dim = [concat_dim] # type: ignore[assignment]

# This creates a flat list which is easier to iterate over, whilst
4 changes: 2 additions & 2 deletions xarray/backends/h5netcdf_.py
@@ -109,7 +109,7 @@ class H5NetCDFStore(WritableCFDataStore):
def __init__(self, manager, group=None, mode=None, lock=HDF5_LOCK, autoclose=False):
import h5netcdf

if isinstance(manager, (h5netcdf.File, h5netcdf.Group)):
if isinstance(manager, h5netcdf.File | h5netcdf.Group):
if group is None:
root, group = find_root_and_group(manager)
else:
@@ -374,7 +374,7 @@ def guess_can_open(
if magic_number is not None:
return magic_number.startswith(b"\211HDF\r\n\032\n")

if isinstance(filename_or_obj, (str, os.PathLike)):
if isinstance(filename_or_obj, str | os.PathLike):
_, ext = os.path.splitext(filename_or_obj)
return ext in {".nc", ".nc4", ".cdf"}
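``guess_can_open`` is how each backend advertises which inputs it can handle, and the same two-step check (magic number for open file objects, filename suffix for paths) repeats across the h5netcdf, netCDF4, scipy and zarr hunks. A simplified, self-contained version of the suffix branch:

import os

def looks_like_netcdf(filename_or_obj) -> bool:
    # Path-based check only; the real backends first try the file's magic number.
    if isinstance(filename_or_obj, str | os.PathLike):
        _, ext = os.path.splitext(filename_or_obj)
        return ext in {".nc", ".nc4", ".cdf"}
    return False

print(looks_like_netcdf("obs/temperature.nc4"), looks_like_netcdf(42))  # True False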

4 changes: 2 additions & 2 deletions xarray/backends/lru_cache.py
@@ -2,8 +2,8 @@

import threading
from collections import OrderedDict
from collections.abc import Iterator, MutableMapping
from typing import Any, Callable, TypeVar
from collections.abc import Callable, Iterator, MutableMapping
from typing import Any, TypeVar

K = TypeVar("K")
V = TypeVar("V")
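The import shuffles here and in ``times.py`` and ``variables.py`` below follow one recipe: since Python 3.9 the ABC aliases in ``typing`` (``Callable``, ``Iterator``, ``MutableMapping``, ...) are deprecated in favour of ``collections.abc``, so with 3.9 gone the imports move wholesale. A small sketch of the post-change style, with illustrative names rather than xarray's:

from collections.abc import Callable, Iterator
from typing import TypeVar

K = TypeVar("K")
V = TypeVar("V")

def apply_each(funcs: Iterator[Callable[[K], V]], value: K) -> list[V]:
    # Callable and Iterator now come from collections.abc; typing keeps only
    # the pieces that have no ABC equivalent (TypeVar, Any, cast, ...).
    return [func(value) for func in funcs]

print(apply_each(iter([str, hex, bin]), 255))  # ['255', '0xff', '0b11111111']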
2 changes: 1 addition & 1 deletion xarray/backends/netCDF4_.py
@@ -615,7 +615,7 @@ def guess_can_open(
# netcdf 3 or HDF5
return magic_number.startswith((b"CDF", b"\211HDF\r\n\032\n"))

if isinstance(filename_or_obj, (str, os.PathLike)):
if isinstance(filename_or_obj, str | os.PathLike):
_, ext = os.path.splitext(filename_or_obj)
return ext in {".nc", ".nc4", ".cdf"}

18 changes: 4 additions & 14 deletions xarray/backends/plugins.py
@@ -3,22 +3,17 @@
import functools
import inspect
import itertools
import sys
import warnings
from collections.abc import Callable
from importlib.metadata import entry_points
from typing import TYPE_CHECKING, Any, Callable
from typing import TYPE_CHECKING, Any

from xarray.backends.common import BACKEND_ENTRYPOINTS, BackendEntrypoint
from xarray.core.utils import module_available

if TYPE_CHECKING:
import os
from importlib.metadata import EntryPoint

if sys.version_info >= (3, 10):
from importlib.metadata import EntryPoints
else:
EntryPoints = list[EntryPoint]
from importlib.metadata import EntryPoint, EntryPoints
from io import BufferedIOBase

from xarray.backends.common import AbstractDataStore
@@ -129,13 +124,8 @@ def list_engines() -> dict[str, BackendEntrypoint]:
-----
This function lives in the backends namespace (``engs=xr.backends.list_engines()``).
If available, more information is available about each backend via ``engs["eng_name"]``.
# New selection mechanism introduced with Python 3.10. See GH6514.
"""
if sys.version_info >= (3, 10):
entrypoints = entry_points(group="xarray.backends")
else:
entrypoints = entry_points().get("xarray.backends", [])
entrypoints = entry_points(group="xarray.backends")
return build_engines(entrypoints)
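The deleted ``sys.version_info`` branch existed because ``importlib.metadata.entry_points()`` only gained its ``group=`` keyword (and the ``EntryPoints`` container) in Python 3.10; with 3.10 as the floor, the keyword form can be called unconditionally. A self-contained sketch against a hypothetical entry-point group:

from importlib.metadata import entry_points

def discover_backends(group: str = "example.backends") -> dict[str, object]:
    # Python 3.10+ selection API; the pre-3.10 spelling was
    # entry_points().get("example.backends", []).
    return {ep.name: ep for ep in entry_points(group=group)}

if __name__ == "__main__":
    # Empty unless some installed package registers this made-up group.
    print(sorted(discover_backends()))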


2 changes: 1 addition & 1 deletion xarray/backends/scipy_.py
@@ -299,7 +299,7 @@ def guess_can_open(
if magic_number is not None:
return magic_number.startswith(b"CDF")

if isinstance(filename_or_obj, (str, os.PathLike)):
if isinstance(filename_or_obj, str | os.PathLike):
_, ext = os.path.splitext(filename_or_obj)
return ext in {".nc", ".nc4", ".cdf", ".gz"}

2 changes: 1 addition & 1 deletion xarray/backends/zarr.py
@@ -1140,7 +1140,7 @@ def guess_can_open(
self,
filename_or_obj: str | os.PathLike[Any] | BufferedIOBase | AbstractDataStore,
) -> bool:
if isinstance(filename_or_obj, (str, os.PathLike)):
if isinstance(filename_or_obj, str | os.PathLike):
_, ext = os.path.splitext(filename_or_obj)
return ext in {".zarr"}

2 changes: 1 addition & 1 deletion xarray/coding/calendar_ops.py
@@ -362,7 +362,7 @@ def interp_calendar(source, target, dim="time"):
"""
from xarray.core.dataarray import DataArray

if isinstance(target, (pd.DatetimeIndex, CFTimeIndex)):
if isinstance(target, pd.DatetimeIndex | CFTimeIndex):
target = DataArray(target, dims=(dim,), name=dim)

if not _contains_datetime_like_objects(
8 changes: 4 additions & 4 deletions xarray/coding/cftime_offsets.py
@@ -220,7 +220,7 @@ def _next_higher_resolution(self) -> Tick:
raise ValueError("Could not convert to integer offset at any resolution")

def __mul__(self, other: int | float) -> Tick:
if not isinstance(other, (int, float)):
if not isinstance(other, int | float):
return NotImplemented
if isinstance(other, float):
n = other * self.n
@@ -805,7 +805,7 @@ def to_cftime_datetime(date_str_or_date, calendar=None):
return date
elif isinstance(date_str_or_date, cftime.datetime):
return date_str_or_date
elif isinstance(date_str_or_date, (datetime, pd.Timestamp)):
elif isinstance(date_str_or_date, datetime | pd.Timestamp):
return cftime.DatetimeProlepticGregorian(*date_str_or_date.timetuple())
else:
raise TypeError(
@@ -1409,7 +1409,7 @@ def date_range_like(source, calendar, use_cftime=None):
from xarray.coding.frequencies import infer_freq
from xarray.core.dataarray import DataArray

if not isinstance(source, (pd.DatetimeIndex, CFTimeIndex)) and (
if not isinstance(source, pd.DatetimeIndex | CFTimeIndex) and (
isinstance(source, DataArray)
and (source.ndim != 1)
or not _contains_datetime_like_objects(source.variable)
@@ -1458,7 +1458,7 @@ def date_range_like(source, calendar, use_cftime=None):

# For the cases where the source ends on the end of the month, we expect the same in the new calendar.
if source_end.day == source_end.daysinmonth and isinstance(
freq_as_offset, (YearEnd, QuarterEnd, MonthEnd, Day)
freq_as_offset, YearEnd | QuarterEnd | MonthEnd | Day
):
end = end.replace(day=end.daysinmonth)
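``date_range_like`` builds a range in a target calendar that mirrors the frequency and span of the source index; the hunk above handles the corner case where the source ends on the last day of a month. A hedged daily example, assuming the public ``xr.date_range`` and ``xr.date_range_like`` helpers and an installed ``cftime``:

import xarray as xr

# Five daily timestamps in the default proleptic-Gregorian calendar ...
source = xr.date_range("2000-02-26", periods=5, freq="D")
# ... and the matching span in a 360-day model calendar (returned as a CFTimeIndex).
print(xr.date_range_like(source, calendar="360_day"))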

2 changes: 1 addition & 1 deletion xarray/coding/cftimeindex.py
@@ -566,7 +566,7 @@ def shift( # type: ignore[override] # freq is typed Any, we are more precise
if isinstance(freq, timedelta):
return self + periods * freq

if isinstance(freq, (str, BaseCFTimeOffset)):
if isinstance(freq, str | BaseCFTimeOffset):
from xarray.coding.cftime_offsets import to_offset

return self + periods * to_offset(freq)
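As the branches above show, ``CFTimeIndex.shift`` accepts either a plain ``timedelta`` or a frequency string/offset that goes through ``to_offset``. A short example on a no-leap calendar (requires ``cftime``):

from datetime import timedelta

import xarray as xr

index = xr.cftime_range("2000-02-26", periods=3, freq="D", calendar="noleap")
print(index.shift(2, "D"))                # frequency string, resolved via to_offset
print(index.shift(1, timedelta(days=2)))  # a timedelta is multiplied by the periods directly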
2 changes: 1 addition & 1 deletion xarray/coding/frequencies.py
@@ -82,7 +82,7 @@ def infer_freq(index):
from xarray.core.dataarray import DataArray
from xarray.core.variable import Variable

if isinstance(index, (DataArray, pd.Series)):
if isinstance(index, DataArray | pd.Series):
if index.ndim != 1:
raise ValueError("'index' must be 1D")
elif not _contains_datetime_like_objects(Variable("dim", index)):
4 changes: 2 additions & 2 deletions xarray/coding/times.py
@@ -2,10 +2,10 @@

import re
import warnings
from collections.abc import Hashable
from collections.abc import Callable, Hashable
from datetime import datetime, timedelta
from functools import partial
from typing import Callable, Literal, Union, cast
from typing import Literal, Union, cast

import numpy as np
import pandas as pd
4 changes: 2 additions & 2 deletions xarray/coding/variables.py
@@ -3,9 +3,9 @@
from __future__ import annotations

import warnings
from collections.abc import Hashable, MutableMapping
from collections.abc import Callable, Hashable, MutableMapping
from functools import partial
from typing import TYPE_CHECKING, Any, Callable, Union
from typing import TYPE_CHECKING, Any, Union

import numpy as np
import pandas as pd
(Diffs for the remaining changed files are not shown.)