Commit 4c83b37

Merge branch 'main' into fix-doc-series-compare
mroeschke authored Sep 30, 2024
2 parents 038bb12 + 00855f8 commit 4c83b37
Showing 15 changed files with 584 additions and 221 deletions.
2 changes: 2 additions & 0 deletions .github/actions/setup-conda/action.yml
@@ -9,6 +9,8 @@ runs:
- name: Install ${{ inputs.environment-file }}
uses: mamba-org/setup-micromamba@v1
with:
# Pinning to avoid 2.0 failures
micromamba-version: '1.5.10-0'
environment-file: ${{ inputs.environment-file }}
environment-name: test
condarc-file: ci/.condarc
13 changes: 0 additions & 13 deletions ci/code_checks.sh
@@ -97,25 +97,18 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.Series.dt.unit GL08" \
-i "pandas.Series.pad PR01,SA01" \
-i "pandas.Series.sparse.from_coo PR07,SA01" \
-i "pandas.Series.sparse.npoints SA01" \
-i "pandas.Timedelta.max PR02" \
-i "pandas.Timedelta.min PR02" \
-i "pandas.Timedelta.resolution PR02" \
-i "pandas.TimedeltaIndex.to_pytimedelta RT03,SA01" \
-i "pandas.Timestamp.max PR02" \
-i "pandas.Timestamp.min PR02" \
-i "pandas.Timestamp.nanosecond GL08" \
-i "pandas.Timestamp.resolution PR02" \
-i "pandas.Timestamp.tzinfo GL08" \
-i "pandas.Timestamp.year GL08" \
-i "pandas.api.types.is_dict_like PR07,SA01" \
-i "pandas.api.types.is_file_like PR07,SA01" \
-i "pandas.api.types.is_float PR01,SA01" \
-i "pandas.api.types.is_hashable PR01,RT03,SA01" \
-i "pandas.api.types.is_integer PR01,SA01" \
-i "pandas.api.types.is_iterator PR07,SA01" \
-i "pandas.api.types.is_named_tuple PR07,SA01" \
-i "pandas.api.types.is_re PR07,SA01" \
-i "pandas.api.types.is_re_compilable PR07,SA01" \
-i "pandas.api.types.pandas_dtype PR07,RT03,SA01" \
-i "pandas.arrays.ArrowExtensionArray PR07,SA01" \
@@ -128,8 +121,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.arrays.SparseArray PR07,SA01" \
-i "pandas.arrays.TimedeltaArray PR07,SA01" \
-i "pandas.core.groupby.DataFrameGroupBy.__iter__ RT03,SA01" \
-i "pandas.core.groupby.DataFrameGroupBy.agg RT03" \
-i "pandas.core.groupby.DataFrameGroupBy.aggregate RT03" \
-i "pandas.core.groupby.DataFrameGroupBy.boxplot PR07,RT03,SA01" \
-i "pandas.core.groupby.DataFrameGroupBy.get_group RT03,SA01" \
-i "pandas.core.groupby.DataFrameGroupBy.groups SA01" \
@@ -140,8 +131,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.core.groupby.DataFrameGroupBy.plot PR02" \
-i "pandas.core.groupby.DataFrameGroupBy.sem SA01" \
-i "pandas.core.groupby.SeriesGroupBy.__iter__ RT03,SA01" \
-i "pandas.core.groupby.SeriesGroupBy.agg RT03" \
-i "pandas.core.groupby.SeriesGroupBy.aggregate RT03" \
-i "pandas.core.groupby.SeriesGroupBy.get_group RT03,SA01" \
-i "pandas.core.groupby.SeriesGroupBy.groups SA01" \
-i "pandas.core.groupby.SeriesGroupBy.indices SA01" \
@@ -170,7 +159,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.errors.CSSWarning SA01" \
-i "pandas.errors.CategoricalConversionWarning SA01" \
-i "pandas.errors.ChainedAssignmentError SA01" \
-i "pandas.errors.ClosedFileError SA01" \
-i "pandas.errors.DataError SA01" \
-i "pandas.errors.DuplicateLabelError SA01" \
-i "pandas.errors.IntCastingNaNError SA01" \
@@ -180,7 +168,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.errors.NumExprClobberingError SA01" \
-i "pandas.errors.NumbaUtilError SA01" \
-i "pandas.errors.OptionError SA01" \
-i "pandas.errors.OutOfBoundsDatetime SA01" \
-i "pandas.errors.OutOfBoundsTimedelta SA01" \
-i "pandas.errors.PerformanceWarning SA01" \
-i "pandas.errors.PossibleDataLossError SA01" \
@@ -271,7 +271,7 @@ Add the parameters' full description and name, provided by the parameters metada
Compared to the previous example, there is no common column name.
However, the ``parameter`` column in the ``air_quality`` table and the
``id`` column in the ``air_quality_parameters_name`` both provide the
``id`` column in the ``air_quality_parameters`` table both provide the
measured variable in a common format. The ``left_on`` and ``right_on``
arguments are used here (instead of just ``on``) to make the link
between the two tables.
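For readers skimming this diff, a minimal sketch of the ``left_on``/``right_on`` pattern the corrected sentence describes, using small hypothetical frames (``measurements`` and ``parameter_names``) rather than the tutorial's air-quality data:

>>> import pandas as pd
>>> measurements = pd.DataFrame(  # hypothetical stand-in for air_quality
...     {"station": ["BETR801", "FR04014"], "parameter": ["no2", "pm25"]}
... )
>>> parameter_names = pd.DataFrame(  # hypothetical stand-in for the parameters table
...     {"id": ["no2", "pm25"], "name": ["nitrogen dioxide", "particulate matter 2.5"]}
... )
>>> merged = measurements.merge(
...     parameter_names, how="left", left_on="parameter", right_on="id"
... )
>>> sorted(merged.columns)
['id', 'name', 'parameter', 'station']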
2 changes: 1 addition & 1 deletion doc/source/user_guide/10min.rst
@@ -177,7 +177,7 @@ See the indexing documentation :ref:`Indexing and Selecting Data <indexing>` and
Getitem (``[]``)
~~~~~~~~~~~~~~~~

For a :class:`DataFrame`, passing a single label selects a columns and
For a :class:`DataFrame`, passing a single label selects a column and
yields a :class:`Series` equivalent to ``df.A``:

.. ipython:: python
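A minimal sketch of the corrected behaviour — a single label selects one column and yields a :class:`Series` — assuming a small throwaway frame:

>>> import pandas as pd
>>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
>>> s = df["A"]  # single label -> one column
>>> type(s)
<class 'pandas.core.series.Series'>
>>> s.equals(df.A)  # equivalent to attribute access
True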
2 changes: 2 additions & 0 deletions pandas/_libs/tslibs/nattype.pyx
@@ -500,6 +500,8 @@ class NaTType(_NaT):
--------
to_timedelta : Convert argument to timedelta.
Timedelta : Represents a duration, the difference between two dates or times.
Timedelta.seconds : Returns the seconds component of the timedelta.
Timedelta.microseconds : Returns the microseconds component of the timedelta.
Examples
--------
9 changes: 9 additions & 0 deletions pandas/_libs/tslibs/np_datetime.pyx
@@ -176,6 +176,15 @@ class OutOfBoundsDatetime(ValueError):
"""
Raised when the datetime is outside the range that can be represented.
This error occurs when attempting to convert or parse a datetime value
that exceeds the bounds supported by pandas' internal datetime
representation.
See Also
--------
to_datetime : Convert argument to datetime.
Timestamp : Pandas replacement for python ``datetime.datetime`` object.
Examples
--------
>>> pd.to_datetime("08335394550")
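A hedged sketch of the behaviour the new ``OutOfBoundsDatetime`` docstring describes, reusing the same input as the docstring example above:

>>> import pandas as pd
>>> try:
...     pd.to_datetime("08335394550")  # overflows the supported datetime range
... except pd.errors.OutOfBoundsDatetime:
...     print("raised OutOfBoundsDatetime")
raised OutOfBoundsDatetime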
3 changes: 3 additions & 0 deletions pandas/_libs/tslibs/timedeltas.pyx
@@ -1196,6 +1196,8 @@ cdef class _Timedelta(timedelta):
--------
to_timedelta : Convert argument to timedelta.
Timedelta : Represents a duration, the difference between two dates or times.
Timedelta.seconds : Returns the seconds component of the timedelta.
Timedelta.microseconds : Returns the microseconds component of the timedelta.

Examples
--------
@@ -1493,6 +1495,7 @@ cdef class _Timedelta(timedelta):
See Also
--------
Timedelta.asm8 : Return a numpy timedelta64 array scalar view.
numpy.ndarray.view : Returns a view of an array with the same data.
Timedelta.to_numpy : Converts the Timedelta to a NumPy timedelta64.
Timedelta.total_seconds : Returns the total duration of the Timedelta
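The new See Also entries point at the timedelta components; a quick sketch of how ``seconds`` and ``microseconds`` decompose a duration (note that ``seconds`` is the component within a day, not the total):

>>> import pandas as pd
>>> td = pd.Timedelta("1 days 02:30:45.000123")
>>> td.seconds  # seconds component (0-86399), not the total
9045
>>> td.microseconds  # microseconds component
123
>>> td.total_seconds()  # full duration in seconds
95445.000123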
12 changes: 12 additions & 0 deletions pandas/core/arrays/sparse/array.py
@@ -708,6 +708,18 @@ def npoints(self) -> int:
"""
The number of non- ``fill_value`` points.
This property returns the number of elements in the sparse series that are
not equal to the ``fill_value``. Sparse data structures store only the
non-``fill_value`` elements, reducing memory usage when the majority of
values are the same.
See Also
--------
Series.sparse.to_dense : Convert a Series from sparse values to dense.
Series.sparse.fill_value : Elements in ``data`` that are ``fill_value`` are
not stored.
Series.sparse.density : The percent of non- ``fill_value`` points, as decimal.
Examples
--------
>>> from pandas.arrays import SparseArray
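A short sketch of the counting behaviour described in the new ``npoints`` text (mirroring the existing docstring example):

>>> from pandas.arrays import SparseArray
>>> arr = SparseArray([0, 0, 1, 0, 2], fill_value=0)
>>> arr.npoints  # only the two non-fill_value elements are stored
2
>>> arr.density  # fraction of non-fill_value points
0.4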
21 changes: 21 additions & 0 deletions pandas/core/arrays/timedeltas.py
@@ -790,6 +790,19 @@ def to_pytimedelta(self) -> npt.NDArray[np.object_]:
Returns
-------
numpy.ndarray
A NumPy ``timedelta64`` object representing the same duration as the
original pandas ``Timedelta`` object. The precision of the resulting
object is in nanoseconds, which is the default
time resolution used by pandas for ``Timedelta`` objects, ensuring
high precision for time-based calculations.
See Also
--------
to_timedelta : Convert argument to timedelta format.
Timedelta : Represents a duration between two dates or times.
DatetimeIndex: Index of datetime64 data.
Timedelta.components : Return a components namedtuple-like
of a single timedelta.
Examples
--------
@@ -800,6 +813,14 @@ def to_pytimedelta(self) -> npt.NDArray[np.object_]:
>>> tdelta_idx.to_pytimedelta()
array([datetime.timedelta(days=1), datetime.timedelta(days=2),
datetime.timedelta(days=3)], dtype=object)
>>> tidx = pd.TimedeltaIndex(data=["1 days 02:30:45", "3 days 04:15:10"])
>>> tidx
TimedeltaIndex(['1 days 02:30:45', '3 days 04:15:10'],
dtype='timedelta64[ns]', freq=None)
>>> tidx.to_pytimedelta()
array([datetime.timedelta(days=1, seconds=9045),
datetime.timedelta(days=3, seconds=15310)], dtype=object)
"""
return ints_to_pytimedelta(self._ndarray)

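A small supplementary check, hedged, showing that ``to_pytimedelta`` hands back standard-library ``datetime.timedelta`` objects inside a NumPy object array:

>>> import datetime
>>> import pandas as pd
>>> objs = pd.to_timedelta(["1 days", "2 days"]).to_pytimedelta()
>>> isinstance(objs[0], datetime.timedelta)
True
>>> objs.dtype
dtype('O')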
63 changes: 59 additions & 4 deletions pandas/core/dtypes/inference.py
@@ -113,13 +113,24 @@ def is_file_like(obj: object) -> bool:
Parameters
----------
obj : The object to check
obj : object
The object to check for file-like properties.
This can be any Python object, and the function will
check if it has attributes typically associated with
file-like objects (e.g., `read`, `write`, `__iter__`).
Returns
-------
bool
Whether `obj` has file-like properties.
See Also
--------
api.types.is_dict_like : Check if the object is dict-like.
api.types.is_hashable : Return True if hash(obj) will succeed, False otherwise.
api.types.is_named_tuple : Check if the object is a named tuple.
api.types.is_iterator : Check if the object is an iterator.
Examples
--------
>>> import io
@@ -142,13 +153,24 @@ def is_re(obj: object) -> TypeGuard[Pattern]:
Parameters
----------
obj : The object to check
obj : object
The object to check for being a regex pattern. Typically,
this would be an object that you expect to be a compiled
pattern from the `re` module.
Returns
-------
bool
Whether `obj` is a regex pattern.
See Also
--------
api.types.is_float : Return True if given object is float.
api.types.is_iterator : Check if the object is an iterator.
api.types.is_integer : Return True if given object is integer.
api.types.is_re_compilable : Check if the object can be compiled
into a regex pattern instance.
Examples
--------
>>> from pandas.api.types import is_re
@@ -275,13 +297,22 @@ def is_dict_like(obj: object) -> bool:
Parameters
----------
obj : The object to check
obj : object
The object to check. This can be any Python object,
and the function will determine whether it
behaves like a dictionary.
Returns
-------
bool
Whether `obj` has dict-like properties.
See Also
--------
api.types.is_list_like : Check if the object is list-like.
api.types.is_file_like : Check if the object is a file-like.
api.types.is_named_tuple : Check if the object is a named tuple.
Examples
--------
>>> from pandas.api.types import is_dict_like
@@ -308,13 +339,22 @@ def is_named_tuple(obj: object) -> bool:
Parameters
----------
obj : The object to check
obj : object
The object that will be checked to determine
whether it is a named tuple.
Returns
-------
bool
Whether `obj` is a named tuple.
See Also
--------
api.types.is_dict_like: Check if the object is dict-like.
api.types.is_hashable: Return True if hash(obj)
will succeed, False otherwise.
api.types.is_categorical_dtype : Check if the dtype is categorical.
Examples
--------
>>> from collections import namedtuple
@@ -340,9 +380,24 @@ def is_hashable(obj: object) -> TypeGuard[Hashable]:
Distinguish between these and other types by trying the call to hash() and
seeing if they raise TypeError.
Parameters
----------
obj : object
The object to check for hashability. Any Python object can be passed here.
Returns
-------
bool
True if object can be hashed (i.e., does not raise TypeError when
passed to hash()), and False otherwise (e.g., if object is mutable
like a list or dictionary).
See Also
--------
api.types.is_float : Return True if given object is float.
api.types.is_iterator : Check if the object is an iterator.
api.types.is_list_like : Check if the object is list-like.
api.types.is_dict_like : Check if the object is dict-like.
Examples
--------
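A combined sketch of the ``pandas.api.types`` helpers documented above; the inputs are arbitrary illustrations, not taken from the docstrings:

>>> import io
>>> import re
>>> from collections import namedtuple
>>> from pandas.api.types import (
...     is_dict_like,
...     is_file_like,
...     is_hashable,
...     is_named_tuple,
...     is_re,
... )
>>> is_dict_like({"a": 1})
True
>>> is_file_like(io.StringIO("data"))
True
>>> is_hashable([1, 2, 3])  # lists are mutable, hence unhashable
False
>>> Point = namedtuple("Point", ["x", "y"])
>>> is_named_tuple(Point(1, 2))
True
>>> is_re(re.compile(r"\d+"))
True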
