Add doctest #4618

Merged Mar 29, 2022 (36 commits)
Changes from 32 commits

Commits
fbdfa1e
Change to doctest for
yuqli Oct 14, 2020
a8c10e9
Add commit message to changelog.md
yuqli Oct 14, 2020
788442b
Fix style error
yuqli Oct 16, 2020
b980af6
style fix
yuqli Oct 16, 2020
c6a109b
Merge branch 'branch-0.17' into branch-0.16
dantegd Dec 2, 2020
fa363cb
Apply suggestions from code review
mdemoret-nv Feb 18, 2021
0a00cfb
Merge branch 'branch-0.19' into branch-0.16
mdemoret-nv Feb 18, 2021
cdf54bc
Minor cleanup of examples and fixing of warnings.
mdemoret-nv Feb 19, 2021
77145fa
Fixing copyright
mdemoret-nv Feb 19, 2021
f8abbbe
Updating copyright year
mdemoret-nv Feb 19, 2021
a653ae3
Merge branch 'branch-22.04' into HEAD
lowener Feb 24, 2022
9f4b9b3
Fix existing doctest
lowener Mar 1, 2022
82c93b1
Add pytest file for doctest
lowener Mar 1, 2022
717b388
Add test_docstring.py
lowener Mar 2, 2022
72190fb
Add linear models doctest
lowener Mar 7, 2022
af39242
Add docstring for modules common, linear_model, metrics, preprocessin…
lowener Mar 8, 2022
3a08019
Merge branch 'branch-22.04' into 22.04-doctest
lowener Mar 8, 2022
d115033
Add Nearest Neighbors to doctest
lowener Mar 9, 2022
c0aa02f
Add doctest to solvers module
lowener Mar 9, 2022
c19cffd
Add doctest format to SVM
lowener Mar 10, 2022
278c418
Add doctest to decomposion
lowener Mar 10, 2022
ba468ec
Fix copyright
lowener Mar 10, 2022
bd5621e
Fix style
lowener Mar 10, 2022
d041354
Add doctest for MNMG models
lowener Mar 11, 2022
04a7a28
fix style
lowener Mar 11, 2022
999ac03
Fix copyright
lowener Mar 11, 2022
2101e6c
Fix Dask UMAP doctest
lowener Mar 11, 2022
7c9177b
Add back code-block and fix CI issue
lowener Mar 14, 2022
42d9c6c
Add multiclass doctest
lowener Mar 15, 2022
facb0ae
Fix copyright
lowener Mar 15, 2022
d7191bd
Add client.close() for dask doctest
lowener Mar 16, 2022
a131da4
Merge branch 'branch-22.04' into 22.04-doctest
lowener Mar 16, 2022
4391b6c
Simplify KMeans example doc and dev guide
lowener Mar 24, 2022
e0e400f
Merge branch 'branch-22.04' into 22.04-doctest
lowener Mar 28, 2022
d0fc8fa
Fix pydoc svc
lowener Mar 29, 2022
ffbf3cb
Fix copyright
lowener Mar 29, 2022
59 changes: 59 additions & 0 deletions python/cuml/__init__.py
@@ -111,3 +111,62 @@ def __getattr__(name):
return _global_settings_data.settings

raise AttributeError(f"module {__name__} has no attribute {name}")


__all__ = [
# Modules
"common",
"metrics",
"multiclass",
"naive_bayes",
"preprocessing",
# Classes
"AgglomerativeClustering",
"ARIMA",
"AutoARIMA",
"Base",
"CD",
"cuda",
"DBSCAN",
"ElasticNet",
"ExponentialSmoothing",
"ForestInference",
"GaussianRandomProjection",
"Handle",
"HDBSCAN",
"IncrementalPCA",
"KernelDensity",
"KernelExplainer",
"KernelRidge",
"KMeans",
"KNeighborsClassifier",
"KNeighborsRegressor",
"Lasso",
"LinearRegression",
"LinearSVC",
"LinearSVR",
"LogisticRegression",
"MBSGDClassifier",
"MBSGDRegressor",
"NearestNeighbors",
"PCA",
"PermutationExplainer",
"QN",
"RandomForestClassifier",
"RandomForestRegressor",
"Ridge",
"SGD",
"SparseRandomProjection",
"SVC",
"SVR",
"TruncatedSVD",
"TSNE",
"UMAP",
# Functions
"johnson_lindenstrauss_min_dim",
"make_arima",
"make_blobs",
"make_classification",
"make_regression",
"stationarity",
]
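
The commit list above mentions a new test_docstring.py and a pytest file for doctests. A minimal sketch of how docstring examples could be discovered from the public names exported through __all__ and executed with the standard doctest module (the PR's actual harness runs under pytest and may differ) follows:

# Hedged sketch, not the PR's test file: find doctests on every public name
# exported by cuml.__all__ and execute them, tolerating ellipsis in output.
import doctest
import cuml

def run_public_doctests():
    finder = doctest.DocTestFinder()
    runner = doctest.DocTestRunner(optionflags=doctest.ELLIPSIS)
    for name in cuml.__all__:
        obj = getattr(cuml, name, None)
        if obj is None or not getattr(obj, "__doc__", None):
            continue  # skip names without docstrings
        for test in finder.find(obj, name=f"cuml.{name}"):
            runner.run(test)
    return runner.summarize()  # TestResults(failed, attempted)

if __name__ == "__main__":
    failed, attempted = run_public_doctests()
    print(f"{attempted} examples run, {failed} failures")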
21 changes: 17 additions & 4 deletions python/cuml/_thirdparty/sklearn/preprocessing/_data.py
@@ -258,7 +258,9 @@ class MinMaxScaler(TransformerMixin,
Examples
--------
>>> from cuml.preprocessing import MinMaxScaler
>>> import cupy as cp
>>> data = [[-1, 2], [-0.5, 6], [0, 10], [1, 18]]
>>> data = cp.array(data)
>>> scaler = MinMaxScaler()
>>> print(scaler.fit(data))
MinMaxScaler()
@@ -269,7 +271,7 @@ class MinMaxScaler(TransformerMixin,
[0.25 0.25]
[0.5 0.5 ]
[1. 1. ]]
>>> print(scaler.transform([[2, 2]]))
>>> print(scaler.transform(cp.array([[2, 2]])))
[[1.5 0. ]]

See also
@@ -577,7 +579,9 @@ class StandardScaler(TransformerMixin,
Examples
--------
>>> from cuml.preprocessing import StandardScaler
>>> import cupy as cp
>>> data = [[0, 0], [0, 0], [1, 1], [1, 1]]
>>> data = cp.array(data)
>>> scaler = StandardScaler()
>>> print(scaler.fit(data))
StandardScaler()
@@ -588,7 +592,7 @@ class StandardScaler(TransformerMixin,
[-1. -1.]
[ 1. 1.]
[ 1. 1.]]
>>> print(scaler.transform([[2, 2]]))
>>> print(scaler.transform(cp.array([[2, 2]])))
[[3. 3.]]

See also
@@ -649,7 +653,7 @@ def fit(self, X, y=None) -> "StandardScaler":
The data used to compute the mean and standard deviation
used for later scaling along the features axis.

y
y : None
Ignored
"""

@@ -893,9 +897,11 @@ class MaxAbsScaler(TransformerMixin,
Examples
--------
>>> from cuml.preprocessing import MaxAbsScaler
>>> import cupy as cp
>>> X = [[ 1., -1., 2.],
... [ 2., 0., 0.],
... [ 0., 1., -1.]]
>>> X = cp.array(X)
>>> transformer = MaxAbsScaler().fit(X)
>>> transformer
MaxAbsScaler()
@@ -1151,9 +1157,11 @@ class RobustScaler(TransformerMixin,
Examples
--------
>>> from cuml.preprocessing import RobustScaler
>>> import cupy as cp
>>> X = [[ 1., -2., 2.],
... [ -2., 1., 3.],
... [ 4., 1., -2.]]
>>> X = cp.array(X)
>>> transformer = RobustScaler().fit(X)
>>> transformer
RobustScaler()
@@ -1786,9 +1794,11 @@ class Normalizer(TransformerMixin,
Examples
--------
>>> from cuml.preprocessing import Normalizer
>>> import cupy as cp
>>> X = [[4, 1, 2, 2],
... [1, 3, 9, 3],
... [5, 7, 5, 1]]
>>> X = cp.array(X)
>>> transformer = Normalizer().fit(X) # fit does nothing.
>>> transformer
Normalizer()
@@ -1913,9 +1923,11 @@ class Binarizer(TransformerMixin,
Examples
--------
>>> from cuml.preprocessing import Binarizer
>>> import cupy as cp
>>> X = [[ 1., -1., 2.],
... [ 2., 0., 0.],
... [ 0., 1., -1.]]
>>> X = cp.array(X)
>>> transformer = Binarizer().fit(X) # fit does nothing.
>>> transformer
Binarizer()
@@ -1996,7 +2008,8 @@ def add_dummy_feature(X, value=1.0):
--------

>>> from cuml.preprocessing import add_dummy_feature
>>> add_dummy_feature([[0, 1], [1, 0]])
>>> import cupy as cp
>>> add_dummy_feature(cp.array([[0, 1], [1, 0]]))
array([[1., 0., 1.],
[1., 1., 0.]])
"""
@@ -103,19 +103,22 @@ class KBinsDiscretizer(TransformerMixin,

Examples
--------
>>> from cuml.preprocessing import KBinsDiscretizer
>>> import numpy as np
>>> X = [[-2, 1, -4, -1],
... [-1, 2, -3, -0.5],
... [ 0, 3, -2, 0.5],
... [ 1, 4, -1, 2]]
>>> X = np.array(X)
>>> est = KBinsDiscretizer(n_bins=3, encode='ordinal', strategy='uniform')
>>> est.fit(X)
KBinsDiscretizer(...)
>>> Xt = est.transform(X)
>>> Xt # doctest: +SKIP
array([[ 0., 0., 0., 0.],
[ 1., 1., 1., 0.],
[ 2., 2., 2., 1.],
[ 2., 2., 2., 2.]])
>>> Xt
array([[0, 0, 0, 0],
[1, 1, 1, 0],
[2, 2, 2, 1],
[2, 2, 2, 2]], dtype=int32)

Sometimes it may be useful to convert the data back into the original
feature space. The ``inverse_transform`` function converts the binned
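
The KBinsDiscretizer example drops the `# doctest: +SKIP` directive and replaces scikit-learn's float output with the values cuML actually produces: ordinal bin codes returned as int32. A minimal sketch of the same call (expected output taken from the doctest above):

# Ordinal encoding returns integer bin indices (int32) in cuML.
import numpy as np
from cuml.preprocessing import KBinsDiscretizer

X = np.array([[-2, 1, -4, -1],
              [-1, 2, -3, -0.5],
              [ 0, 3, -2,  0.5],
              [ 1, 4, -1,  2]])
est = KBinsDiscretizer(n_bins=3, encode='ordinal', strategy='uniform').fit(X)
Xt = est.transform(X)
print(Xt.dtype)   # int32
print(Xt)         # [[0 0 0 0] [1 1 1 0] [2 2 2 1] [2 2 2 2]]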
43 changes: 20 additions & 23 deletions python/cuml/cluster/dbscan.pyx
@@ -111,29 +111,26 @@ class DBSCAN(Base,

.. code-block:: python

# Both import methods supported
from cuml import DBSCAN
from cuml.cluster import DBSCAN

import cudf
import numpy as np

gdf_float = cudf.DataFrame()
gdf_float['0'] = np.asarray([1.0,2.0,5.0], dtype = np.float32)
gdf_float['1'] = np.asarray([4.0,2.0,1.0], dtype = np.float32)
gdf_float['2'] = np.asarray([4.0,2.0,1.0], dtype = np.float32)

dbscan_float = DBSCAN(eps = 1.0, min_samples = 1)
dbscan_float.fit(gdf_float)
print(dbscan_float.labels_)

Output:

.. code-block:: python

0 0
1 1
2 2
>>> # Both import methods supported
>>> from cuml import DBSCAN
>>> from cuml.cluster import DBSCAN
>>>
>>> import cudf
>>> import numpy as np
>>>
>>> gdf_float = cudf.DataFrame()
>>> gdf_float['0'] = np.asarray([1.0,2.0,5.0], dtype = np.float32)
>>> gdf_float['1'] = np.asarray([4.0,2.0,1.0], dtype = np.float32)
>>> gdf_float['2'] = np.asarray([4.0,2.0,1.0], dtype = np.float32)
>>>
>>> dbscan_float = DBSCAN(eps = 1.0, min_samples = 1)
>>> dbscan_float.fit(gdf_float)
DBSCAN()
>>> dbscan_float.labels_
0 0
1 1
2 2
dtype: int32

Parameters
-----------
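
In the converted DBSCAN example, `fit` now echoes `DBSCAN()` because doctest checks the repr of the returned estimator, and `labels_` prints as a cudf Series with its dtype line. A hedged sketch, not part of the PR, of pulling those labels back to host memory for a plain assertion:

# Sketch: run the same example and compare the GPU labels on the host.
import cudf
import numpy as np
from cuml.cluster import DBSCAN

gdf_float = cudf.DataFrame({
    '0': np.asarray([1.0, 2.0, 5.0], dtype=np.float32),
    '1': np.asarray([4.0, 2.0, 1.0], dtype=np.float32),
    '2': np.asarray([4.0, 2.0, 1.0], dtype=np.float32),
})
labels = DBSCAN(eps=1.0, min_samples=1).fit(gdf_float).labels_
assert np.array_equal(labels.to_numpy(), np.array([0, 1, 2]))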
10 changes: 4 additions & 6 deletions python/cuml/cluster/hdbscan.pyx
@@ -293,7 +293,6 @@ class HDBSCAN(Base, ClusterMixin, CMajorInputTagMixin):

alpha : float, optional (default=1.0)
A distance scaling parameter as used in robust single linkage.
See [2]_ for more information.

verbose : int or boolean, default=False
Sets logging level. It must be one of `cuml.common.logger.level_*`.
@@ -311,10 +310,9 @@ class HDBSCAN(Base, ClusterMixin, CMajorInputTagMixin):

cluster_selection_epsilon : float, optional (default=0.0)
A distance threshold. Clusters below this value will be merged.
See [3]_ for more information. Note that this should not be used
if we want to predict the cluster labels for new points in future
(e.g. using approximate_predict), as the approximate_predict function
is not aware of this argument.
Note that this should not be used if we want to predict the cluster
labels for new points in future (e.g. using approximate_predict), as
the approximate_predict function is not aware of this argument.

max_cluster_size : int, optional (default=0)
A limit to the size of clusters returned by the eom algorithm.
@@ -325,7 +323,7 @@ class HDBSCAN(Base, ClusterMixin, CMajorInputTagMixin):
for new points in future (e.g. using approximate_predict), as
the approximate_predict function is not aware of this argument.

metric : string or callable, optional (default='minkowski')
metric : string or callable, optional (default='euclidean')
The metric to use when calculating distance between instances in a
feature array. If metric is a string or callable, it must be one of
the options allowed by metrics.pairwise.pairwise_distances for its
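
The HDBSCAN docstring edits remove dangling `[2]_`/`[3]_` references and correct the documented default metric to 'euclidean'. A hedged usage sketch of the parameters described above (random data, so the resulting labels are not asserted; the values shown are the documented defaults):

# Illustrative only: exercises the documented defaults on synthetic data.
import cupy as cp
from cuml.cluster import HDBSCAN

X = cp.random.standard_normal((200, 2)).astype(cp.float32)
clusterer = HDBSCAN(min_samples=5,
                    cluster_selection_epsilon=0.0,  # no distance-threshold merging
                    max_cluster_size=0,             # no size cap for the eom selector
                    metric='euclidean')             # documented default
clusterer.fit(X)
print(clusterer.labels_[:10])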