Commit 0c56183
[docs] fix build (#34265)
* [docs] fix build

Signed-off-by: Max Pumperla <[email protected]>

* fix doctests

Signed-off-by: Max Pumperla <[email protected]>

* last test

Signed-off-by: Max Pumperla <[email protected]>

* lint

Signed-off-by: Max Pumperla <[email protected]>

* Update doc/source/rllib/package_ref/rl_modules.rst

Co-authored-by: kourosh hakhamaneshi <[email protected]>
Signed-off-by: Max Pumperla <[email protected]>

* fixes

* revert diff

* whitespace

---------

Signed-off-by: Max Pumperla <[email protected]>
Signed-off-by: Philipp Moritz <[email protected]>
Co-authored-by: kourosh hakhamaneshi <[email protected]>
Co-authored-by: Philipp Moritz <[email protected]>
3 people authored Apr 12, 2023
1 parent d397e3f commit 0c56183
Showing 5 changed files with 18 additions and 13 deletions.
7 changes: 4 additions & 3 deletions doc/source/data/getting-started.rst
@@ -52,7 +52,7 @@ transform datasets. Ray executes transformations in parallel for performance at
 
     import pandas as pd
 
-    # Find rows with spepal length < 5.5 and petal length > 3.5.
+    # Find rows with sepal length < 5.5 and petal length > 3.5.
     def transform_batch(df: pd.DataFrame) -> pd.DataFrame:
         return df[(df["sepal length (cm)"] < 5.5) & (df["petal length (cm)"] > 3.5)]
 
@@ -62,8 +62,8 @@ transform datasets. Ray executes transformations in parallel for performance at
 .. testoutput::
 
     MapBatches(transform_batch)
-    +- Dataset(
-       num_blocks=...,
+    +- Datastream(
+       num_blocks=1,
        num_rows=150,
        schema={
           sepal length (cm): double,
@@ -74,6 +74,7 @@ transform datasets. Ray executes transformations in parallel for performance at
        }
     )
 
+
 To learn more about transforming datasets, read
 :ref:`Transforming datasets <transforming_datasets>`.
 
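For context, the testoutput above is produced by a doctest along these lines, reconstructed as a sketch from the surrounding getting-started guide (the iris CSV path is an assumption, taken from the read_csv examples elsewhere in this commit):

    import pandas as pd
    import ray

    # Assumed setup: load the iris dataset used throughout these docs.
    ds = ray.data.read_csv("s3://anonymous@air-example-data/iris.csv")

    # Find rows with sepal length < 5.5 and petal length > 3.5.
    def transform_batch(df: pd.DataFrame) -> pd.DataFrame:
        return df[(df["sepal length (cm)"] < 5.5) & (df["petal length (cm)"] > 3.5)]

    # map_batches is lazy; printing the result shows the plan repr that the
    # testoutput block asserts (MapBatches over a Datastream of 150 rows).
    print(ds.map_batches(transform_batch))
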
6 changes: 3 additions & 3 deletions doc/source/data/glossary.rst
@@ -107,7 +107,7 @@ Ray Datasets Glossary
 
         >>> import ray
         >>> ray.data.from_items(["spam", "ham", "eggs"])
-        Dataset(num_blocks=3, num_rows=3, schema=<class 'str'>)
+        MaterializedDatastream(num_blocks=3, num_rows=3, schema=<class 'str'>)
 
     Tensor Dataset
         A Dataset that represents a collection of ndarrays.
@@ -119,7 +119,7 @@ Ray Datasets Glossary
         >>> import numpy as np
         >>> import ray
         >>> ray.data.from_numpy(np.zeros((100, 32, 32, 3)))
-        Dataset(
+        MaterializedDatastream(
            num_blocks=1,
            num_rows=100,
            schema={__value__: ArrowTensorType(shape=(32, 32, 3), dtype=double)}
@@ -132,7 +132,7 @@ Ray Datasets Glossary
 
         >>> import ray
         >>> ray.data.read_csv("s3://anonymous@air-example-data/iris.csv")
-        Dataset(
+        Datastream(
            num_blocks=1,
            num_rows=150,
            schema={
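A pattern worth noting across these glossary hunks: in-memory constructors such as from_items and from_numpy repr as MaterializedDatastream, while readers such as read_csv stay lazy and repr as plain Datastream. A minimal sketch of the distinction, assuming a Ray build from this era:

    import numpy as np
    import ray

    # In-memory constructors hold their blocks eagerly, hence the
    # MaterializedDatastream(...) reprs asserted above.
    print(ray.data.from_items(["spam", "ham", "eggs"]))
    print(ray.data.from_numpy(np.zeros((100, 32, 32, 3))))

    # Readers defer execution until the data is consumed, hence the plain
    # Datastream(...) repr.
    print(ray.data.read_csv("s3://anonymous@air-example-data/iris.csv"))
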
8 changes: 6 additions & 2 deletions python/ray/data/dataset.py
@@ -440,8 +440,12 @@ def map_batches(
             ...     "age": [4, 14, 9]
             ... })
             >>> ds = ray.data.from_pandas(df)
-            >>> ds
-            Datastream(num_blocks=1, num_rows=3, schema={name: object, age: int64})
+            >>> ds  # doctest: +SKIP
+            MaterializedDatastream(
+               num_blocks=1,
+               num_rows=3,
+               schema={name: object, age: int64}
+            )
 
             Call :meth:`.default_batch_format` to determine the default batch
             type.
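The # doctest: +SKIP directive added above is standard library doctest behavior: the runner neither executes the example nor compares its output, a common escape hatch when a repr is long or version-dependent. A self-contained illustration:

    import doctest

    def demo():
        """
        >>> 1 + 1  # doctest: +SKIP
        3
        """

    # The skipped example is never run, so its stale expected output ("3")
    # cannot fail the suite.
    doctest.testmod()
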
8 changes: 4 additions & 4 deletions python/ray/data/dataset_iterator.py
@@ -56,9 +56,9 @@ class DataIterator(abc.ABC):
        >>> import ray
        >>> ds = ray.data.range(5)
        >>> ds
-       Dataset(num_blocks=5, num_rows=5, schema=<class 'int'>)
+       Datastream(num_blocks=5, num_rows=5, schema=<class 'int'>)
        >>> ds.iterator()
-       DataIterator(Dataset(num_blocks=5, num_rows=5, schema=<class 'int'>))
+       DataIterator(Datastream(num_blocks=5, num_rows=5, schema=<class 'int'>))
        >>> ds = ds.repeat(); ds
        DatasetPipeline(num_windows=inf, num_stages=2)
        >>> ds.iterator()
@@ -648,7 +648,7 @@ def to_tf(
            ...     "s3://anonymous@air-example-data/iris.csv"
            ... )
            >>> it = ds.iterator(); it
-           DataIterator(Dataset(
+           DataIterator(Datastream(
               num_blocks=1,
               num_rows=150,
               schema={
@@ -679,7 +679,7 @@ def to_tf(
            >>> it = preprocessor.transform(ds).iterator()
            >>> it
            DataIterator(Concatenator
-           +- Dataset(
+           +- Datastream(
               num_blocks=1,
               num_rows=150,
               schema={
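For reference, a short sketch of how a DataIterator from these doctests is typically consumed; the iter_batches call is an assumption based on the public iterator API of this period, not part of the diff:

    import ray

    ds = ray.data.range(5)
    it = ds.iterator()
    # The repr wraps the underlying stream, as the updated doctests assert:
    # DataIterator(Datastream(num_blocks=5, num_rows=5, schema=<class 'int'>))
    print(it)

    # Assumed consumption pattern: iterate the stream in fixed-size batches.
    for batch in it.iter_batches(batch_size=2):
        print(batch)
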
2 changes: 1 addition & 1 deletion python/ray/train/torch/torch_trainer.py
@@ -227,7 +227,7 @@ def train_loop_per_worker():
            best_checkpoint_loss = result.metrics['loss']
 
            # Assert loss is less 0.09
-           assert best_checkpoint_loss <= 0.09
+           assert best_checkpoint_loss <= 0.09 # doctest: +SKIP
 
        .. testoutput::
            :hide:
