diff --git a/doc/source/data/examples/batch_inference_object_detection.ipynb b/doc/source/data/examples/batch_inference_object_detection.ipynb
index 71ad25620694..922bb47a6c7b 100644
--- a/doc/source/data/examples/batch_inference_object_detection.ipynb
+++ b/doc/source/data/examples/batch_inference_object_detection.ipynb
@@ -429,7 +429,7 @@
     "from torchvision.transforms.functional import to_pil_image\n",
     "\n",
     "labels = [weights.meta[\"categories\"][i] for i in prediction[\"labels\"]]\n",
-    "box = draw_bounding_boxes(img, \n",
+    "box = draw_bounding_boxes(img,\n",
     "                          boxes=prediction[\"boxes\"],\n",
     "                          labels=labels,\n",
     "                          colors=\"red\",\n",
@@ -444,7 +444,7 @@
    "source": [
     "## Scaling with Ray Data\n",
     "\n",
-    "Then let's see how to scale the previous example to a large set of images. We will use Ray Data to do batch inference in a distributed fashion, leveraging all the CPU and GPU resources in our cluster.\n",
+    "Then let's see how to scale the previous example to a large set of images. We will use Ray Data to do batch inference in a streaming and distributed fashion, leveraging all the CPU and GPU resources in our cluster.\n",
     "\n",
     "### Loading the Image Dataset\n",
     "\n",
@@ -536,7 +536,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "Then we use the {meth}`map <ray.data.Dataset.map>` API to apply the function to the whole dataset. By using Ray Data's map, we can scale out the preprocessing to all the resources in our Ray cluster Note, the `map` method is lazy, it won't perform execution until we start to consume the results."
+    "Then we use the {meth}`map <ray.data.Dataset.map>` API to apply the function to the whole dataset. By using Ray Data's map, we can scale out the preprocessing to all the resources in our Ray cluster. Note, the `map` method is lazy, it won't perform execution until we start to consume the results."
   ]
  },
  {
@@ -944,9 +944,11 @@
   "source": [
    "ds = ds.map_batches(\n",
    "    ObjectDetectionModel,\n",
-    "    concurrency=4, # Use 4 GPUs. Change this number based on the number of GPUs in your cluster.\n",
-    "    batch_size=4, # Use the largest batch size that can fit in GPU memory.\n",
-    "    num_gpus=1, # Specify 1 GPU per model replica. Remove this if you are doing CPU inference.\n",
+    "    # Use 4 GPUs. Change this number based on the number of GPUs in your cluster.\n",
+    "    concurrency=4,\n",
+    "    batch_size=4, # Use the largest batch size that can fit in GPU memory.\n",
+    "    # Specify 1 GPU per model replica. Remove this if you are doing CPU inference.\n",
+    "    num_gpus=1,\n",
    ")"
   ]
  },
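
For reviewers who want to try the reformatted `map_batches` call outside the notebook, a minimal sketch follows. It is not the notebook verbatim: the `ObjectDetectionModel` body and the input path are illustrative stand-ins (the notebook defines its own class and dataset earlier), and the choice of Faster R-CNN weights is only inferred from the `weights.meta["categories"]` usage in the first hunk. Only the `map_batches` arguments mirror the hunk above:

# Minimal sketch of the scaled-inference pattern shown in the last hunk.
# Assumes torchvision >= 0.13 and a Ray cluster with GPUs; drop num_gpus for CPU-only runs.
import torch
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_V2_Weights,
    fasterrcnn_resnet50_fpn_v2,
)

import ray


class ObjectDetectionModel:
    # Illustrative stand-in for the class the notebook defines earlier.
    def __init__(self):
        self.weights = FasterRCNN_ResNet50_FPN_V2_Weights.DEFAULT
        self.model = fasterrcnn_resnet50_fpn_v2(weights=self.weights)
        self.model.eval()
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        self.model.to(self.device)

    def __call__(self, batch):
        # batch["image"] holds HWC uint8 images; convert to CHW float tensors.
        inputs = [
            torch.as_tensor(img).permute(2, 0, 1).float().to(self.device) / 255.0
            for img in batch["image"]
        ]
        with torch.no_grad():
            predictions = self.model(inputs)
        batch["boxes"] = [p["boxes"].cpu().numpy() for p in predictions]
        batch["labels"] = [p["labels"].cpu().numpy() for p in predictions]
        return batch


# Placeholder path; the notebook loads its own example image dataset.
ds = ray.data.read_images("path/to/images")
ds = ds.map_batches(
    ObjectDetectionModel,
    # Use 4 GPUs. Change this number based on the number of GPUs in your cluster.
    concurrency=4,
    batch_size=4,  # Use the largest batch size that can fit in GPU memory.
    # Specify 1 GPU per model replica. Remove this if you are doing CPU inference.
    num_gpus=1,
)
ds.take(1)  # map_batches is lazy; consuming results triggers execution.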