
Fix links in Dask cuDF documentation (#16929)
More follow-up fixes to the recent Dask-cuDF documentation additions.

Authors:
  - Richard (Rick) Zamora (https://github.com/rjzamora)
  - GALI PREM SAGAR (https://github.com/galipremsagar)

Approvers:
  - GALI PREM SAGAR (https://github.com/galipremsagar)
  - Vyas Ramasubramani (https://github.com/vyasr)

URL: #16929
rjzamora committed Sep 26, 2024
1 parent b00a718 commit 742eaad
Showing 3 changed files with 15 additions and 12 deletions.
15 changes: 9 additions & 6 deletions docs/dask_cudf/source/best_practices.rst
@@ -81,7 +81,7 @@ representations, native cuDF spilling may be insufficient. For these cases,
`JIT-unspill <https://docs.rapids.ai/api/dask-cuda/nightly/spilling/#jit-unspill>`__
is likely to produce better protection from out-of-memory (OOM) errors.
Please see `Dask-CUDA's spilling documentation
-<https://docs.rapids.ai/api/dask-cuda/24.10/spilling/>`__ for further details
+<https://docs.rapids.ai/api/dask-cuda/stable/spilling/>`__ for further details
and guidance.
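
A rough sketch of how the spilling options discussed above are usually switched on with Dask-CUDA (the keyword names reflect the options described in the linked spilling documentation and should be confirmed against the installed ``dask_cuda`` release)::

    import cudf
    from dask_cuda import LocalCUDACluster

    # Native cuDF spilling for this process; Dask-CUDA workers typically
    # enable the same behavior via the CUDF_SPILL=on environment variable.
    cudf.set_option("spill", True)

    # Alternative noted above: JIT-unspill on the Dask-CUDA workers, which
    # the text suggests for workflows holding non-cuDF device memory.
    cluster = LocalCUDACluster(jit_unspill=True)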

Use RMM
@@ -160,7 +160,7 @@ of the underlying task graph to materialize the collection.

:func:`sort_values` / :func:`set_index` : These operations both require Dask to
eagerly collect quantile information about the column(s) being targeted by the
-global sort operation. See `Avoid Sorting`__ for notes on sorting considerations.
+global sort operation. See the next section for notes on sorting considerations.

.. note::
When using :func:`set_index`, be sure to pass in ``sort=False`` whenever the
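
A minimal sketch of the difference (the dataset path and column name are hypothetical)::

    import dask_cudf

    ddf = dask_cudf.read_parquet("data/")  # hypothetical input

    # Requires a global sort: Dask eagerly samples quantiles of "id" to
    # compute the new partition boundaries before building the graph.
    ddf_by_id = ddf.set_index("id")

    # Skips the global sort (and the eager quantile collection) when the
    # collection does not need to be ordered by the new index.
    ddf_by_id = ddf.set_index("id", sort=False)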
Expand Down Expand Up @@ -297,11 +297,14 @@ bottleneck is typically device-to-host memory spilling.
Although every workflow is different, the following guidelines
are often recommended:

-* `Use a distributed cluster with Dask-CUDA workers <Use Dask-CUDA>`_
-* `Use native cuDF spilling whenever possible <Enable cuDF Spilling>`_
+* Use a distributed cluster with `Dask-CUDA <https://docs.rapids.ai/api/dask-cuda/stable/>`__ workers

+* Use native cuDF spilling whenever possible (`Dask-CUDA spilling documentation <https://docs.rapids.ai/api/dask-cuda/stable/spilling/>`__)

* Avoid shuffling whenever possible
-* Use ``split_out=1`` for low-cardinality groupby aggregations
-* Use ``broadcast=True`` for joins when at least one collection comprises a small number of partitions (e.g. ``<=5``)
+  * Use ``split_out=1`` for low-cardinality groupby aggregations
+  * Use ``broadcast=True`` for joins when at least one collection comprises a small number of partitions (e.g. ``<=5``)

* `Use UCX <https://docs.rapids.ai/api/dask-cuda/nightly/examples/ucx/>`__ if communication is a bottleneck.

.. note::
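To make the ``split_out`` and ``broadcast`` guidelines above concrete, a hedged sketch (the inputs and column names are hypothetical)::

    import dask_cudf

    ddf = dask_cudf.read_parquet("large_dataset/")    # hypothetical inputs
    small = dask_cudf.read_parquet("small_dataset/")

    # Low-cardinality groupby: split_out=1 keeps the aggregated result in a
    # single output partition and avoids shuffling it.
    means = ddf.groupby("category").mean(split_out=1)

    # Broadcast join: reasonable when `small` comprises only a few partitions.
    joined = ddf.merge(small, on="key", broadcast=True)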
1 change: 1 addition & 0 deletions docs/dask_cudf/source/conf.py
@@ -78,6 +78,7 @@
"cudf": ("https://docs.rapids.ai/api/cudf/stable/", None),
"dask": ("https://docs.dask.org/en/stable/", None),
"pandas": ("https://pandas.pydata.org/docs/", None),
"dask-cuda": ("https://docs.rapids.ai/api/dask-cuda/stable/", None),
}

numpydoc_show_inherited_class_members = True
11 changes: 5 additions & 6 deletions docs/dask_cudf/source/index.rst
@@ -16,10 +16,9 @@ as the ``"cudf"`` dataframe backend for
Neither Dask cuDF nor Dask DataFrame provides support for multi-GPU
or multi-node execution on their own. You must also deploy a
`dask.distributed <https://distributed.dask.org/en/stable/>`__ cluster
-to leverage multiple GPUs. We strongly recommend using `Dask-CUDA
-<https://docs.rapids.ai/api/dask-cuda/stable/>`__ to simplify the
-setup of the cluster, taking advantage of all features of the GPU
-and networking hardware.
+to leverage multiple GPUs. We strongly recommend using :doc:`dask-cuda:index`
+to simplify the setup of the cluster, taking advantage of all features
+of the GPU and networking hardware.

If you are familiar with Dask and `pandas <pandas.pydata.org>`__ or
`cuDF <https://docs.rapids.ai/api/cudf/stable/>`__, then Dask cuDF
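
For readers unfamiliar with the backend mechanism referenced above, a minimal sketch of selecting the ``"cudf"`` backend (the file path is hypothetical)::

    import dask
    import dask.dataframe as dd

    # Select the cuDF backend for new Dask DataFrame collections.
    dask.config.set({"dataframe.backend": "cudf"})

    # The resulting collection is backed by cudf.DataFrame partitions.
    ddf = dd.read_parquet("data/")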
@@ -161,7 +160,7 @@ out-of-core computing. This also means that the compute tasks can be
executed in parallel over a multi-GPU cluster.

In order to execute your Dask workflow on multiple GPUs, you will
-typically need to use `Dask-CUDA <https://docs.rapids.ai/api/dask-cuda/stable/>`__
+typically need to use :doc:`dask-cuda:index`
to deploy a distributed Dask cluster, and
`Distributed <https://distributed.dask.org/en/stable/client.html>`__
to define a client object. For example::
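
A minimal sketch of this cluster-plus-client setup (assuming ``dask_cuda`` and ``distributed`` are installed)::

    from dask_cuda import LocalCUDACluster
    from distributed import Client

    # One Dask-CUDA worker per GPU visible on the local machine.
    cluster = LocalCUDACluster()

    # Attach this Python session to the cluster so that subsequent
    # Dask cuDF work is scheduled on the GPU workers.
    client = Client(cluster)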
@@ -192,7 +191,7 @@ to define a client object. For example::
<https://distributed.dask.org/en/stable/manage-computation.html>`__
for more details.

-Please see the `Dask-CUDA <https://docs.rapids.ai/api/dask-cuda/stable/>`__
+Please see the :doc:`dask-cuda:index`
documentation for more information about deploying GPU-aware clusters
(including `best practices
<https://docs.rapids.ai/api/dask-cuda/stable/examples/best-practices/>`__).
