Commit 95d7b8b
Signed-off-by: Joaquin Anton <[email protected]>
jantonguirao committed Feb 16, 2021
1 parent cce6ba9 commit 95d7b8b
Showing 11 changed files with 24 additions and 27 deletions.
2 changes: 1 addition & 1 deletion dali/python/nvidia/dali/plugin/base_iterator.py
@@ -48,7 +48,7 @@ class _DaliBaseIterator(object):
     Parameters
     ----------
-    pipelines : list of nvidia.dali.pipeline.Pipeline
+    pipelines : list of nvidia.dali.Pipeline
         List of pipelines to use
     output_map : list of (str, str)
         List of pairs (output_name, tag) which maps consecutive
6 changes: 3 additions & 3 deletions dali/python/nvidia/dali/plugin/mxnet.py
@@ -130,7 +130,7 @@ class DALIGenericIterator(_DALIMXNetIteratorBase):
     Parameters
     ----------
-    pipelines : list of nvidia.dali.pipeline.Pipeline
+    pipelines : list of nvidia.dali.Pipeline
         List of pipelines to use
     output_map : list of (str, str)
         List of pairs (output_name, tag) which maps consecutive
@@ -396,7 +396,7 @@ class DALIClassificationIterator(DALIGenericIterator):
     Parameters
     ----------
-    pipelines : list of nvidia.dali.pipeline.Pipeline
+    pipelines : list of nvidia.dali.Pipeline
         List of pipelines to use
     size : int, default = -1
         Number of samples in the shard for the wrapped pipeline (if there is more than one it is a sum)
@@ -537,7 +537,7 @@ class DALIGluonIterator(_DALIMXNetIteratorBase):
     Parameters
     ----------
-    pipelines : list of nvidia.dali.pipeline.Pipeline
+    pipelines : list of nvidia.dali.Pipeline
         List of pipelines to use
     size : int, default = -1
         Number of samples in the shard for the wrapped pipeline (if there is more than one it is a sum)
4 changes: 2 additions & 2 deletions dali/python/nvidia/dali/plugin/paddle.py
@@ -141,7 +141,7 @@ class DALIGenericIterator(_DaliBaseIterator):
     Parameters
     ----------
-    pipelines : list of nvidia.dali.pipeline.Pipeline
+    pipelines : list of nvidia.dali.Pipeline
         List of pipelines to use
     output_map : list of str or pair of type (str, int)
         The strings maps consecutive outputs of DALI pipelines to
@@ -385,7 +385,7 @@ class DALIClassificationIterator(DALIGenericIterator):
     Parameters
     ----------
-    pipelines : list of nvidia.dali.pipeline.Pipeline
+    pipelines : list of nvidia.dali.Pipeline
         List of pipelines to use
     size : int, default = -1
         Number of samples in the shard for the wrapped pipeline (if there is more than one it is a sum)
4 changes: 2 additions & 2 deletions dali/python/nvidia/dali/plugin/pytorch.py
@@ -75,7 +75,7 @@ class DALIGenericIterator(_DaliBaseIterator):
     Parameters
     ----------
-    pipelines : list of nvidia.dali.pipeline.Pipeline
+    pipelines : list of nvidia.dali.Pipeline
         List of pipelines to use
     output_map : list of str
         List of strings which maps consecutive outputs
@@ -295,7 +295,7 @@ class DALIClassificationIterator(DALIGenericIterator):
     Parameters
     ----------
-    pipelines : list of nvidia.dali.pipeline.Pipeline
+    pipelines : list of nvidia.dali.Pipeline
         List of pipelines to use
     size : int, default = -1
         Number of samples in the shard for the wrapped pipeline (if there is more than one it is a sum)
4 changes: 2 additions & 2 deletions dali/python/nvidia/dali/types.py
@@ -141,7 +141,7 @@ class ScalarConstant(object):
     Wrapper for a constant value that can be used in DALI :ref:`mathematical expressions`
     and applied element-wise to the results of DALI Operators representing Tensors in
-    :meth:`nvidia.dali.pipeline.Pipeline.define_graph` step.
+    :meth:`nvidia.dali.Pipeline.define_graph` step.

     ScalarConstant indicates what type should the value be treated as with respect
     to type promotions. The actual values passed to the backend from python
@@ -445,7 +445,7 @@ def _is_scalar_value(value):

 def Constant(value, dtype = None, shape = None, layout = None, device = None, **kwargs):
     """Wraps a constant value which can then be used in
-    :meth:`nvidia.dali.pipeline.Pipeline.define_graph` pipeline definition step.
+    :meth:`nvidia.dali.Pipeline.define_graph` pipeline definition step.

     If the `value` argument is a scalar and neither `shape`, `layout` nor
     `device` is provided, the function will return a :class:`ScalarConstant`
2 changes: 1 addition & 1 deletion dali/test/python/test_torch_pipeline_rnnt.py
@@ -159,7 +159,7 @@ def forward(self, inp, seq_len):
         return x.to(dtype)


-class RnntTrainPipeline(nvidia.dali.pipeline.Pipeline):
+class RnntTrainPipeline(nvidia.dali.Pipeline):
     def __init__(self,
                  device_id,
                  n_devices,
2 changes: 1 addition & 1 deletion docs/advanced_topics_performance_tuning.rst
@@ -106,7 +106,7 @@ To determine the amount of memory output that each operator needs, complete the

 1) Create the pipeline by setting ``enable_memory_stats`` to True.
 2) Query the pipeline for the operator's output memory statistics by calling the
-   :meth:`nvidia.dali.pipeline.Pipeline.executor_statistics` method on the pipeline.
+   :meth:`nvidia.dali.Pipeline.executor_statistics` method on the pipeline.

The ``max_real_memory_size`` value represents the biggest tensor in the batch for the outputs that
allocate memory per sample and not for the entire batch at the time or the average tensor size when
6 changes: 3 additions & 3 deletions docs/examples/getting started.ipynb
@@ -81,10 +81,10 @@
     " | in faster execution speed, but larger memory consumption.\n",
     " | `exec_async` : bool, optional, default = True\n",
     " | Whether to execute the pipeline asynchronously.\n",
-    " | This makes :meth:`nvidia.dali.pipeline.Pipeline.run` method\n",
+    " | This makes :meth:`nvidia.dali.Pipeline.run` method\n",
     " | run asynchronously with respect to the calling Python thread.\n",
     " | In order to synchronize with the pipeline one needs to call\n",
-    " | :meth:`nvidia.dali.pipeline.Pipeline.outputs` method.\n",
+    " | :meth:`nvidia.dali.Pipeline.outputs` method.\n",
     " | `bytes_per_sample` : int, optional, default = 0\n",
     " | A hint for DALI for how much memory to use for its tensors.\n",
     " | `set_affinity` : bool, optional, default = False\n",
@@ -106,7 +106,7 @@
     " | and `y` for mixed and gpu stages. It is not supported when both `exec_async`\n",
     " | and `exec_pipelined` are set to `False`.\n",
     " | Executor will buffer cpu and gpu stages separatelly,\n",
-    " | and will fill the buffer queues when the first :meth:`nvidia.dali.pipeline.Pipeline.run`\n",
+    " | and will fill the buffer queues when the first :meth:`nvidia.dali.Pipeline.run`\n",
     " | is issued.\n",
     " | \n",
     " | Methods defined here:\n",
4 changes: 1 addition & 3 deletions docs/math.rst
@@ -4,11 +4,9 @@ Mathematical Expressions
 ^^^^^^^^^^^^^^^^^^^^^^^^

 DALI allows you to use regular Python arithmetic operations and other mathematical functions in
-the :meth:`~nvidia.dali.pipeline.Pipeline.define_graph` method on the values that are returned
+the :meth:`~nvidia.dali.Pipeline.define_graph` method on the values that are returned
 from invoking other operators.

-Same expressions can be used with :ref:`functional api`.
-
 The expressions that are used will be incorporated into the pipeline without needing to explicitly
 instantiate operators and will describe the element-wise operations on Tensors.
13 changes: 5 additions & 8 deletions docs/pipeline.rst
@@ -3,20 +3,17 @@
 Pipeline
 ========

+.. currentmodule:: nvidia.dali
+
 In DALI, any data processing task has a central object called Pipeline. Pipeline object is an
 instance of :class:`nvidia.dali.Pipeline` or a derived class. Pipeline encapsulates the
 data processing graph and the execution engine.

 You can define a DALI Pipeline in the following ways:

-#. by implementing a function that uses DALI operators inside and decorating it with the
-   :meth:`pipeline_def` decorator
-#. by instantiating :class:`Pipeline` object directly, building the graph and setting the pipeline
-   outputs with :meth:`Pipeline.set_outputs`
-#. by inheriting from :class:`Pipeline` class and overriding :meth:`Pipeline.define_graph`
-   (this is the legacy way of defining DALI Pipelines)
-
-.. currentmodule:: nvidia.dali
+#. By implementing a function that uses DALI operators inside and decorating it with the :meth:`pipeline_def` decorator.
+#. By instantiating :class:`Pipeline` object directly, building the graph and setting the pipeline outputs with :meth:`Pipeline.set_outputs`.
+#. By inheriting from :class:`Pipeline` class and overriding :meth:`Pipeline.define_graph` (this is the legacy way of defining DALI Pipelines).

 .. autoclass:: Pipeline
    :members:
4 changes: 3 additions & 1 deletion docs/supported_ops.rst
@@ -3,7 +3,9 @@ Operations

 .. currentmodule:: nvidia.dali

-Operations can be used to define the data processing graphs within a DALI :ref:`Pipeline <pipeline>`.
+Operations functions are used to define the data processing graph within a DALI :ref:`Pipeline <pipeline>`.
+They accept as inputs and return as outputs :class:`~nvidia.dali.pipeline.DataNode` instances, which represent batches of Tensors.
+It is worth noting that those operation functions can not be used to process data directly.

 The following table lists all available operations available in DALI:
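The rename throughout this commit, from `nvidia.dali.pipeline.Pipeline` to the shorter `nvidia.dali.Pipeline`, only works if the class is re-exported at the package top level (an assumption about DALI's layout: `nvidia/dali/__init__.py` would import `Pipeline` from its `pipeline` submodule). A minimal, self-contained sketch of that re-export pattern, using stand-in module names rather than the real package:

```python
import sys
import types

# Stand-in for nvidia/dali/pipeline.py: the submodule that defines the class.
pipeline_mod = types.ModuleType("dali_demo.pipeline")

class Pipeline:
    """Stand-in for the real DALI Pipeline class."""

pipeline_mod.Pipeline = Pipeline

# Stand-in for nvidia/dali/__init__.py: attach the submodule and
# re-export the class at package level.
pkg = types.ModuleType("dali_demo")
pkg.pipeline = pipeline_mod   # long path:  dali_demo.pipeline.Pipeline
pkg.Pipeline = Pipeline       # short path: dali_demo.Pipeline (the re-export)

# Register both modules so regular imports resolve them.
sys.modules["dali_demo"] = pkg
sys.modules["dali_demo.pipeline"] = pipeline_mod

# Both spellings name the very same class object, which is why docs
# can switch to the shorter one without changing behavior.
import dali_demo
assert dali_demo.Pipeline is dali_demo.pipeline.Pipeline
```

Because the two attribute paths point at the same class object, `isinstance` checks, subclassing (as in the `RnntTrainPipeline` change above), and Sphinx cross-references all remain valid under either spelling.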
