Update quickstart.rst #1460

Merged · 7 commits · Nov 13, 2020

29 changes: 13 additions & 16 deletions docs/source/quickstart.rst
@@ -1,9 +1,9 @@
Quick start
===========

-Welcome to Ignite quick start guide that just gives essentials of getting a project up and running.
+Welcome to the **PyTorch-Ignite** quick start guide, which covers the essentials of getting a project up and running while walking through the code.

-In several lines you can get your model training and validating:
+In just a few lines of code, you can get your model trained and validated, as shown below:

Code
----
@@ -47,12 +47,12 @@ Code
    trainer.run(train_loader, max_epochs=100)


-Complete code can be found in the file `examples/mnist/mnist.py <https://github.com/pytorch/ignite/blob/master/examples/mnist/mnist.py>`_.
+**Note**: The complete code can be found in the file `examples/mnist/mnist.py <https://github.com/pytorch/ignite/blob/master/examples/mnist/mnist.py>`_.

Explanation
-----------

-Now let's break up the code and review it in details. In the first 4 lines we define our model, training and validation
+Now let's break up the code and review it in detail. In the first 4 lines, we define our model, training and validation
datasets (as `torch.utils.data.DataLoader <https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader>`_), optimizer and loss function:

.. code-block:: python
@@ -62,7 +62,7 @@ datasets (as `torch.utils.data.DataLoader <https://pytorch.org/docs/stable/data.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.8)
    criterion = nn.NLLLoss()
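
The lines defining ``model``, ``train_loader`` and ``val_loader`` are collapsed in this diff. As a point of reference, a minimal self-contained setup might look like the following sketch (the small MLP and the MNIST loaders are illustrative, not the exact code of the linked example):

.. code-block:: python

    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    # a small classifier ending in log-softmax, to match the nn.NLLLoss criterion
    model = nn.Sequential(
        nn.Flatten(),
        nn.Linear(784, 128),
        nn.ReLU(),
        nn.Linear(128, 10),
        nn.LogSoftmax(dim=-1),
    )

    transform = transforms.Compose(
        [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
    )
    train_loader = DataLoader(
        datasets.MNIST(".", train=True, download=True, transform=transform),
        batch_size=64,
        shuffle=True,
    )
    val_loader = DataLoader(
        datasets.MNIST(".", train=False, download=True, transform=transform),
        batch_size=1000,
    )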

-Next we define trainer and evaluator engines. In the above example we are using helper methods
+Next, we define trainer and evaluator engines. In this example, we are using the helper methods
:meth:`~ignite.engine.create_supervised_trainer` and :meth:`~ignite.engine.create_supervised_evaluator`:

.. code-block:: python
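
    # The collapsed code here constructs both engines. A sketch, assuming the
    # model, optimizer and criterion defined above (the val_metrics dict is
    # illustrative and is discussed further below):
    from ignite.engine import create_supervised_trainer, create_supervised_evaluator
    from ignite.metrics import Accuracy, Loss

    trainer = create_supervised_trainer(model, optimizer, criterion)

    val_metrics = {"accuracy": Accuracy(), "nll": Loss(criterion)}
    evaluator = create_supervised_evaluator(model, metrics=val_metrics)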
@@ -106,7 +106,7 @@ custom training/validation step logic:
    evaluator = Engine(validation_step)
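
The collapsed code above defines the ``train_step`` and ``validation_step`` process functions. A sketch of what they can look like, assuming ``Engine`` is imported from :mod:`ignite.engine` and ``torch``, ``model``, ``optimizer`` and ``criterion`` are defined as before:

.. code-block:: python

    def train_step(engine, batch):
        model.train()
        optimizer.zero_grad()
        x, y = batch
        y_pred = model(x)
        loss = criterion(y_pred, y)
        loss.backward()
        optimizer.step()
        return loss.item()

    trainer = Engine(train_step)

    def validation_step(engine, batch):
        model.eval()
        with torch.no_grad():
            x, y = batch
            y_pred = model(x)
            return y_pred, y  # metrics attached to the evaluator consume this pair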


-Note that the helper function :meth:`~ignite.engine.create_supervised_evaluator` to create an evaluator accepts an
+Note that the helper function :meth:`~ignite.engine.create_supervised_evaluator` used to create an evaluator accepts an
argument ``metrics``:

.. code-block:: python
@@ -116,13 +116,13 @@ argument ``metrics``:
        'nll': Loss(loss)
    }

-where we define two metrics: *accuracy* and *loss* to compute on validation dataset. More information on
+Here we define two metrics, *accuracy* and *loss*, to compute on the validation dataset. More information on
metrics can be found at :doc:`metrics`.
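
As a quick usage illustration, running the evaluator returns its state, and the computed values land in the ``metrics`` dictionary of that state (the numbers below are purely illustrative):

.. code-block:: python

    state = evaluator.run(val_loader)
    print(state.metrics)  # e.g. {'accuracy': 0.98, 'nll': 0.06}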


The most interesting part of the code snippet is adding event handlers. :class:`~ignite.engine.engine.Engine` allows adding handlers on
-various events that triggered during the run. When an event is triggered, attached handlers (functions) are executed. Thus, for
-logging purposes we added a function to be executed at the end of every ``log_interval``-th iteration:
+various events that are triggered during the run. When an event is triggered, attached handlers (functions) are executed. Thus, for
+logging purposes, we add a function to be executed at the end of every ``log_interval``-th iteration:

.. code-block:: python
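
    # A sketch of the collapsed handler body; log_interval is assumed to be an
    # integer defined earlier (e.g. log_interval = 10). With
    # create_supervised_trainer, engine.state.output is the loss value.
    @trainer.on(Events.ITERATION_COMPLETED)
    def log_training_loss(engine):
        if engine.state.iteration % log_interval == 0:
            print(f"Epoch[{engine.state.epoch}] Loss: {engine.state.output:.2f}")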

@@ -139,7 +139,7 @@ or equivalently without the decorator

    trainer.add_event_handler(Events.ITERATION_COMPLETED, log_training_loss)

-When an epoch ends we want compute training and validation metrics [#f1]_. For that purpose we can run previously defined
+When an epoch ends, we want to compute training and validation metrics [#f1]_. For that purpose, we can run the previously defined
``evaluator`` on ``train_loader`` and ``val_loader``. Therefore we attach two additional handlers to the trainer on epoch
complete event:
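
A sketch of these two handlers, following the pattern above (handler names are illustrative):

.. code-block:: python

    @trainer.on(Events.EPOCH_COMPLETED)
    def log_training_results(engine):
        evaluator.run(train_loader)
        metrics = evaluator.state.metrics
        print(f"Training Results - Epoch: {engine.state.epoch} "
              f"Avg accuracy: {metrics['accuracy']:.2f} Avg loss: {metrics['nll']:.2f}")

    @trainer.on(Events.EPOCH_COMPLETED)
    def log_validation_results(engine):
        evaluator.run(val_loader)
        metrics = evaluator.state.metrics
        print(f"Validation Results - Epoch: {engine.state.epoch} "
              f"Avg accuracy: {metrics['accuracy']:.2f} Avg loss: {metrics['nll']:.2f}")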

@@ -182,12 +182,10 @@ Finally, we start the engine on the training dataset and run it during 100 epoch

.. rubric:: Footnotes

-.. [#f1]
-
-In this example we follow a pattern that requires a second pass through the training set. This
+.. [#f1] In this example, we follow a pattern that requires a second pass through the training set. This
could be expensive on large datasets (even taking a subset). Another more common pattern is to accumulate
-measures online over an epoch in the training loop. In this case metrics are aggregated on a moving model,
-and thus, we do not want to encourage this pattern. However, if user still would like to implement the
+measures online over an epoch in the training loop. In this case, metrics are aggregated on a moving model,
+and thus, we do not want to encourage this pattern. However, if a user still wants to implement the
last pattern, it can be easily done by attaching metrics to the trainer as follows:

.. code-block:: python
@@ -210,4 +208,3 @@ Finally, we start the engine on the training dataset and run it during 100 epoch
    }
    for name, metric in val_metrics.items():
        metric.attach(trainer, name)
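
Once metrics are attached to the trainer this way, their values become available in ``trainer.state.metrics`` during the run, provided the trainer's process function returns the ``(y_pred, y)`` pairs these metrics expect. A minimal sketch of reading them:

.. code-block:: python

    @trainer.on(Events.EPOCH_COMPLETED)
    def log_trainer_metrics(engine):
        # filled in by the metrics attached above
        print(engine.state.metrics)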