[Don't Merge] Tutorials Refresh. Creating for preview. #902

Closed
wants to merge 15 commits into from
11 changes: 11 additions & 0 deletions .jenkins/build.sh
@@ -76,6 +76,17 @@ if [[ "${JOB_BASE_NAME}" == *worker_* ]]; then
FILES_TO_RUN+=($(basename $filename .py))
fi
count=$((count+1))
done
done
for filename in $(find recipes_source/ -name '*.py' -not -path '*/data/*'); do
  if [ $(($count % $NUM_WORKERS)) != $WORKER_ID ]; then
    echo "Removing runnable code from "$filename
    python $DIR/remove_runnable_code.py $filename $filename
  else
    echo "Keeping "$filename
    FILES_TO_RUN+=($(basename $filename .py))
  fi
  count=$((count+1))
done
echo "FILES_TO_RUN: " ${FILES_TO_RUN[@]}

162 changes: 117 additions & 45 deletions beginner_source/blitz/tensor_tutorial.py
@@ -3,101 +3,147 @@
What is PyTorch?
================

It’s a Python-based scientific computing package targeted at two sets of
It is an open source machine learning framework that accelerates the
path from research prototyping to production deployment.

PyTorch is built as a Python-based scientific computing package targeted at two sets of
audiences:

- A replacement for NumPy to use the power of GPUs
- a deep learning research platform that provides maximum flexibility
and speed
- Those who are looking for a replacement for NumPy to use the power of GPUs.
- Researchers who want to build with a deep learning platform that provides maximum flexibility
and speed.

Getting Started
---------------

In this section of the tutorial, we will introduce the concept of a tensor in PyTorch and its operations.

Tensors
^^^^^^^

Tensors are similar to NumPy’s ndarrays, with the addition being that
Tensors can also be used on a GPU to accelerate computing.
A tensor is a generic n-dimensional array. Tensors in PyTorch are similar to NumPy’s ndarrays,
with the addition being that tensors can also be used on a GPU to accelerate computing.

To see the behavior of tensors, we will first need to import PyTorch into our program.
"""

from __future__ import print_function
import torch

###############################################################
# .. note::
# An uninitialized matrix is declared,
# but does not contain definite known
# values before it is used. When an
# uninitialized matrix is created,
# whatever values were in the allocated
# memory at the time will appear as the initial values.
"""
We import from ``__future__`` here to help port our code from Python 2 to Python 3.
For more details, see the `Python-Future technical documentation <https://python-future.org/quickstart.html>`_.

Let's take a look at how we can create tensors.
"""

###############################################################
# Construct a 5x3 matrix, uninitialized:
# First, construct a 5x3 empty matrix:

x = torch.empty(5, 3)
print(x)

"""
``torch.empty`` creates an uninitialized matrix of type tensor.
When an empty tensor is declared, it does not contain definite known values
before you populate it. The values in the empty tensor are those that were in
the allocated memory at the time of initialization.
"""

###############################################################
# Construct a randomly initialized matrix:
# Now, construct a randomly initialized matrix:

x = torch.rand(5, 3)
print(x)

"""
``torch.rand`` creates an initialized matrix of type tensor, with values
sampled uniformly at random from the interval [0, 1).
"""

###############################################################
# Construct a matrix filled with zeros and of dtype ``long``:

x = torch.zeros(5, 3, dtype=torch.long)
print(x)

"""
``torch.zeros`` creates an initialized matrix of type tensor with every
element set to zero.
"""

###############################################################
# Construct a tensor directly from data:
# Let's construct a tensor with data that we define ourselves:

x = torch.tensor([5.5, 3])
print(x)

"""
Our tensor can represent all types of data. This data can be an audio waveform, the
pixels of an image, or even entities of a language.

PyTorch has packages that support these specific data types. For additional learning, see:
- `torchvision <https://pytorch.org/docs/stable/torchvision/index.html>`_
- `torchtext <https://pytorch.org/text/>`_
- `torchaudio <https://pytorch.org/audio/>`_
"""

###############################################################
# or create a tensor based on an existing tensor. These methods
# will reuse properties of the input tensor, e.g. dtype, unless
# new values are provided by user
# You can create a tensor based on an existing tensor. These methods
# reuse the properties of the input tensor, e.g. ``dtype``, unless
# new values are provided by the user.
#

x = x.new_ones(5, 3, dtype=torch.double) # new_* methods take in sizes
x = x.new_ones(5, 3, dtype=torch.double)
print(x)

x = torch.randn_like(x, dtype=torch.float) # override dtype!
print(x) # result has the same size

"""
``tensor.new_*`` methods take in sizes (and, optionally, a ``dtype``) and
reuse the remaining properties of the input tensor; here, ``new_ones``
returns a tensor of the given size filled with ones.

In this example, ``torch.randn_like`` creates a new tensor based upon the
input tensor, and overrides the ``dtype`` to be a float. The output of
this method is a tensor of the same size but a different ``dtype``.
"""

###############################################################
# Get its size:
# We can get the size of a tensor as a tuple:

print(x.size())

###############################################################
# .. note::
# ``torch.Size`` is in fact a tuple, so it supports all tuple operations.
# Since ``torch.Size`` is a tuple, it supports all tuple operations.
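#
# For example, a minimal sketch of tuple unpacking (``x`` is the 5x3
# tensor from the cells above):

rows, cols = x.size()  # unpack like any other tuple
print(rows, cols)

###############################################################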
#
# Operations
# ^^^^^^^^^^
# There are multiple syntaxes for operations. In the following
# example, we will take a look at the addition operation.
# There are multiple syntaxes for operations that can be performed on tensors.
# In the following example, we will take a look at the addition operation.
#
# Addition: syntax 1
# First, let's try using the ``+`` operator.

y = torch.rand(5, 3)
print(x + y)

###############################################################
# Addition: syntax 2
# Using the ``+`` operator produces the same output as using the
# ``add()`` method.

print(torch.add(x, y))

###############################################################
# Addition: providing an output tensor as argument
# You can also provide an output tensor as an argument to the ``add()``
# method; it will be populated with the result of the operation.

result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)

###############################################################
# Addition: in-place
# Finally, you can perform this operation in-place.

# adds x to y
y.add_(x)
@@ -107,21 +153,29 @@
# .. note::
# Any operation that mutates a tensor in-place is post-fixed with an ``_``.
# For example: ``x.copy_(y)``, ``x.t_()``, will change ``x``.
#
# You can use standard NumPy-like indexing with all bells and whistles!
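
###############################################################
# A minimal sketch of that naming convention in action (``w`` is a
# clone, so the demo does not disturb ``x`` for the cells below):

w = x.clone()
w.t_()           # in-place transpose; note the trailing underscore
print(w.size())  # torch.Size([3, 5])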

###############################################################
# Similar to NumPy, tensors can be indexed using the standard
# Python ``x[i]`` syntax, where ``x`` is the tensor and ``i`` is the selection.
#
# In fact, you can use NumPy-like indexing with all its bells and whistles!

print(x[:, 1])

###############################################################
# Resizing: If you want to resize/reshape tensor, you can use ``torch.view``:
# Resizing your tensors might be necessary for your data.
# If you want to resize or reshape a tensor, you can use ``torch.Tensor.view``:

x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8) # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())

###############################################################
# If you have a one element tensor, use ``.item()`` to get the value as a
# Python number
# You can access the value of a one-element tensor as a Python number using ``.item()``.
# If you have a multidimensional tensor, see the
# `tolist() <https://pytorch.org/docs/stable/tensors.html#torch.Tensor.tolist>`_ method.

x = torch.randn(1)
print(x)
print(x.item())
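
###############################################################
# A minimal sketch of ``tolist()`` (``m`` is a throwaway 2x2 tensor
# created just for this demo):

m = torch.randn(2, 2)
print(m.tolist())  # nested Python lists, one level per dimension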
@@ -130,43 +184,55 @@
# **Read later:**
#
#
# 100+ Tensor operations, including transposing, indexing, slicing,
# mathematical operations, linear algebra, random numbers, etc.,
# are described
# `here <https://pytorch.org/docs/torch>`_.
# This was just a sample of the 100+ Tensor operations you have
# access to in PyTorch. There are many others, including transposing,
# indexing, slicing, mathematical operations, linear algebra,
# random numbers, and more. Read and explore more about them in our
# `technical documentation <https://pytorch.org/docs/torch>`_.
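#
# Here is a tiny taste (a sketch; ``t`` is a throwaway tensor created
# just for this demo):

t = torch.randn(3, 3)
print(t.t().size())  # transposing
print(t[0, :2])      # slicing
print(t.mean())      # a mathematical reduction

###############################################################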
#
# NumPy Bridge
# ------------
#
# Converting a Torch Tensor to a NumPy array and vice versa is a breeze.
# As mentioned earlier, one of the benefits of using PyTorch is that it
# is built to provide a seamless transition from NumPy.
#
# For example, converting a Torch Tensor to a NumPy array (and vice versa)
# is a breeze.
#
# The Torch Tensor and NumPy array will share their underlying memory
# locations (if the Torch Tensor is on CPU), and changing one will change
# locations (if the Torch Tensor is on CPU). That means changing one will change
# the other.
#
# Let's see this in action.
#
# Converting a Torch Tensor to a NumPy Array
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# First, construct a 1-dimensional tensor populated with ones.

a = torch.ones(5)
print(a)

###############################################################
#
# Now, let's construct a NumPy array based on that tensor.

b = a.numpy()
print(b)

###############################################################
# See how the numpy array changed in value.
# Let's see how they share their memory locations. Add ``1`` to the Torch Tensor.

a.add_(1)
print(a)
print(b)

###############################################################
# Take note of how the NumPy array also changed in value.

###############################################################
# Converting NumPy Array to Torch Tensor
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# See how changing the np array changed the Torch Tensor automatically
# Try the same thing for converting a NumPy array to a Torch Tensor.
# See how changing the NumPy array changes the Torch Tensor automatically as well.

import numpy as np
a = np.ones(5)
@@ -176,15 +242,17 @@
print(b)

###############################################################
# All the Tensors on the CPU except a CharTensor support converting to
# All the Tensors on the CPU (except a CharTensor) support converting to
# NumPy and back.
#
# CUDA Tensors
# ------------
#
# Tensors can be moved onto any device using the ``.to`` method.
# The following code block can be run by changing the runtime of
# your notebook to "GPU".

# let us run this cell only if CUDA is available
# This cell will run only if CUDA is available
# We will use ``torch.device`` objects to move tensors in and out of GPU
if torch.cuda.is_available():
device = torch.device("cuda") # a CUDA device object
@@ -193,3 +261,7 @@
z = x + y
print(z)
print(z.to("cpu", torch.double)) # ``.to`` can also change dtype together!

###############################################################
# Now that you have had time to experiment with Tensors in PyTorch, let's take
# a look at Automatic Differentiation.
12 changes: 7 additions & 5 deletions conf.py
@@ -34,15 +34,15 @@
import torch
import glob
import shutil
from custom_directives import IncludeDirective, GalleryItemDirective, CustomGalleryItemDirective
from custom_directives import IncludeDirective, GalleryItemDirective, CustomGalleryItemDirective, CustomCalloutItemDirective, CustomCardItemDirective


try:
import torchvision
except ImportError:
import warnings
warnings.warn('unable to load "torchvision" package')
import pytorch_sphinx_theme
#import pytorch_sphinx_theme

# -- General configuration ------------------------------------------------

@@ -61,8 +61,8 @@

sphinx_gallery_conf = {
'examples_dirs': ['beginner_source', 'intermediate_source',
'advanced_source'],
'gallery_dirs': ['beginner', 'intermediate', 'advanced'],
'advanced_source', 'recipes_source'],
'gallery_dirs': ['beginner', 'intermediate', 'advanced', 'recipes'],
'filename_pattern': os.environ.get('GALLERY_PATTERN', r'tutorial.py'),
'backreferences_dir': False
}
@@ -162,7 +162,7 @@


html_theme = 'pytorch_sphinx_theme'
html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
html_theme_path = ['../../pytorch_sphinx_theme-1']
html_logo = '_static/img/pytorch-logo-dark.svg'
html_theme_options = {
'pytorch_project': 'tutorials',
@@ -237,3 +237,5 @@ def setup(app):
app.add_directive('includenodoc', IncludeDirective)
app.add_directive('galleryitem', GalleryItemDirective)
app.add_directive('customgalleryitem', CustomGalleryItemDirective)
app.add_directive('customcarditem', CustomCardItemDirective)
app.add_directive('customcalloutitem', CustomCalloutItemDirective)