Commit 3dd2843

Author: Jessica Lin
Merge pull request #889 from CamiWilliams/60minblitz-cw-update
60-Min Blitz: What is PyTorch
2 parents 1b42103 + 667dce6, commit 3dd2843


beginner_source/blitz/tensor_tutorial.py

Lines changed: 117 additions & 45 deletions
@@ -3,101 +3,147 @@
 What is PyTorch?
 ================

-It’s a Python-based scientific computing package targeted at two sets of
+It is an open source machine learning framework that accelerates the
+path from research prototyping to production deployment.
+
+PyTorch is built as a Python-based scientific computing package targeted at two sets of
 audiences:

--  A replacement for NumPy to use the power of GPUs
--  a deep learning research platform that provides maximum flexibility
-   and speed
+-  Those who are looking for a replacement for NumPy to use the power of GPUs.
+-  Researchers who want to build with a deep learning platform that provides maximum flexibility
+   and speed.

 Getting Started
 ---------------

+In this section of the tutorial, we will introduce the concept of a tensor in PyTorch and its operations.
+
 Tensors
 ^^^^^^^

-Tensors are similar to NumPy’s ndarrays, with the addition being that
-Tensors can also be used on a GPU to accelerate computing.
+A tensor is a generic n-dimensional array. Tensors in PyTorch are similar to NumPy’s ndarrays,
+with the addition being that tensors can also be used on a GPU to accelerate computing.
+
+To see the behavior of tensors, we will first need to import PyTorch into our program.
 """

 from __future__ import print_function
 import torch

-###############################################################
-# .. note::
-#     An uninitialized matrix is declared,
-#     but does not contain definite known
-#     values before it is used. When an
-#     uninitialized matrix is created,
-#     whatever values were in the allocated
-#     memory at the time will appear as the initial values.
+"""
+We import ``future`` here to help port our code from Python 2 to Python 3.
+For more details, see the `Python-Future technical documentation <https://python-future.org/quickstart.html>`_.
+
+Let's take a look at how we can create tensors.
+"""

 ###############################################################
-# Construct a 5x3 matrix, uninitialized:
+# First, construct a 5x3 empty matrix:

 x = torch.empty(5, 3)
 print(x)
+
+"""
+``torch.empty`` creates an uninitialized matrix of type tensor.
+When an empty tensor is declared, it does not contain definite known values
+before you populate it. The values in the empty tensor are those that were in
+the allocated memory at the time of initialization.
+"""

 ###############################################################
-# Construct a randomly initialized matrix:
+# Now, construct a randomly initialized matrix:

 x = torch.rand(5, 3)
 print(x)

+"""
+``torch.rand`` creates an initialized matrix of type tensor with a random
+sampling of values.
+"""
+
 ###############################################################
 # Construct a matrix filled with zeros and of dtype long:

 x = torch.zeros(5, 3, dtype=torch.long)
 print(x)

+"""
+``torch.zeros`` creates an initialized matrix of type tensor with every
+index having a value of zero.
+"""
+
 ###############################################################
-# Construct a tensor directly from data:
+# Let's construct a tensor with data that we define ourselves:

 x = torch.tensor([5.5, 3])
 print(x)

+"""
+Our tensor can represent all types of data. This data can be an audio waveform, the
+pixels of an image, or even entities of a language.
+
+PyTorch has packages that support these specific data types. For additional learning, see:
+
+-  `torchvision <https://pytorch.org/docs/stable/torchvision/index.html>`_
+-  `torchtext <https://pytorch.org/text/>`_
+-  `torchaudio <https://pytorch.org/audio/>`_
+"""
+
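
To make "all types of data" concrete, here is a small sketch; the shapes are conventional examples, not requirements of PyTorch:

    waveform = torch.rand(16000)                 # 1 second of mono audio at 16 kHz
    image = torch.rand(3, 224, 224)              # an RGB image: channels x height x width
    token_ids = torch.tensor([101, 2023, 102])   # a tokenized snippet of text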
 ###############################################################
-# or create a tensor based on an existing tensor. These methods
-# will reuse properties of the input tensor, e.g. dtype, unless
-# new values are provided by user
+# You can create a tensor based on an existing tensor. These methods
+# reuse the properties of the input tensor, e.g. ``dtype``, unless
+# new values are provided by the user.
+#

-x = x.new_ones(5, 3, dtype=torch.double)      # new_* methods take in sizes
+x = x.new_ones(5, 3, dtype=torch.double)
 print(x)

 x = torch.randn_like(x, dtype=torch.float)    # override dtype!
 print(x)                                      # result has the same size

+"""
+``tensor.new_*`` methods take in the size of the new tensor (and, optionally,
+a ``dtype``); ``new_ones`` returns a tensor filled with ones.
+
+In this example, ``torch.randn_like`` creates a new tensor based upon the
+input tensor and overrides the ``dtype`` to be a float. The output of
+this method is a tensor of the same size but a different ``dtype``.
+"""
+
 ###############################################################
-# Get its size:
+# We can get the size of a tensor as a tuple:

 print(x.size())
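
Because ``x.size()`` returns a ``torch.Size``, which behaves like a Python tuple (see the note below), you can unpack it directly; a minimal sketch:

    rows, cols = x.size()   # tuple-style unpacking
    print(rows, cols)       # 5 3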

 ###############################################################
 # .. note::
-#   ``torch.Size`` is in fact a tuple, so it supports all tuple operations.
+#   Since ``torch.Size`` is a tuple, it supports all tuple operations.
 #
 # Operations
 # ^^^^^^^^^^
-# There are multiple syntaxes for operations. In the following
-# example, we will take a look at the addition operation.
+# There are multiple syntaxes for operations that can be performed on tensors.
+# In the following example, we will take a look at the addition operation.
 #
-# Addition: syntax 1
+# First, let's try using the ``+`` operator.
+
 y = torch.rand(5, 3)
 print(x + y)

 ###############################################################
-# Addition: syntax 2
+# Using the ``+`` operator should have the same output as using the
+# ``add()`` method.

 print(torch.add(x, y))
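
You can confirm that the two syntaxes agree; a quick illustrative check:

    print(torch.equal(x + y, torch.add(x, y)))   # True: identical results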

 ###############################################################
-# Addition: providing an output tensor as argument
+# You can also provide an output tensor as an argument to ``add()``;
+# it will be filled with the result of the operation.
+
 result = torch.empty(5, 3)
 torch.add(x, y, out=result)
 print(result)

 ###############################################################
-# Addition: in-place
+# Finally, you can perform this operation in-place.

 # adds x to y
 y.add_(x)
@@ -107,21 +153,29 @@
 # .. note::
 #   Any operation that mutates a tensor in-place is post-fixed with an ``_``.
 #   For example: ``x.copy_(y)`` and ``x.t_()`` will change ``x``.
-#
-# You can use standard NumPy-like indexing with all bells and whistles!
+
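
A minimal illustration of the trailing-underscore convention (a sketch, not part of this diff):

    t = torch.ones(2, 3)
    t.t_()              # in-place transpose; t is now 3x2
    print(t.size())     # torch.Size([3, 2])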
+###############################################################
+# Similar to NumPy, tensors can be indexed using the standard
+# Python ``x[i]`` syntax, where ``x`` is the array and ``i`` is the selection.
+#
+# Beyond that, you can use NumPy-like indexing with all its bells and whistles!

 print(x[:, 1])
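
A few of those bells and whistles, sketched for illustration:

    print(x[1, :])     # the second row
    print(x[x > 0])    # boolean masking: all positive elements
    print(x[:2, -1])   # slicing combined with negative indexing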

 ###############################################################
-# Resizing: If you want to resize/reshape tensor, you can use ``torch.view``:
+# Resizing your tensors might be necessary for your data.
+# If you want to resize or reshape a tensor, you can use ``torch.view``:
+
 x = torch.randn(4, 4)
 y = x.view(16)
 z = x.view(-1, 8)  # the size -1 is inferred from the other dimensions
 print(x.size(), y.size(), z.size())
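
``view`` only rearranges the existing elements, so any target shape must multiply out to the same total count; a short sketch:

    w = x.view(2, 2, 4)   # 2 * 2 * 4 = 16 elements, so this works
    print(w.size())       # torch.Size([2, 2, 4])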

 ###############################################################
-# If you have a one element tensor, use ``.item()`` to get the value as a
-# Python number
+# You can access the value of a one-element tensor as a Python number using ``.item()``.
+# If you have a multidimensional tensor, see the
+# `tolist() <https://pytorch.org/docs/stable/tensors.html#torch.Tensor.tolist>`_ method.
+
 x = torch.randn(1)
 print(x)
 print(x.item())
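
For a tensor with more than one element, ``tolist()`` returns nested Python lists instead; a quick sketch:

    m = torch.tensor([[1, 2], [3, 4]])
    print(m.tolist())   # [[1, 2], [3, 4]]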
@@ -130,43 +184,55 @@
 # **Read later:**
 #
 #
-#   100+ Tensor operations, including transposing, indexing, slicing,
-#   mathematical operations, linear algebra, random numbers, etc.,
-#   are described
-#   `here <https://pytorch.org/docs/torch>`_.
+#   This was just a sample of the 100+ Tensor operations you have
+#   access to in PyTorch. There are many others, including transposing,
+#   indexing, slicing, mathematical operations, linear algebra,
+#   random numbers, and more. Read and explore more about them in our
+#   `technical documentation <https://pytorch.org/docs/torch>`_.
 #
 # NumPy Bridge
 # ------------
 #
-# Converting a Torch Tensor to a NumPy array and vice versa is a breeze.
+# As mentioned earlier, one of the benefits of using PyTorch is that it
+# is built to provide a seamless transition from NumPy.
+#
+# For example, converting a Torch Tensor to a NumPy array (and vice versa)
+# is a breeze.
 #
 # The Torch Tensor and NumPy array will share their underlying memory
-# locations (if the Torch Tensor is on CPU), and changing one will change
+# locations (if the Torch Tensor is on CPU). That means changing one will change
 # the other.
 #
+# Let's see this in action.
+#
 # Converting a Torch Tensor to a NumPy Array
 # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+# First, construct a 1-dimensional tensor populated with ones.

 a = torch.ones(5)
 print(a)

 ###############################################################
-#
+# Now, let's construct a NumPy array based off of that tensor.

 b = a.numpy()
 print(b)

 ###############################################################
-# See how the numpy array changed in value.
+# Let's see how they share their memory locations. Add ``1`` to the Torch tensor.

 a.add_(1)
 print(a)
 print(b)

+###############################################################
+# Take note of how the NumPy array also changed in value.
+
 ###############################################################
 # Converting NumPy Array to Torch Tensor
 # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-# See how changing the np array changed the Torch Tensor automatically
+# Try the same thing for NumPy to Torch Tensor.
+# See how changing the NumPy array changes the Torch Tensor automatically as well.

 import numpy as np
 a = np.ones(5)
@@ -176,15 +242,17 @@
 print(b)

 ###############################################################
-# All the Tensors on the CPU except a CharTensor support converting to
+# All the Tensors on the CPU (except a CharTensor) support converting to
 # NumPy and back.
 #
 # CUDA Tensors
 # ------------
 #
 # Tensors can be moved onto any device using the ``.to`` method.
+# The following code block can be run by changing the runtime in
+# your notebook to "GPU" or greater.

-# let us run this cell only if CUDA is available
+# This cell will run only if CUDA is available
 # We will use ``torch.device`` objects to move tensors in and out of GPU
 if torch.cuda.is_available():
     device = torch.device("cuda")          # a CUDA device object
@@ -193,3 +261,7 @@
     z = x + y
     print(z)
     print(z.to("cpu", torch.double))       # ``.to`` can also change the dtype at the same time!
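
A common device-agnostic variant of this pattern (an illustrative sketch, not part of this diff):

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    u = torch.ones(5, 3, device=device)   # created directly on the chosen device
    print(u.to("cpu"))                    # move back to the CPU when needed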
+
+###############################################################
+# Now that you have had time to experiment with Tensors in PyTorch, let's take
+# a look at Automatic Differentiation.
