What is PyTorch?
================

It is an open source machine learning framework that accelerates the
path from research prototyping to production deployment.

PyTorch is built as a Python-based scientific computing package targeted at two sets of
audiences:

- Those who are looking for a replacement for NumPy to use the power of GPUs.
- Researchers who want to build with a deep learning platform that provides maximum flexibility
  and speed.

Getting Started
---------------

In this section of the tutorial, we will introduce the concept of a tensor in PyTorch and its operations.

Tensors
^^^^^^^

A tensor is a generic n-dimensional array. Tensors in PyTorch are similar to NumPy’s ndarrays,
with the addition being that tensors can also be used on a GPU to accelerate computing.

To see the behavior of tensors, we will first need to import PyTorch into our program.
"""

from __future__ import print_function
import torch

"""
We import ``__future__`` here to help port our code from Python 2 to Python 3.
For more details, see the `Python-Future technical documentation <https://python-future.org/quickstart.html>`_.

Let's take a look at how we can create tensors.
"""

###############################################################
# First, construct a 5x3 empty matrix:

x = torch.empty(5, 3)
print(x)

"""
``torch.empty`` creates an uninitialized matrix of type tensor.
When an empty tensor is declared, it does not contain definite known values
before you populate it. The values in the empty tensor are those that were in
the allocated memory at the time of initialization.
"""

###############################################################
# Now, construct a randomly initialized matrix:

x = torch.rand(5, 3)
print(x)

"""
``torch.rand`` creates an initialized matrix of type tensor with a random
sampling of values.
"""

###############################################################
# Construct a matrix filled with zeros and of dtype ``long``:

x = torch.zeros(5, 3, dtype=torch.long)
print(x)

"""
``torch.zeros`` creates an initialized matrix of type tensor with every
element having a value of zero.
"""

###############################################################
# Let's construct a tensor with data that we define ourselves:

x = torch.tensor([5.5, 3])
print(x)

"""
Our tensor can represent all types of data. This data can be an audio waveform, the
pixels of an image, or even the words of a language.

PyTorch has packages that support these specific data types. For additional learning, see:

- `torchvision <https://pytorch.org/docs/stable/torchvision/index.html>`_
- `torchtext <https://pytorch.org/text/>`_
- `torchaudio <https://pytorch.org/audio/>`_
"""

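###############################################################
# As a small illustrative sketch, such data is simply a tensor of a
# suitable shape. The shapes and names below are hypothetical
# conventions for this example, not requirements:

image = torch.rand(3, 64, 64)    # a hypothetical 3-channel, 64x64 image
waveform = torch.rand(1, 16000)  # a hypothetical 1-second mono clip at 16 kHz
print(image.size(), waveform.size())
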
###############################################################
# You can create a tensor based on an existing tensor. These methods
# reuse the properties of the input tensor, e.g. ``dtype``, unless
# new values are provided by the user.

x = x.new_ones(5, 3, dtype=torch.double)
print(x)

x = torch.randn_like(x, dtype=torch.float)  # override dtype!
print(x)                                    # result has the same size

"""
The ``tensor.new_*`` methods take in the size of the new tensor (plus
optional properties such as ``dtype``); here, ``new_ones`` returns a
tensor of the given size filled with ones.

In this example, ``torch.randn_like`` creates a new tensor based upon the
input tensor, and overrides the ``dtype`` to be a float. The output of
this method is a tensor of the same size but a different ``dtype``.
"""

###############################################################
# We can get the size of a tensor as a tuple:

print(x.size())

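###############################################################
# As a quick sketch of that tuple-like behavior, the returned
# ``torch.Size`` can be unpacked directly into Python variables:

rows, cols = x.size()
print(rows, cols)
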
###############################################################
# .. note::
#     Since ``torch.Size`` is a tuple, it supports all tuple operations.
#
# Operations
# ^^^^^^^^^^
# There are multiple syntaxes for operations that can be performed on tensors.
# In the following example, we will take a look at the addition operation.
#
# First, let's try using the ``+`` operator.

y = torch.rand(5, 3)
print(x + y)

###############################################################
# Using the ``+`` operator should have the same output as using the
# ``add()`` method.

print(torch.add(x, y))

###############################################################
# You can also provide an output tensor as an argument to the ``add()``
# method; it will hold the result of the operation.

result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)

###############################################################
# Finally, you can perform this operation in-place.

# adds x to y
y.add_(x)
print(y)

###############################################################
# .. note::
#     Any operation that mutates a tensor in-place is post-fixed with an ``_``.
#     For example: ``x.copy_(y)``, ``x.t_()``, will change ``x``.

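###############################################################
# A small sketch of this naming convention, using a throwaway tensor
# ``t`` so we don't disturb ``x`` and ``y``:

t = torch.zeros(2, 3)
t.copy_(torch.ones(2, 3))  # in-place copy: ``t`` is now all ones
t.t_()                     # in-place transpose: ``t`` is now 3x2
print(t)
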
###############################################################
# Similar to NumPy, tensors can be indexed using the standard
# Python ``x[i]`` syntax, where ``x`` is the array and ``i`` is the selection.
#
# That said, you can use NumPy-like indexing with all its bells and whistles!

print(x[:, 1])

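###############################################################
# A quick sketch of a few more of those bells and whistles: slices and
# boolean masks work just like in NumPy.

print(x[1:3, :])  # rows 1 and 2, all columns
print(x[x > 0])   # all positive elements, flattened into a 1-D tensor
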
###############################################################
# Resizing your tensors might be necessary for your data.
# If you want to resize or reshape a tensor, you can use the ``view`` method:

x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8)  # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())

###############################################################
# You can access the value of a one-element tensor as a Python number
# using ``.item()``. If you have a multidimensional tensor, see the
# `tolist() <https://pytorch.org/docs/stable/tensors.html#torch.Tensor.tolist>`_ method.

x = torch.randn(1)
print(x)
print(x.item())
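
###############################################################
# A quick sketch of ``tolist()`` on a multidimensional tensor: it
# returns a nested Python list rather than a single number.

m = torch.randn(2, 2)  # a throwaway example tensor
print(m.tolist())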

###############################################################
# **Read later:**
#
# This was just a sample of the 100+ Tensor operations you have
# access to in PyTorch. There are many others, including transposing,
# indexing, slicing, mathematical operations, linear algebra,
# random numbers, and more. Read and explore more about them in our
# `technical documentation <https://pytorch.org/docs/torch>`_.
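
###############################################################
# As a tiny taste of those operations, here is a quick sketch of two
# of them, transposing and matrix multiplication:

m = torch.rand(2, 3)
print(m.t())        # transpose: a 2x3 tensor becomes 3x2
print(m.t().mm(m))  # matrix multiplication: (3x2) x (2x3) -> 3x3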

###############################################################
# NumPy Bridge
# ------------
#
# As mentioned earlier, one of the benefits of using PyTorch is that it
# is built to provide a seamless transition from NumPy.
#
# For example, converting a Torch Tensor to a NumPy array (and vice versa)
# is a breeze.
#
# The Torch Tensor and NumPy array will share their underlying memory
# locations (if the Torch Tensor is on CPU). That means changing one will change
# the other.
#
# Let's see this in action.
#
# Converting a Torch Tensor to a NumPy Array
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# First, construct a 1-dimensional tensor populated with ones.

|
150 | 212 | a = torch.ones(5) |
151 | 213 | print(a) |
152 | 214 |
|
###############################################################
# Now, let's construct a NumPy array based on that tensor.

b = a.numpy()
print(b)

###############################################################
# Let's see how they share their memory locations. Add ``1`` to the Torch Tensor.

a.add_(1)
print(a)
print(b)

###############################################################
# Take note of how the NumPy array also changed in value.

167 | 232 | # Converting NumPy Array to Torch Tensor |
168 | 233 | # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
169 | | -# See how changing the np array changed the Torch Tensor automatically |
| 234 | +# Try the same thing for NumPy to Torch Tensor. |
| 235 | +# See how changing the NumPy array changed the Torch Tensor automatically as well. |
170 | 236 |
|
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)

###############################################################
# All the Tensors on the CPU (except a CharTensor) support converting to
# NumPy and back.
#
# CUDA Tensors
# ------------
#
# Tensors can be moved onto any device using the ``.to`` method.
# The following code block can be run by changing your notebook's
# runtime to a GPU.

# This cell will run only if CUDA is available.
# We will use ``torch.device`` objects to move tensors in and out of the GPU.
if torch.cuda.is_available():
    device = torch.device("cuda")          # a CUDA device object
    y = torch.ones_like(x, device=device)  # directly create a tensor on GPU
    x = x.to(device)                       # or just use strings ``.to("cuda")``
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))       # ``.to`` can also change dtype together!

###############################################################
# Now that you have had time to experiment with Tensors in PyTorch, let's take
# a look at Automatic Differentiation.