"""
Zeroing out gradients in PyTorch
================================
It is beneficial to zero out gradients when building a neural network.
This is because, by default, gradients are accumulated in buffers (i.e.,
not overwritten) whenever ``.backward()`` is called.

Introduction
------------
When training your neural network, models are able to increase their
accuracy through gradient descent. In short, gradient descent is the
process of minimizing our loss (or error) by tweaking the weights and
biases in our model.
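
As an illustrative sketch (separate from the ``CIFAR10`` recipe below), a
single gradient descent step on a toy weight looks like this::

   import torch

   w = torch.tensor(1.0, requires_grad=True)  # a single toy weight
   loss = (w * 3.0 - 6.0) ** 2                # toy loss, minimized at w = 2
   loss.backward()                            # computes d(loss)/dw into w.grad
   with torch.no_grad():
       w -= 0.1 * w.grad                      # tweak the weight against the gradient
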
``torch.Tensor`` is the central class of PyTorch. When you create a
tensor, if you set its attribute ``.requires_grad`` as ``True``, the
package tracks all operations on it. This happens on subsequent backward
passes. The gradient for this tensor will be accumulated into the ``.grad``
attribute. The accumulation (or sum) of all the gradients is calculated
when ``.backward()`` is called on the loss tensor.
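
For instance (a small sketch, separate from the recipe itself), calling
``.backward()`` twice without zeroing in between shows the accumulation::

   import torch

   x = torch.tensor(2.0, requires_grad=True)
   (x * 3).backward()
   print(x.grad)    # tensor(3.)
   (x * 3).backward()
   print(x.grad)    # tensor(6.) -- the two gradients were summed, not overwritten
   x.grad.zero_()   # zero out the accumulated gradient by hand
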
There are cases where it may be necessary to zero out the gradients of a
tensor. For example, when you start your training loop, you should zero
out the gradients so that you can perform this tracking correctly.
In this recipe, we will learn how to zero out gradients using the
PyTorch library. We will demonstrate how to do this by training a neural
network on the ``CIFAR10`` dataset built into PyTorch.

Setup
-----
Since we will be training a network in this recipe, if you are in a
runnable notebook, it is best to switch the runtime to GPU or TPU.
Before we begin, we need to install ``torch`` and ``torchvision`` if
they aren’t already available.

::

   pip install torchvision

"""
"""Steps
34
-
-----------------
35
-
Steps 1 through 4 set up our data and neural network for training. The process of zeroing out the gradients happens in step 5. If you already have your data and neural network built, skip to 5.
36
43
37
-
1. Import all necessary libraries for loading our data
38
-
2. Load and normalize the dataset
39
-
3. Build the neural network
40
-
4. Define the loss function
41
-
5. Zero the gradients while training the network
42
-
43
-
### **1) Import necessary libraries for loading our data**
44
-
For this recipe, we will just be using ``torch`` and ``torchvision`` to access the dataset.
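#
# A minimal sketch of the imports this step needs (the elided later steps
# that build the network and optimizer would additionally pull in
# ``torch.nn`` and ``torch.optim``):

import torch
import torchvision
import torchvision.transforms as transforms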
"""### **5) Zero the gradients while training the network**
110
-
This is when things start to get interesting. We simply have to loop over our data iterator, and feed the inputs to the network and optimize.
111
141
112
-
Notice that for each entity of data, we zero out the gradients. This is to ensure that we aren't tracking any unnecessary information when we train our neural network.
# This is when things start to get interesting. We simply have to loop
# over our data iterator, feed the inputs to the network, and optimize.
#
# Notice that for each batch of data, we zero out the gradients. This is
# to ensure that we aren’t tracking any unnecessary information when we
# train our neural network.
#

for epoch in range(2):  # loop over the dataset multiple times
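    # NOTE: a sketch of the loop body; it assumes the elided earlier steps
    # defined ``trainloader``, ``net``, ``criterion``, and ``optimizer``.
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients accumulated from the previous batch
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print running statistics every 2000 mini-batches
        running_loss += loss.item()
        if i % 2000 == 1999:
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0
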
print('Finished Training')

# You can also use ``model.zero_grad()``. This is the same as using
# ``optimizer.zero_grad()`` as long as all your model parameters are in
# that optimizer. Use your best judgement to decide which one to use.
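#
# For example (a sketch that assumes the ``net`` and ``optimizer`` objects
# used in the training loop above), both calls below clear the same gradients:

net.zero_grad()        # clears the gradient of every parameter in the model
optimizer.zero_grad()  # clears the gradients of every parameter the optimizer manages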
#
# Congratulations! You have successfully zeroed out gradients in PyTorch.
#
# Learn More
# ----------
# Take a look at these other recipes to continue your learning: