
Commit 7e9180a

Author: Svetlana Karslioglu
Remove links to torchvision finetuning tutorial (#2199)
1 parent 65a1b64 commit 7e9180a

2 files changed, +1 −6 lines changed

beginner_source/basics/autogradqs_tutorial.py

Lines changed: 1 addition & 3 deletions
@@ -130,9 +130,7 @@
 
 ######################################################################
 # There are reasons you might want to disable gradient tracking:
-# - To mark some parameters in your neural network as **frozen parameters**. This is
-#   a very common scenario for
-#   `finetuning a pretrained network <https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html>`__
+# - To mark some parameters in your neural network as **frozen parameters**.
 # - To **speed up computations** when you are only doing forward pass, because computations on tensors that do
 #   not track gradients would be more efficient.
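For context, the retained bullets describe two uses of disabling gradient tracking. A minimal, illustrative sketch of both mechanisms (freezing parameters with requires_grad_(False) and running a forward-only pass under torch.no_grad()); the toy linear layer is an assumption for illustration, not part of the tutorial:

import torch
import torch.nn as nn

layer = nn.Linear(10, 5)

# Freeze the layer's parameters so autograd treats them as constants ("frozen parameters").
for param in layer.parameters():
    param.requires_grad_(False)

x = torch.randn(1, 10)

# Forward-only pass: inside torch.no_grad() no computation graph is built,
# which is the "speed up computations" case mentioned above.
with torch.no_grad():
    y = layer(x)

print(y.requires_grad)  # False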

beginner_source/blitz/autograd_tutorial.py

Lines changed: 0 additions & 3 deletions
@@ -276,9 +276,6 @@
 # It is useful to "freeze" part of your model if you know in advance that you won't need the gradients of those parameters
 # (this offers some performance benefits by reducing autograd computations).
 #
-# Another common usecase where exclusion from the DAG is important is for
-# `finetuning a pretrained network <https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html>`__
-#
 # In finetuning, we freeze most of the model and typically only modify the classifier layers to make predictions on new labels.
 # Let's walk through a small example to demonstrate this. As before, we load a pretrained resnet18 model, and freeze all the parameters.
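For context, the kept prose describes the finetuning pattern that follows in the tutorial: freeze a pretrained resnet18 and update only the classifier head. A hedged sketch of that pattern; the 10-class output size and the SGD hyperparameters are illustrative assumptions, not taken from the tutorial:

import torch
from torch import nn
from torchvision import models

# Load a pretrained resnet18 and freeze every parameter (exclude them from the DAG).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head; a freshly created layer requires gradients by default,
# so only this layer is updated during finetuning. The 10-class output is illustrative.
model.fc = nn.Linear(model.fc.in_features, 10)

# Optimize only the (unfrozen) classifier parameters.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)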
