4 changes: 1 addition & 3 deletions beginner_source/basics/autogradqs_tutorial.py
@@ -130,9 +130,7 @@

 ######################################################################
 # There are reasons you might want to disable gradient tracking:
-# - To mark some parameters in your neural network as **frozen parameters**. This is
-#   a very common scenario for
-#   `finetuning a pretrained network <https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html>`__
+# - To mark some parameters in your neural network as **frozen parameters**.
 # - To **speed up computations** when you are only doing forward pass, because computations on tensors that do
 #   not track gradients would be more efficient.

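As a quick illustration of the two reasons the tutorial keeps (frozen parameters and forward-only speedups), here is a minimal sketch of disabling gradient tracking in PyTorch. It is not part of the diff; the `nn.Linear` model and tensor shapes are placeholders chosen for the example.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # placeholder model for illustration

# Frozen parameters: autograd stops computing gradients for them.
for param in model.parameters():
    param.requires_grad_(False)

# Forward-only speedup: disable gradient tracking for the whole block.
x = torch.randn(4, 10)
with torch.no_grad():
    out = model(x)

print(out.requires_grad)  # False: the output is not tracked by autograd
```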
3 changes: 0 additions & 3 deletions beginner_source/blitz/autograd_tutorial.py
@@ -276,9 +276,6 @@
 # It is useful to "freeze" part of your model if you know in advance that you won't need the gradients of those parameters
 # (this offers some performance benefits by reducing autograd computations).
 #
-# Another common usecase where exclusion from the DAG is important is for
-# `finetuning a pretrained network <https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html>`__
-#
 # In finetuning, we freeze most of the model and typically only modify the classifier layers to make predictions on new labels.
 # Let's walk through a small example to demonstrate this. As before, we load a pretrained resnet18 model, and freeze all the parameters.

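For context on the surrounding tutorial text (loading a pretrained resnet18, freezing its parameters, and only training the classifier layers), here is a minimal sketch of that finetuning pattern. It assumes a recent torchvision with the `weights=` argument, and the 10-class head is an arbitrary example, not something taken from the PR.

```python
import torch
from torch import nn, optim
from torchvision import models

# Load a pretrained resnet18 and freeze all of its parameters.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head; its new parameters do require gradients,
# so only this layer is updated during finetuning.
model.fc = nn.Linear(model.fc.in_features, 10)  # 10 classes: arbitrary example
optimizer = optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
```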