From 343c9aa8280cbcbab5c55bffb57248a8b5d253ac Mon Sep 17 00:00:00 2001
From: Svetlana Karslioglu
Date: Wed, 8 Feb 2023 10:02:09 -0800
Subject: [PATCH] Remove links to torchvision finetuning tutorial

---
 beginner_source/basics/autogradqs_tutorial.py | 4 +---
 beginner_source/blitz/autograd_tutorial.py    | 3 ---
 2 files changed, 1 insertion(+), 6 deletions(-)

diff --git a/beginner_source/basics/autogradqs_tutorial.py b/beginner_source/basics/autogradqs_tutorial.py
index ef05ad4aaa6..d8b53d6175b 100644
--- a/beginner_source/basics/autogradqs_tutorial.py
+++ b/beginner_source/basics/autogradqs_tutorial.py
@@ -130,9 +130,7 @@
 ######################################################################
 # There are reasons you might want to disable gradient tracking:
-#   - To mark some parameters in your neural network as **frozen parameters**. This is
-#     a very common scenario for
-#     `finetuning a pretrained network `__
+#   - To mark some parameters in your neural network as **frozen parameters**.
 #   - To **speed up computations** when you are only doing forward pass, because computations on tensors that do
 #     not track gradients would be more efficient.

diff --git a/beginner_source/blitz/autograd_tutorial.py b/beginner_source/blitz/autograd_tutorial.py
index 67336be7fa1..5c0ce2d0a7b 100644
--- a/beginner_source/blitz/autograd_tutorial.py
+++ b/beginner_source/blitz/autograd_tutorial.py
@@ -276,9 +276,6 @@
 # It is useful to "freeze" part of your model if you know in advance that you won't need the gradients of those parameters
 # (this offers some performance benefits by reducing autograd computations).
 #
-# Another common usecase where exclusion from the DAG is important is for
-# `finetuning a pretrained network `__
-#
 # In finetuning, we freeze most of the model and typically only modify the classifier layers to make predictions on new labels.
 # Let's walk through a small example to demonstrate this. As before, we load a pretrained resnet18 model, and freeze all the parameters.
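
The frozen-parameters scenario that both hunks refer to looks roughly like the sketch below. This is not part of the patch; it assumes torchvision 0.13+ is installed, and the 10-class output head and SGD settings are illustrative values only.

# Minimal sketch of "frozen parameters" for finetuning (illustrative, not from the patch).
import torch
from torch import nn, optim
from torchvision import models

# Load a pretrained resnet18, as in the blitz autograd tutorial text above.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze every parameter so autograd does not track gradients for them.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier layer; new modules have requires_grad=True by default,
# so only this layer is updated during finetuning. 10 classes is an assumed example value.
model.fc = nn.Linear(model.fc.in_features, 10)

# Pass only the unfrozen classifier parameters to the optimizer.
optimizer = optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)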