This repository was archived by the owner on Aug 28, 2025. It is now read-only.

Commit a2617fa

speediedan, Borda, and rohitgr7 authored
Minor Finetuning Scheduler Tutorial Update (#176)
* update fts link, add advanced feature ref, cleanup depth logging
* recheck with updated fts deps
* Apply suggestions from code review

Co-authored-by: Jirka Borovec <[email protected]>
Co-authored-by: Rohit Gupta <[email protected]>
1 parent aa39ef1 commit a2617fa

File tree

2 files changed: +12 -5 lines changed


lightning_examples/finetuning-scheduler/.meta.yml

Lines changed: 3 additions & 3 deletions
@@ -1,11 +1,11 @@
 title: Finetuning Scheduler
 author: "[Dan Dale](https://github.com/speediedan)"
 created: 2021-11-29
-updated: 2022-05-10
+updated: 2022-06-10
 license: CC BY-SA
-build: 3
+build: 0
 tags:
-  - finetuning
+  - Finetuning
 description: |
   This notebook introduces the [Finetuning Scheduler](https://finetuning-scheduler.readthedocs.io/en/stable/index.html) extension
   and demonstrates the use of it to finetune a small foundational model on the

lightning_examples/finetuning-scheduler/finetuning-scheduler.py

Lines changed: 9 additions & 2 deletions
@@ -153,6 +153,8 @@
 # - ``DDP_SHARDED``
 # - ``DDP_SHARDED_SPAWN``
 #
+# Custom or officially unsupported strategies can be used by setting [FinetuningScheduler.allow_untested](https://finetuning-scheduler.readthedocs.io/en/stable/api/finetuning_scheduler.fts.html?highlight=allow_untested#finetuning_scheduler.fts.FinetuningScheduler.params.allow_untested) to ``True``.
+# Note that most currently unsupported strategies are so because they require varying degrees of modification to be compatible (e.g. ``deepspeed`` requires an ``add_param_group`` method, ``tpu_spawn`` an override of the current broadcast method to include python objects)
 # </div>

 # %% [markdown]
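The ``allow_untested`` flag introduced in the added lines is a constructor argument of the callback. A minimal sketch of enabling it, assuming a hypothetical strategy name registered with your Lightning installation (the strategy string below is a placeholder, not something this commit prescribes):

    import pytorch_lightning as pl
    from finetuning_scheduler import FinetuningScheduler

    # Bypass the supported-strategy check so an officially untested strategy can be tried.
    # "my_untested_strategy" is a placeholder, not a real registered strategy name.
    trainer = pl.Trainer(
        callbacks=[FinetuningScheduler(allow_untested=True)],
        strategy="my_untested_strategy",
    )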
@@ -387,9 +389,12 @@ def training_step(self, batch, batch_idx):
         self.log("train_loss", loss)
         return loss

-    def training_epoch_end(self, outputs: List[Any]) -> None:
+    def on_train_epoch_start(self) -> None:
         if self.finetuningscheduler_callback:
-            self.log("finetuning_schedule_depth", float(self.finetuningscheduler_callback.curr_depth))
+            self.logger.log_metrics(
+                metrics={"finetuning_schedule_depth": float(self.finetuningscheduler_callback.curr_depth)},
+                step=self.global_step,
+            )

     def validation_step(self, batch, batch_idx, dataloader_idx=0):
         outputs = self(**batch)
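The ``finetuningscheduler_callback`` attribute referenced in this hunk is defined elsewhere in the tutorial and is not part of the diff; a rough sketch of how such a reference could be resolved from the trainer (an illustrative helper, not the tutorial's actual implementation):

    from finetuning_scheduler import FinetuningScheduler

    def resolve_fts_callback(trainer):
        # Return the first FinetuningScheduler callback attached to the Trainer, if any.
        fts_callbacks = [cb for cb in trainer.callbacks if isinstance(cb, FinetuningScheduler)]
        return fts_callbacks[0] if fts_callbacks else None

Logging through ``self.logger.log_metrics`` with ``step=self.global_step`` at the start of each epoch ties the current schedule depth to the global step, rather than relying on the ``training_epoch_end`` override that this commit removes.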
@@ -524,6 +529,8 @@ def configure_optimizers(self):
 # used in other pytorch-lightning tutorials) also work with FinetuningScheduler. Though the LR scheduler is theoretically
 # justified [(Loshchilov & Hutter, 2016)](#f4), the particular values provided here are primarily empircally driven.
 #
+# [FinetuningScheduler](https://finetuning-scheduler.readthedocs.io/en/stable/api/finetuning_scheduler.fts.html#finetuning_scheduler.fts.FinetuningScheduler) also supports LR scheduler
+# reinitialization in both explicit and implicit finetuning schedule modes. See the [advanced usage documentation](https://finetuning-scheduler.readthedocs.io/en/stable/advanced/lr_scheduler_reinitialization.html) for explanations and demonstration of the extension's support for more complex requirements.
 # </div>
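The advanced usage documentation linked in the added comment describes declaring LR scheduler reinitialization directives inside an explicit finetuning schedule. Assuming the callback is handed such a schedule file (the file name and its contents are placeholders drawn from the linked docs, not from this commit), wiring it up could look like:

    from finetuning_scheduler import FinetuningScheduler

    # Hypothetical explicit schedule file that, per the linked advanced usage docs,
    # can carry per-phase LR scheduler reinitialization directives.
    fts_callback = FinetuningScheduler(ft_schedule="./explicit_reinit_schedule.yaml")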
