Deprecate and remove on_epoch_start/end and on_batch_start/end hooks #10807

@rohitgr7

Description

Proposed refactor

I propose to deprecate and remove on_epoch_start/end and on_batch_start/end hooks.

Motivation

  • on_epoch_start/on_epoch_end: These two hooks currently run within every mode (train/val/test). We already have on_train_epoch_start/on_validation_epoch_start/on_test_epoch_start. The reason we kept them is to have a common hook that is called in every mode, so that users can set up operations that need to happen in each of them. But I think this is a fairly specific use case that users can easily configure themselves without this hook (see the sketch after this list). It is also confusing for val/test, because an "epoch" doesn't mean anything during evaluation. Moreover, there are many other mode-specific hooks that don't have a special version that runs for all modes. For example, we don't have separate on_start/on_end hooks or an on_dataloader hook that run alongside on_{train/val/test}_start/end, so why the special treatment here?

  • on_batch_start/on_batch_end: They run alongside on_train_batch_start/on_train_batch_end and don't add anything beyond them, so we should remove them as well. We could make them run within every mode instead, but then the same points as above apply: do we even need them?
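
As a concrete illustration, here is a minimal sketch of how a user can reproduce the "runs in every mode" behavior with only the mode-specific hooks. The hook names are the standard LightningModule ones; the shared helper method is hypothetical:

```python
import pytorch_lightning as pl


class MyModel(pl.LightningModule):
    # Hypothetical helper holding whatever per-epoch setup used to live in
    # the mode-agnostic on_epoch_start.
    def _on_any_epoch_start(self, mode: str) -> None:
        print(f"starting a new {mode} epoch")

    def on_train_epoch_start(self) -> None:
        self._on_any_epoch_start("train")

    def on_validation_epoch_start(self) -> None:
        self._on_any_epoch_start("validation")

    def on_test_epoch_start(self) -> None:
        self._on_any_epoch_start("test")
```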

History:
All these hooks seem to have been added at the project's inception; the best I could find is this PR, which is very old.
The behavior that enabled on_epoch_start/end for all modes was, I think, discussed/approved over Slack, and I added it 😅 a while back: #6498

Pitch

Simply deprecate and remove.
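
A rough sketch of how the deprecation phase could surface, assuming Lightning's rank_zero_deprecation utility (available in pytorch_lightning.utilities at the time of writing) and an illustrative, not actual, validation helper:

```python
import pytorch_lightning as pl
from pytorch_lightning.utilities import rank_zero_deprecation

_DEPRECATED_HOOKS = ("on_epoch_start", "on_epoch_end", "on_batch_start", "on_batch_end")


def _warn_on_deprecated_hooks(model: pl.LightningModule) -> None:
    # Illustrative check, run once before fitting: warn if the user overrode
    # any of the hooks this issue proposes to remove.
    for hook in _DEPRECATED_HOOKS:
        if getattr(type(model), hook, None) is not getattr(pl.LightningModule, hook, None):
            rank_zero_deprecation(
                f"`LightningModule.{hook}` is deprecated and will be removed; "
                f"use the mode-specific hooks (e.g. `on_train_epoch_start`) instead."
            )
```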

Additional context


If you enjoy Lightning, check out our other projects! ⚡

  • Metrics: Machine learning metrics for distributed, scalable PyTorch applications.

  • Lite: Enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.

  • Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.

  • Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.

  • Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.

cc @tchaton @carmocca @awaelchli @Borda @ninginthecloud @justusschock @akihironitta

Labels: deprecation (Includes a deprecation), hooks (Related to the hooks API)
