
[RFC] Remove {running,accumulated}_loss #9372

@carmocca

Description

Proposed refactoring or deprecation

Remove the following code: a979944

Motivation

The running loss is a running window of loss values returned by the training_step. It has been present since the very beginning of Lightning and has become legacy code.
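For context, the running loss is essentially a mean over a fixed-size window of recent training_step loss values. A minimal self-contained sketch of that behavior (the window size of 3 here is illustrative, not Lightning's actual hard-coded default):

```python
from collections import deque


class RunningWindowMean:
    """Toy stand-in for the running loss: a mean over the last N values."""

    def __init__(self, window_size: int) -> None:
        # A deque with maxlen drops the oldest value once the window is full.
        self._window = deque(maxlen=window_size)

    def append(self, value: float) -> None:
        self._window.append(value)

    @property
    def mean(self) -> float:
        return sum(self._window) / len(self._window)


window = RunningWindowMean(window_size=3)
for loss in [1.0, 2.0, 3.0, 4.0]:
    window.append(loss)

# The progress bar shows the window mean (3.0), not the last loss (4.0),
# which is exactly the mismatch users notice against their logged loss.
print(window.mean)  # 3.0
```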

Problems:

  • Users are sometimes confused by the value when they don't know it's a running window and compare it to the actual loss value they logged via self.log.
  • Often users self.log their actual loss which makes them see two "loss" values in the progress bar.
  • To disable it, you have to override the get_progress_bar_dict hook which is inconvenient.
  • The running window configuration is opaque to the user as it's hard-coded in the TrainingBatchLoop.__init__.
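For reference, the workaround in the third bullet looks roughly like the sketch below. get_progress_bar_dict is the real hook name; the base class is a stand-in so the snippet runs without Lightning installed, and its default dict contents are an assumption for illustration only.

```python
class LightningModuleStub:
    """Stand-in for pytorch_lightning.LightningModule (assumed default:
    the progress-bar dict contains the running-window "loss" entry)."""

    def get_progress_bar_dict(self):
        return {"loss": "0.123", "v_num": 0}


class MyModel(LightningModuleStub):
    def get_progress_bar_dict(self):
        # Drop the running loss so only explicitly logged values appear.
        items = super().get_progress_bar_dict()
        items.pop("loss", None)
        return items


print(MyModel().get_progress_bar_dict())  # {'v_num': 0}
```

Requiring users to override a hook just to hide one entry is the inconvenience the bullet refers to.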

Alternative:

Pitch

Remove the code. I don't think there's anything to deprecate here.

cc @awaelchli @ananthsub


