Proposed refactoring or deprecation
Remove the following code: a979944
Motivation
The running loss is a running window of loss values returned by the training_step. It has been present since the very beginning of Lightning and has become legacy code.
Problems:
- Users are sometimes confused by the value when they don't know it's a running window and compare it to the actual loss value they `self.log`-ed.
- Often users `self.log` their actual loss, which makes them see two "loss" values in the progress bar.
- To disable it, you have to override the `get_progress_bar_dict` hook, which is inconvenient (see the sketch after this list).
- The running window configuration is opaque to the user as it's hard-coded in `TrainingBatchLoop.__init__`.
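For context, disabling it today requires an override along these lines; a minimal sketch, assuming a plain `LightningModule` (the class name is illustrative, `"loss"` is the key the default progress bar dict uses for the running value):

```python
import pytorch_lightning as pl


class MyModel(pl.LightningModule):
    def get_progress_bar_dict(self):
        # start from the default entries (running "loss", "v_num", ...)
        items = super().get_progress_bar_dict()
        # drop the running-window loss from the progress bar
        items.pop("loss", None)
        return items
```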
Alternative:
- Can be entirely replaced by asking users to call `self.log("loss", loss, prog_bar=True)` (see the sketch after this list).
- If the users still want to keep the "value window" functionality, this could be done by logging a `torchmetrics.Metric` specialized for it, as sketched below. (Is there a Metric to replace the `TensorRunningAccum` already? cc @justusschock @awaelchli @akihironitta @rohitgr7 @SeanNaren @kaushikb11 @SkafteNicki)
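A minimal sketch of what the replacement looks like on the user side; the module, data shapes, and optimizer are illustrative:

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.mse_loss(self.layer(x), y)
        # show the actual step loss in the progress bar instead of the running window
        self.log("loss", loss, prog_bar=True)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```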
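And if someone wants the window back, a sketch of a specialized metric; `RunningWindowMean` and its default window size are made up here for illustration, not an existing `torchmetrics` class:

```python
import torch
from torchmetrics import Metric


class RunningWindowMean(Metric):
    """Mean over the last `window_size` values, similar to what TensorRunningAccum provides."""

    def __init__(self, window_size: int = 20):
        super().__init__()
        self.window_size = window_size
        # list state: one entry appended per update
        self.add_state("values", default=[], dist_reduce_fx="cat")

    def update(self, value: torch.Tensor) -> None:
        self.values.append(value.detach().float().reshape(1))
        # keep only the most recent `window_size` entries
        if len(self.values) > self.window_size:
            self.values = self.values[-self.window_size :]

    def compute(self) -> torch.Tensor:
        # after a distributed sync the state may already be a single tensor
        values = torch.cat(self.values) if isinstance(self.values, list) else self.values
        return values.mean()
```

Inside `training_step` this would be updated and logged like any other metric, e.g. `self.loss_window(loss)` followed by `self.log("loss_window", self.loss_window, prog_bar=True)`.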
Pitch
Remove the code; I don't think there's anything to deprecate here.
- `get_progress_bar_dict` stays for the `v_num` and `split_idx`.
- The `TrainingBatchLoop.{accumulated,running}_loss` attributes should be private.
- The `FitLoop.running_loss` property seems to be there only for the `Tuner` and could be considered private: https://grep.app/search?q=fit_loop.running_loss
- No project seems to be using the `TensorRunningAccum`: https://grep.app/search?q=TensorRunningAccum