🚀 Feature
Deprecate flush_logs_every_n_steps from Trainer and make it available as a parameter to loggers that have this capability.
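For illustration, a minimal sketch of the change from the user's perspective (the flush_logs_every_n_steps argument on CSVLogger is the proposed API, not an existing one):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import CSVLogger

# Today: flushing is configured on the Trainer, independent of the logger.
trainer = Trainer(logger=CSVLogger("logs"), flush_logs_every_n_steps=100)

# Proposed: flushing becomes a constructor argument of loggers that support it.
logger = CSVLogger("logs", flush_logs_every_n_steps=100)
trainer = Trainer(logger=logger)
```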
Motivation
We are auditing the Lightning components and APIs to assess opportunities for improvement:
- https://docs.google.com/document/d/1xHU7-iQSpp9KJTjI3As2EM0mfNHHr37WZYpDpwLkivA/edit#
- Review Lightning architecture & API #7740
Flushing should be considered an internal implementation detail of each logger. For example, TensorBoard automatically flushes logs to disk at a configurable interval (flush_secs).
Currently, flushing logs is configured through Trainer, which is the wrong level of abstraction. Worse, setting flush_logs_every_n_steps with a TensorBoard logger doesn't actually flush anything to disk: it only controls how often log_metrics is called, which can mislead users.
Prior issue: #4664
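To make the contrast concrete, here is a sketch assuming TensorBoardLogger forwards extra keyword arguments to the underlying SummaryWriter, which already owns a flush_secs setting:

```python
from torch.utils.tensorboard import SummaryWriter
from pytorch_lightning.loggers import TensorBoardLogger

# SummaryWriter already flushes pending events to disk on its own schedule,
# every `flush_secs` seconds (120 by default).
writer = SummaryWriter(log_dir="logs", flush_secs=30)

# TensorBoardLogger passes extra keyword arguments through to SummaryWriter,
# so real flushing is already configurable at the logger level, not via Trainer.
logger = TensorBoardLogger(save_dir="logs", flush_secs=30)
```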
Pitch
Deprecate flush_logs_every_n_steps from Trainer, and move it to the __init__ of logger classes that support this functionality (e.g. CSVLogger).
The logger connector already passes the step along (self.trainer.logger.agg_and_log_metrics(scalar_metrics, step=step)), so we can move the flushing logic into a utility function shared by those loggers.
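A minimal sketch of how this could look, reusing CSVLogger's existing save() hook; the FlushingCSVLogger name and the modulo check are illustrative only, not the final design:

```python
from pytorch_lightning.loggers import CSVLogger


class FlushingCSVLogger(CSVLogger):
    """Sketch of a logger that owns its own flush cadence."""

    def __init__(self, save_dir, flush_logs_every_n_steps=100, **kwargs):
        super().__init__(save_dir, **kwargs)
        self._flush_logs_every_n_steps = flush_logs_every_n_steps

    def log_metrics(self, metrics, step=None):
        super().log_metrics(metrics, step)
        # Reuse the step the logger connector already supplies; flush
        # buffered rows to disk once every N logged steps.
        if step is not None and (step + 1) % self._flush_logs_every_n_steps == 0:
            self.save()
```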
Alternatives
Additional context
If you enjoy Lightning, check out our other projects! ⚡
- Metrics: machine learning metrics for distributed, scalable PyTorch applications.
- Flash: the fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
- Bolts: pretrained SOTA deep learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
- Lightning Transformers: a flexible interface for high-performance research using SOTA Transformers, leveraging PyTorch Lightning, Transformers, and Hydra.