🚀 Feature
Remove these two arguments from the Trainer constructor.
Motivation
We are auditing the Lightning components and APIs to assess opportunities for improvements:
- https://docs.google.com/document/d/1xHU7-iQSpp9KJTjI3As2EM0mfNHHr37WZYpDpwLkivA/edit#
- Review Lightning architecture & API #7740
The Trainer today has over 50 constructor arguments. This number grows with each feature release and clutters the Trainer. It also hurts the Trainer's extensibility: many arguments passed to the Trainer exist only to customize other utilities. Plumbing arguments through the Trainer creates undesirable coupling: when the underlying components change, the framework is forced to make breaking API changes in at least two places:
- The underlying component
- The trainer constructor
Example: #8062
- We now have to change the Trainer API in order to address `weights_summary` implementation changes (or deprecate `weights_summary` off the constructor in favor of a callback: Deprecate summarize() off LightningModule #8478)
Example: #8780
- `log_gpu_memory` accepting `min_max` is hyper-specific to `nvidia-smi` and isn't applicable to `torch.cuda` memory stats
Upcoming examples:
- Supporting new types of progress bars: feat: Add Rich Progress Bar #8929
- But our current Trainer arguments for progress bars could be specific to the TQDM implementation
Pitch
- Deprecate `process_position` off the Trainer constructor in v1.5
- In v1.7, remove `process_position` from the Trainer entirely
To customize this, users can still construct the ProgressBar callback object and pass it to the Trainer.
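As a minimal sketch of that workflow (assuming the `ProgressBar` callback API available around Lightning v1.4/v1.5, which accepts `refresh_rate` and `process_position`):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ProgressBar

# Configure the progress bar through the callback itself instead of
# the Trainer's `process_position` argument.
progress_bar = ProgressBar(refresh_rate=20, process_position=1)

trainer = Trainer(callbacks=[progress_bar])
```

Keeping this configuration on the callback means the Trainer constructor no longer has to mirror every progress-bar option.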
Alternatives
Additional context
If you enjoy Lightning, check out our other projects! ⚡
- Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
- Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning and solving problems with deep learning.
- Bolts: Pretrained SOTA deep learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
- Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers, leveraging PyTorch Lightning, Transformers, and Hydra.