🚀 Feature
LightningLite has a method setup which wraps the model:
model, optimizer = self.setup(model, optimizer)

Here, model is an instance of _LiteModule, and to access the attributes of the original model you would call model.module.method_a() instead of model.method_a().
However, depending on the Strategy, the model may additionally be wrapped by other modules, such as DistributedDataParallel.
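For illustration, a minimal sketch of the current behavior; the names MyModel and method_a are hypothetical, and the exact wrapping of model.module depends on the chosen strategy:

```python
import torch
from pytorch_lightning.lite import LightningLite


class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 4)

    def forward(self, x):
        return self.layer(x)

    def method_a(self):
        # extra method on the user model, no distributed communication involved
        return "hello"


class Lite(LightningLite):
    def run(self):
        model = MyModel()
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
        model, optimizer = self.setup(model, optimizer)

        # `model` is now a _LiteModule; the original attributes sit one level down
        model.module.method_a()
        # With e.g. a DDP strategy, `model.module` may itself be a
        # DistributedDataParallel wrapper rather than the original MyModel.


Lite(accelerator="cpu").run()
```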
Motivation
Normally, the model is only called through forward. However, in some cases it is useful to expose extra methods on the model that do not perform distributed communication.
Pitch
Make _LiteModule.module return the original unwrapped model, as it was passed into the setup() method.
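A minimal sketch of what this could look like; the _forward_module and _original_module attribute names are assumptions for illustration, not the actual implementation:

```python
import torch


class _LiteModule(torch.nn.Module):
    """Sketch: run forward through the strategy wrapper, but keep the raw user model."""

    def __init__(self, forward_module: torch.nn.Module, original_module: torch.nn.Module):
        super().__init__()
        self._forward_module = forward_module      # e.g. DistributedDataParallel(original_module)
        self._original_module = original_module    # the model exactly as passed into setup()

    @property
    def module(self) -> torch.nn.Module:
        # proposed behavior: always return the unwrapped model
        return self._original_module

    def forward(self, *args, **kwargs):
        # the forward pass still goes through the strategy wrapper (DDP, etc.)
        return self._forward_module(*args, **kwargs)
```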
Alternatives
Additional context
If you enjoy Lightning, check out our other projects! ⚡
- Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
- Lite: Enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
- Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
- Bolts: Pretrained SOTA deep learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
- Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.