Avoid patching LightningModule methods during training #6030

@awaelchli

🚀 Feature

Can we implement the dataloader handling without 🐒-patching the methods in LightningModule?

Motivation

Currently, the trainer patches the LightningModule's dataloader methods when a DataModule is also used.
https://github.com/PyTorchLightning/pytorch-lightning/blob/5157ba55095a6a9f93ec1976aac877c87b00158f/pytorch_lightning/trainer/connectors/data_connector.py#L115

A datamodule's dataloader methods take precedence over the ones defined in the LightningModule, but the LightningModule itself should not be altered. The user does not know that this happens, and after training is complete, they may wish to continue using the model instance.
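
For illustration, the current behaviour boils down to something like the following (a simplified sketch, not the actual code in data_connector.py):

```python
# Simplified sketch of the current behaviour, assuming the data connector
# rebinds the datamodule's loader methods onto the model (illustrative only).

def attach_datamodule(model, datamodule):
    if datamodule is None:
        return
    # The user's model is mutated here: model.train_dataloader no longer
    # refers to the method the user defined in their LightningModule subclass.
    model.train_dataloader = datamodule.train_dataloader
    model.val_dataloader = datamodule.val_dataloader
    model.test_dataloader = datamodule.test_dataloader
```

The model instance the user gets back after `trainer.fit` therefore carries methods it never defined.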

Pitch

Store the dataloader references in the trainer (or data connector) directly, without "attaching" them to the user's model.
This would also enable type inference, as mentioned by @gianscarpe. A rough sketch of what it could look like is below.
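
Hypothetical sketch (names like `_train_dataloader_source` and `resolve_train_dataloader` are illustrative, not an existing API):

```python
# Hypothetical sketch: the data connector keeps its own references and
# resolves them at load time, so the user's model is never mutated.

class DataConnector:
    def __init__(self, trainer):
        self.trainer = trainer
        self._train_dataloader_source = None

    def attach_data(self, model, datamodule=None):
        # Prefer the datamodule's loaders, fall back to the model's,
        # without attaching anything to either object.
        source = datamodule if datamodule is not None else model
        self._train_dataloader_source = source.train_dataloader

    def resolve_train_dataloader(self):
        # Called by the training loop instead of model.train_dataloader().
        return self._train_dataloader_source()
```

Since the connector owns the reference, the return type of the dataloader method can also be inferred statically rather than through a patched attribute.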

Alternatives

Keep it as is, but users will not be happy.
The current behaviour is also harder to debug.
