## Proposed refactoring or deprecation
Set our test suite to fail when a `DeprecationWarning` is raised.
### Motivation
Since we don't track warnings raised by our tests, it's too easy to introduce a deprecation path but forget to update all of the deprecated usages inside our codebase and tests.
This is very bad UX: users will see these warnings appear but won't be able to do anything to fix them.
This has been exacerbated lately by the number of deprecations happening.
For example, consider this simple training script:
```python
import torch
from torch.utils.data import DataLoader, Dataset

from pytorch_lightning import LightningModule, Trainer, LightningDataModule


class RandomDataset(Dataset):
    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len


class BoringModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("train_loss", loss)
        return {"loss": loss}

    def validation_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("valid_loss", loss)

    def test_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("test_loss", loss)

    def configure_optimizers(self):
        return torch.optim.SGD(self.layer.parameters(), lr=0.1)


class BoringDataModule(LightningDataModule):
    def train_dataloader(self):
        return DataLoader(RandomDataset(32, 64), batch_size=2)

    def val_dataloader(self):
        return DataLoader(RandomDataset(32, 64), batch_size=2)

    def test_dataloader(self):
        return DataLoader(RandomDataset(32, 64), batch_size=2)

    def predict_dataloader(self):
        return DataLoader(RandomDataset(32, 64), batch_size=2)


def run():
    model = BoringModel()
    datamodule = BoringDataModule()
    trainer = Trainer(
        limit_train_batches=1,
        limit_val_batches=1,
        limit_test_batches=1,
        limit_predict_batches=1,
        max_epochs=1,
    )
    trainer.fit(model, datamodule)
    trainer.test(verbose=False)
    trainer.predict()


if __name__ == "__main__":
    run()
```

Deprecation warnings in 1.4.9:
```
pytorch_lightning/core/datamodule.py:423: LightningDeprecationWarning: DataModule.prepare_data has already been called, so it will not be called again. In v1.6 this behavior will change to always call DataModule.prepare_data.
```

Deprecation warnings in current master:
```
pytorch_lightning/accelerators/accelerator.py:590: LightningDeprecationWarning: `Accelerator.on_validation_start` is deprecated in v1.5 and will be removed in v1.6. `on_validation_start` logic is implemented directly in the `TrainingTypePlugin` implementations.
pytorch_lightning/accelerators/accelerator.py:295: LightningDeprecationWarning: `Accelerator.validation_step_end` is deprecated in v1.5 and will be removed in v1.6. `validation_step_end` logic is implemented directly in the `TrainingTypePlugin` implementations.
pytorch_lightning/accelerators/accelerator.py:635: LightningDeprecationWarning: `Accelerator.on_validation_end` is deprecated in v1.5 and will be removed in v1.6. `on_validation_end` logic is implemented directly in the `TrainingTypePlugin` implementations.
pytorch_lightning/accelerators/accelerator.py:696: LightningDeprecationWarning: `Accelerator.on_train_batch_start` is deprecated in v1.5 and will be removed in v1.6. `on_train_batch_start` logic is implemented directly in the `TrainingTypePlugin` implementations.
pytorch_lightning/accelerators/accelerator.py:263: LightningDeprecationWarning: `Accelerator.training_step_end` is deprecated in v1.5 and will be removed in v1.6. `training_step_end` logic is implemented directly in the `TrainingTypePlugin` implementations.
pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:327: LightningDeprecationWarning: The property `LoggerConnector.gpus_metrics` was deprecated in v1.5 and will be removed in 1.7. Use the `DeviceStatsMonitor` callback instead.
pytorch_lightning/accelerators/accelerator.py:680: LightningDeprecationWarning: `Accelerator.on_train_end` is deprecated in v1.5 and will be removed in v1.6. `on_train_end` logic is implemented directly in the `TrainingTypePlugin` implementations.
pytorch_lightning/accelerators/accelerator.py:605: LightningDeprecationWarning: `Accelerator.on_test_start` is deprecated in v1.5 and will be removed in v1.6. `on_test_start` logic is implemented directly in the `TrainingTypePlugin` implementations.
pytorch_lightning/accelerators/accelerator.py:279: LightningDeprecationWarning: `Accelerator.test_step_end` is deprecated in v1.5 and will be removed in v1.6. `test_step_end` logic is implemented directly in the `TrainingTypePlugin` implementations.
pytorch_lightning/accelerators/accelerator.py:650: LightningDeprecationWarning: `Accelerator.on_test_end` is deprecated in v1.5 and will be removed in v1.6. `on_test_end` logic is implemented directly in the `TrainingTypePlugin` implementations.
pytorch_lightning/accelerators/accelerator.py:620: LightningDeprecationWarning: `Accelerator.on_predict_start` is deprecated in v1.5 and will be removed in v1.6. `on_predict_start` logic is implemented directly in the `TrainingTypePlugin` implementations.
pytorch_lightning/accelerators/accelerator.py:665: LightningDeprecationWarning: `Accelerator.on_predict_end` is deprecated in v1.5 and will be removed in v1.6. `on_predict_end` logic is implemented directly in the `TrainingTypePlugin` implementations.
```
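Note that all of these are `LightningDeprecationWarning`s, which (assuming the class is still defined as a `DeprecationWarning` subclass in `pytorch_lightning/utilities/warnings.py`) means a single `error::DeprecationWarning` filter is enough to escalate them:

```python
from pytorch_lightning.utilities.warnings import LightningDeprecationWarning

# `error::DeprecationWarning` also escalates Lightning's own deprecation
# warnings because the class subclasses DeprecationWarning
assert issubclass(LightningDeprecationWarning, DeprecationWarning)
```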
### Pitch

Set

```
filterwarnings =
    # error out on deprecation warnings - ensures the code and tests are kept up-to-date
    error::DeprecationWarning
    # TensorBoard is using NumPy deprecations: ignore them
    ignore::DeprecationWarning:tensorboard.*
```

and fix all deprecated usages. This will need to be done over several PRs.
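To illustrate the effect, here is a minimal sketch (`deprecated_helper` is a hypothetical stand-in for any deprecated code path): once the filter is active, a test that exercises a deprecated usage errors out instead of silently passing.

```python
import warnings

import pytest


def deprecated_helper():
    # hypothetical stand-in for a deprecated code path in the codebase
    warnings.warn("`deprecated_helper` is deprecated in v1.5", DeprecationWarning)
    return 42


# same as the global `error::DeprecationWarning` setting, scoped to one test
@pytest.mark.filterwarnings("error::DeprecationWarning")
def test_uses_deprecated_api():
    # fails by raising the DeprecationWarning as an error until the
    # deprecated usage is updated
    assert deprecated_helper() == 42
```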
If the test writer actually wants to test that a deprecation is raised, `pytest.deprecated_call(...)` will need to be used to catch it.
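A minimal sketch of that pattern (`legacy_api` is a hypothetical example): the expected warning is caught and asserted on explicitly, so it never reaches the global `error::DeprecationWarning` filter.

```python
import warnings

import pytest


def legacy_api():
    # hypothetical function that deliberately emits a deprecation warning
    warnings.warn("`legacy_api` is deprecated in v1.5", DeprecationWarning)


def test_legacy_api_warns():
    # pytest.deprecated_call() passes only if the body emits a
    # DeprecationWarning (or PendingDeprecationWarning) matching `match`
    with pytest.deprecated_call(match="deprecated in v1.5"):
        legacy_api()
```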