Closed
Labels
bug (Something isn't working), help wanted (Open to be worked on)
Description
🐛 Bug
The Trainer expects the LightningModule to define self.batch_size (see scale_batch_size() in training_tricks.py). However, when using the new LightningDataModule, the datamodule is the natural place to define self.batch_size, so the lookup fails.
To Reproduce
assert hasattr(lightning_data_module, "batch_size")
trainer = Trainer(auto_scale_batch_size=True)
trainer.fit(lightning_module, datamodule=lightning_data_module)

Running the above raises:

pytorch_lightning.utilities.exceptions.MisconfigurationException: Field batch_size not found in both `model` and `model.hparams`

Expected behavior
auto_scale_batch_size should also work when batch_size is defined on the LightningDataModule.
Environment
* Packages:
- numpy: 1.18.5
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.9.1rc1
- tensorboard: 2.2.0
- tqdm: 4.48.2