🐛 Bug
After upgrading to PyTorch Lightning 1.7, instantiating `WandbLogger` with `mode="disabled"` throws the following error:
```
Traceback (most recent call last):
  File "/Users/stephan/Library/Mobile Documents/com~apple~CloudDocs/Ablage/AI Master/Courses/2022S/Master Thesis/molgen/src/playground/bug_report_model.py", line 70, in <module>
    run()
  File "/Users/stephan/Library/Mobile Documents/com~apple~CloudDocs/Ablage/AI Master/Courses/2022S/Master Thesis/molgen/src/playground/bug_report_model.py", line 48, in run
    wandb_logger = WandbLogger(mode="disabled")  # <== Added
  File "/Users/stephan/Library/Caches/pypoetry/virtualenvs/molgen-6oMP0hTK-py3.9/lib/python3.9/site-packages/pytorch_lightning/loggers/wandb.py", line 315, in __init__
    _ = self.experiment
  File "/Users/stephan/Library/Caches/pypoetry/virtualenvs/molgen-6oMP0hTK-py3.9/lib/python3.9/site-packages/pytorch_lightning/loggers/logger.py", line 54, in experiment
    return get_experiment() or DummyExperiment()
  File "/Users/stephan/Library/Caches/pypoetry/virtualenvs/molgen-6oMP0hTK-py3.9/lib/python3.9/site-packages/pytorch_lightning/utilities/rank_zero.py", line 32, in wrapped_fn
    return fn(*args, **kwargs)
  File "/Users/stephan/Library/Caches/pypoetry/virtualenvs/molgen-6oMP0hTK-py3.9/lib/python3.9/site-packages/pytorch_lightning/loggers/logger.py", line 52, in get_experiment
    return fn(self)
  File "/Users/stephan/Library/Caches/pypoetry/virtualenvs/molgen-6oMP0hTK-py3.9/lib/python3.9/site-packages/pytorch_lightning/loggers/wandb.py", line 368, in experiment
    assert isinstance(self._experiment, Run)
AssertionError
```
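My reading of the trace (an assumption, not verified against the wandb internals): with `mode="disabled"`, `wandb.init()` returns a disabled stub run rather than a regular `Run` object, which is exactly what the `assert isinstance(self._experiment, Run)` check in `wandb.py`, line 368, rejects. A minimal sketch of that mismatch, independent of Lightning (the `wandb.sdk.wandb_run.Run` import path is also my assumption and may differ across wandb versions):

```python
import wandb
from wandb.sdk.wandb_run import Run  # the class the Lightning assertion checks against

# Assumption: with mode="disabled", wandb.init() hands back a stub run object
# (e.g. a RunDisabled) instead of a Run instance, so the isinstance check fails.
run = wandb.init(mode="disabled")
print(type(run))             # stub type on affected versions, not wandb.sdk.wandb_run.Run
print(isinstance(run, Run))  # False -> this is what trips the assert in WandbLogger.experiment
wandb.finish()
```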
To Reproduce
```python
import os

import torch
from torch.utils.data import DataLoader, Dataset

from pytorch_lightning import LightningModule, Trainer
from pytorch_lightning.loggers import WandbLogger  # <== Added


class RandomDataset(Dataset):
    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len


class BoringModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("train_loss", loss)
        return {"loss": loss}

    def validation_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("valid_loss", loss)

    def test_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("test_loss", loss)

    def configure_optimizers(self):
        return torch.optim.SGD(self.layer.parameters(), lr=0.1)


def run():
    wandb_logger = WandbLogger(mode="disabled")  # <== Added

    train_data = DataLoader(RandomDataset(32, 64), batch_size=2)
    val_data = DataLoader(RandomDataset(32, 64), batch_size=2)
    test_data = DataLoader(RandomDataset(32, 64), batch_size=2)

    model = BoringModel()
    trainer = Trainer(
        default_root_dir=os.getcwd(),
        limit_train_batches=1,
        limit_val_batches=1,
        limit_test_batches=1,
        num_sanity_val_steps=0,
        max_epochs=1,
        enable_model_summary=False,
    )
    trainer.fit(model, train_dataloaders=train_data, val_dataloaders=val_data)
    trainer.test(model, dataloaders=test_data)


if __name__ == "__main__":
    run()
```

Expected behavior
No error; the extra keyword argument should simply be passed through to `wandb.init`, as it was up to and including 1.6.5.
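A possible interim workaround (my assumption, not verified against 1.7.0): construct the logger with `offline=True` instead of `mode="disabled"`, since offline mode should still yield a regular `Run` object and only skips syncing to the wandb servers:

```python
from pytorch_lightning.loggers import WandbLogger

# Assumption: offline mode still creates a real wandb Run (written to the local
# ./wandb directory instead of being synced), so the isinstance(..., Run) check passes.
wandb_logger = WandbLogger(offline=True)
```

This is not identical to disabled mode (local run files are still written), so the pass-through of `mode="disabled"` still needs to be fixed.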
Environment
```
* CUDA:
        - GPU:
        - available: False
        - version: None
* Packages:
        - lightning: None
        - lightning_app: None
        - numpy: 1.23.1
        - pyTorch_debug: False
        - pyTorch_version: 1.12.0
        - pytorch-lightning: 1.7.0
        - tqdm: 4.64.0
* System:
        - OS: Darwin
        - architecture:
                - 64bit
                -
        - processor: i386
        - python: 3.9.12
        - version: Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:37 PDT 2022; root:xnu-8020.121.3~4/RELEASE_ARM64_T6000
```
Additional context
cc @awaelchli @morganmcg1 @borisdayma @scottire @manangoel99