🐛 Bug
After upgrading to pytorch-lightning 1.2.1, training with gradient_clip_val combined with manual_backward is broken: an error is raised during training.
To Reproduce
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
import torchvision.transforms as transforms
from torchvision.datasets import CIFAR10

import pytorch_lightning as pl


class Model(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # Opt out of automatic optimization; backward/step are called manually.
        self.automatic_optimization = False
        self.l1 = torch.nn.Linear(32 * 32 * 3, 10)

    def forward(self, x):
        x = x.view(x.size(0), -1)
        x = F.softmax(self.l1(x), dim=1)
        return x

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        opt.zero_grad()
        x, y = batch
        y_hat = self(x)
        loss = F.cross_entropy(y_hat, y)
        self.manual_backward(loss, opt)
        opt.step()
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


def main():
    pl.seed_everything(42)
    transform = transforms.Compose([
        transforms.ToTensor(),
    ])
    dataset = CIFAR10(".", train=True, download=True, transform=transform)
    dataloader = DataLoader(dataset, batch_size=32, num_workers=2)
    model = Model()
    trainer_kwargs = {
        # Combining gradient_clip_val with manual optimization triggers the error.
        'gradient_clip_val': 0.5,
    }
    trainer = pl.Trainer(**trainer_kwargs)
    trainer.fit(model, dataloader)


if __name__ == '__main__':
    main()

Regards,
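A possible workaround, not part of the original report and only a sketch: leave gradient_clip_val unset on the Trainer and clip gradients yourself inside training_step with torch.nn.utils.clip_grad_norm_ before calling opt.step(). The max_norm of 0.5 below mirrors the gradient_clip_val used above.

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        opt.zero_grad()
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        self.manual_backward(loss, opt)
        # Clip gradients here instead of passing gradient_clip_val to the Trainer.
        torch.nn.utils.clip_grad_norm_(self.parameters(), max_norm=0.5)
        opt.step()
        return loss

With this, the Trainer is constructed without gradient_clip_val, so the clipping code path that fails under manual optimization is never reached.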