
When no val dataloader is present and user implements validation_step, need to throw useful error #508

@Ne0Ment

Description


I had been using PyTorch before, but for performance reasons I decided to switch to Lightning, so I rewrote my PyTorch code as a pl.LightningModule. The first training epoch runs fine, but when validation starts it fails with TypeError: 'NoneType' object is not iterable.
Epoch 1: 100%|████████████████████████| 6514/6514 [01:27<00:00, 71.17batch/s, batch_nb=6513, gpu=0, loss=1.099, v_nb=7]
Validating: 0batch [00:00, ?batch/s]
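
The error occurs because the Trainer starts a validation loop but the module never provides a val dataloader, so the loop ends up iterating over None. A minimal sketch of the failure mode (illustrative only, not the actual Lightning internals):

val_loader = None  # no val_dataloader was provided by the module
for batch in val_loader:  # TypeError: 'NoneType' object is not iterable
    pass

Here is the code that reproduces it: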

import torch as t
import torch.nn as nn
import torch.nn.functional as f
import pytorch_lightning as pl

# X and y are the feature/label tensors (defined elsewhere)
train_dataset = t.utils.data.TensorDataset(X, y)
trainloader = t.utils.data.DataLoader(train_dataset, batch_size=64)

class FastNN(pl.LightningModule):

    def __init__(self, dataload):
        super(FastNN, self).__init__()
        self.fc1 = nn.Linear(110, 1024)
        self.fc2 = nn.Linear(1024, 512)
        self.fc3 = nn.Linear(512, 256)
        self.fc4 = nn.Linear(256, 128)
        self.fc5 = nn.Linear(128, 64)
        self.fc6 = nn.Linear(64, 32)
        self.fc7 = nn.Linear(32, 3)
        self.dataloader = dataload

    def forward(self, x):
        x = f.relu(self.fc1(x))
        x = f.relu(self.fc2(x))
        x = f.relu(self.fc3(x))
        x = f.relu(self.fc4(x))
        x = f.relu(self.fc5(x))
        x = f.relu(self.fc6(x))
        x = self.fc7(x)  # no activation on the output layer: cross_entropy expects raw logits
        return x

    def training_step(self, batch, batch_nb):
        # REQUIRED
        x, y = batch
        y_hat = self.forward(x)
        loss = t.nn.functional.cross_entropy(y_hat, y)
        tensorboard_logs = {'train_loss': loss}
        return {'loss': loss, 'log': tensorboard_logs}

    def validation_step(self, batch, batch_nb):
        # OPTIONAL
        x, y = batch
        y_hat = self.forward(x)
        return {'val_loss': t.nn.functional.cross_entropy(y_hat, y)}

    def validation_end(self, outputs):
        # OPTIONAL
        avg_loss = t.stack([x['val_loss'] for x in outputs]).mean()
        tensorboard_logs = {'val_loss': avg_loss}
        return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}

    def configure_optimizers(self):
        # REQUIRED
        # can return multiple optimizers and learning_rate schedulers
        # (LBFGS is automatically supported, no need for a closure function)
        return t.optim.Adam(self.parameters(), lr=0.02)

    @pl.data_loader
    def train_dataloader(self):
        # REQUIRED
        return self.dataloader

net = FastNN(trainloader)
trainer = pl.Trainer(gpus=1)
trainer.fit(net)
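
As a workaround under the same decorator-based API used above, either remove validation_step and validation_end, or give the Trainer something to validate on. A minimal sketch, reusing the training loader purely as a placeholder:

    @pl.data_loader
    def val_dataloader(self):
        # placeholder: reuse the training loader so the validation loop has batches
        return self.dataloader

For the feature request itself, here is a minimal sketch of the guard being asked for (the function name and attribute access are assumptions for illustration, not the actual Trainer internals): before fitting, detect that validation_step is overridden while no val dataloader is available, and raise a descriptive error instead of the bare TypeError:

import pytorch_lightning as pl

def check_val_misconfiguration(model):
    # assumed override check: compare the subclass hook against the base class
    overrides_val_step = (
        type(model).validation_step is not pl.LightningModule.validation_step
    )
    # resolve the val loader whether it is a plain method or a cached property
    val_loader = getattr(model, 'val_dataloader', None)
    if callable(val_loader):
        val_loader = val_loader()
    if overrides_val_step and val_loader is None:
        raise ValueError(
            'You defined validation_step, but val_dataloader returned None. '
            'Define val_dataloader on your LightningModule, or remove '
            'validation_step (and validation_end).'
        )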

Labels: feature (Is an improvement or enhancement), help wanted (Open to be worked on)
