Better message when DataLoader is wrong #1131

@stathius

Description

This is on the verge between a bug report and an improvement request.

There was a bug in my validation DataLoader and it was returning irrelevant stuff; accidentally, its length ended up being 0, probably due to an edge-case combination. The error I was getting during the validation sanity check was quite cryptic:

Traceback (most recent call last):
  File "UNet_WaveProp.py", line 174, in <module>
    trainer.fit(model)
  File "/mnt/RDS/home/code/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 629, in fit
    self.run_pretrain_routine(model)
  File "/mnt/RDS/home/code/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 809, in run_pretrain_routine
    False)
  File "/mnt/RDS/home/code/pytorch-lightning/pytorch_lightning/trainer/evaluation_loop.py", line 300, in evaluate
    eval_results = model.validation_epoch_end(outputs)
  File "UNet_WaveProp.py", line 138, in validation_epoch_end
    avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
RuntimeError: stack expects a non-empty TensorList

I had to go through the pytorch-lightning code for a few hours to understand what was happening.
Wouldn't a more informative message make more sense?

One option would be to check whether the DataLoader's length is 0.
What do you think? I could take a stab at a PR.

Labels

bug (Something isn't working), let's do it! (approved to implement)
