What is your question?
For the default LR Range Test in PyTorch Lightning (`lr_finder`), is the reported loss curve based on the training loss, the test loss, or some measure of generalization? To me it would make more sense to select the learning rate based on the test loss rather than the training loss.
I noticed that `lr_finder` accepts `val_dataloader` and `train_dataloader` arguments, but it is unclear what role they play.
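For context, here is a minimal, framework-free sketch of how a typical LR range test produces its curve: the learning rate grows exponentially each batch, and the plotted value is usually an exponentially smoothed per-batch training loss. This is a generic illustration of the technique, not Lightning's actual implementation; the function name and parameters are made up for the example.

```python
def lr_range_test(train_losses, min_lr=1e-6, max_lr=1.0, beta=0.98):
    """Pair each batch's training loss with an exponentially increasing LR.

    `train_losses` stands in for the per-batch losses a model would produce
    during the sweep; returns (lrs, smoothed_losses), the two series a
    typical LR finder plots against each other.
    """
    n = len(train_losses)
    lrs, smoothed = [], []
    avg = 0.0
    for step, loss in enumerate(train_losses):
        # Exponential schedule from min_lr at step 0 to max_lr at the last step.
        lr = min_lr * (max_lr / min_lr) ** (step / max(n - 1, 1))
        # Exponential moving average of the loss, with bias correction.
        avg = beta * avg + (1 - beta) * loss
        smoothed.append(avg / (1 - beta ** (step + 1)))
        lrs.append(lr)
    return lrs, smoothed
```

If the curve were computed this way, it would be a training-loss curve, which is what prompts the question about whether a validation-based curve would be preferable.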

What's your environment?
- OS: iOS
- Packaging: pip
- Version: 0.7.6