Status: Closed
Labels: bug (Something isn't working), won't fix (This will not be worked on)
Description
🐛 Bug
Currently, globally turning on cudnn benchmarking in torch (torch.backends.cudnn.benchmark = True) does nothing as it is overridden when constructing a Trainer object. However, it's reasonable for users to expect modification of torch.backends.cudnn.benchmark to be respected by PL.
More intuitive behaviour would be for PL to modify `torch.backends.cudnn.benchmark` only when the user explicitly sets the `benchmark` argument when constructing the `Trainer`.
To Reproduce
```python
import torch
from pytorch_lightning import Trainer

torch.backends.cudnn.benchmark = True
trainer = Trainer(gpus=1)
print(torch.backends.cudnn.benchmark)
```

Output:

```
False
# When it should be True
```
Expected behavior
`torch.backends.cudnn.benchmark` should be changed by `Trainer` only when the `benchmark` argument is explicitly set:
| Instantiation | Behaviour |
|---|---|
| `Trainer()` | `torch.backends.cudnn.benchmark` unchanged from current session value |
| `Trainer(benchmark=None)` | `torch.backends.cudnn.benchmark` unchanged from current session value |
| `Trainer(benchmark=True)` | `torch.backends.cudnn.benchmark` set to `True` |
| `Trainer(benchmark=False)` | `torch.backends.cudnn.benchmark` set to `False` |
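The table above can be captured as a small resolution rule. A minimal sketch, where `resolve_benchmark` is a hypothetical helper (not part of PL's actual API) that treats `benchmark=None` as "keep the session value":

```python
def resolve_benchmark(benchmark, current_value):
    """Decide what torch.backends.cudnn.benchmark should be set to.

    benchmark: the Trainer's `benchmark` argument; None means the user
        did not explicitly set it.
    current_value: the session's current torch.backends.cudnn.benchmark.
    """
    # Only an explicit True/False should override the global setting;
    # otherwise respect whatever the user configured in the session.
    if benchmark is None:
        return current_value
    return benchmark
```

With this rule, `Trainer()` and `Trainer(benchmark=None)` would leave a user's `torch.backends.cudnn.benchmark = True` intact, matching the expected behaviour in the table.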
Environment
* CUDA:
- GPU:
- NVIDIA GeForce RTX 3070 Laptop GPU
- available: True
- version: 11.1
* Packages:
- numpy: 1.21.4
- pyTorch_debug: False
- pyTorch_version: 1.9.0+cu111
- pytorch-lightning: 1.5.3
- tqdm: 4.62.3
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.9.7
- version: #1 SMP Fri Apr 2 22:23:49 UTC 2021