🐛 Bug
Using no_grad as a context manager in the following line
https://github.com/PyTorchLightning/pytorch-lightning/blob/127454ade2b851dd267b7f0b4d973bdefd0329e5/pytorch_lightning/utilities/distributed.py#L210
is incorrect: torch.no_grad is callable and has to be instantiated, but the parentheses are missing, so entering the context fails.
The line should be: with torch.no_grad():
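A minimal sketch of the failure and the fix in plain PyTorch (not the Lightning code itself):

import torch

x = torch.ones(3, requires_grad=True)

# Buggy pattern: the class object is used directly as a context manager,
# so entering the block raises AttributeError: __enter__ (on Python 3.6)
#     with torch.no_grad:
#         y = x * 2

# Fixed pattern: calling torch.no_grad() returns a context-manager instance
with torch.no_grad():
    y = x * 2

print(y.requires_grad)  # False: autograd is disabled inside the block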
Please reproduce using the BoringModel and post here
https://colab.research.google.com/drive/1snyzXx4G6QCatbs6bN2GCsTFIMdItvmm?usp=sharing
To Reproduce
Expected behavior
- No AttributeError: __enter__
- gathered_loss == loss in BoringModel (a hypothetical sketch of this check follows below)
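The actual repro lives in the linked Colab notebook; the following is only a hypothetical sketch of the kind of check meant above, assuming the loss is gathered with self.all_gather (which goes through the buggy utility when gradients are not synced):

# Hypothetical training_step for a BoringModel-style module; the real
# notebook may differ. The gather should succeed and return the same value
# as the local loss (on a single process, a tensor equal to it).
def training_step(self, batch, batch_idx):
    output = self(batch)
    loss = self.loss(batch, output)
    gathered_loss = self.all_gather(loss)  # currently raises AttributeError: __enter__
    assert torch.allclose(gathered_loss, loss)
    return {"loss": loss}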
Environment
Note: Bugs with code are solved faster! The Colab notebook should be made public!
- IDE: Please use our python bug_report_model.py template.
- Colab Notebook: Please copy and paste the output from our environment collection script (or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
- CUDA:
  - GPU:
    - Tesla P100-PCIE-16GB
  - available: True
  - version: 10.1
- Packages:
  - numpy: 1.18.5
  - pyTorch_debug: True
  - pyTorch_version: 1.7.0+cu101
  - pytorch-lightning: 1.1.0
  - tqdm: 4.41.1
- System:
  - OS: Linux
  - architecture:
    - 64bit
  - processor: x86_64
  - python: 3.6.9
  - version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
Additional context
Training uses the "ddp" accelerator with an arbitrary number of GPUs.
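For reference, a minimal sketch of the kind of Trainer configuration that exercises this code path (the GPU count and other settings are illustrative, not the exact repro configuration):

from pytorch_lightning import Trainer

# Illustrative settings only; any multi-GPU "ddp" run that gathers tensors
# (e.g. via self.all_gather in a step method) reaches the buggy line.
trainer = Trainer(gpus=2, accelerator="ddp", max_epochs=1)
trainer.fit(model)  # model: the BoringModel from the linked notebook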