
Invalid usage of torch.no_grad Context manager #5120

@8greg8

Description


🐛 Bug

Using the no_grad context manager in the following line https://github.com/PyTorchLightning/pytorch-lightning/blob/127454ade2b851dd267b7f0b4d973bdefd0329e5/pytorch_lightning/utilities/distributed.py#L210
is incorrect: torch.no_grad is a class, so it must be instantiated before being used as a context manager. The parentheses are missing.

The line should be: with torch.no_grad():
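A minimal, torch-free sketch of why the missing parentheses fail: the with statement looks up __enter__ on the object it is given, so passing a context-manager class itself (instead of an instance) raises an error (AttributeError: __enter__ on Python <= 3.10, TypeError on 3.11+). The no_grad class below is a stand-in for torch.no_grad, not the real implementation.

```python
# Stand-in for torch.no_grad: a context-manager *class*, not an instance.
class no_grad:
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False

# Missing parentheses: the class object itself is not a context manager.
try:
    with no_grad:  # buggy form, mirroring distributed.py#L210
        pass
    failed = False
except (AttributeError, TypeError):  # AttributeError <= 3.10, TypeError on 3.11+
    failed = True

# Correct form: instantiate first, as in `with torch.no_grad():`.
with no_grad():
    pass
```

The same pattern explains the AttributeError: __enter__ seen in the BoringModel reproduction.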

To Reproduce

BoringModel reproduction: https://colab.research.google.com/drive/1snyzXx4G6QCatbs6bN2GCsTFIMdItvmm?usp=sharing

Expected behavior

  • No AttributeError: __enter__ is raised
  • gathered_loss == loss in the BoringModel

Environment

Note: Bugs with code are solved faster! The Colab notebook should be made public!

You can get the script and run it with:

wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
  • CUDA:
    • GPU:
      • Tesla P100-PCIE-16GB
    • available: True
    • version: 10.1
  • Packages:
    • numpy: 1.18.5
    • pyTorch_debug: True
    • pyTorch_version: 1.7.0+cu101
    • pytorch-lightning: 1.1.0
    • tqdm: 4.41.1
  • System:
    • OS: Linux
    • architecture:
      • 64bit
    • processor: x86_64
    • python: 3.6.9
    • version: #1 SMP Thu Jul 23 08:00:38 PDT 2020

Additional context

Training with the "ddp" accelerator and an arbitrary number of GPUs.

Labels

bug (Something isn't working) · distributed (Generic distributed-related topic) · help wanted (Open to be worked on)
