loss=None and no logs when automatic_optimization=False #4204

@denadai2

Description

🐛 Bug

I think there is a bug when automatic_optimization=False: the loop's loss stays None (https://github.com/PyTorchLightning/pytorch-lightning/blob/72f19768c828b734d8565ffef7b78fb9a57ba847/pytorch_lightning/trainer/training_loop.py#L336), which means none of the checkpoint callbacks can work, and there is no way to set the loss.
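To illustrate where this bites, here is a minimal sketch of the affected setup (the monitor key "train_loss" and the callback wiring are my own illustration, not from the report), assuming the 1.0.x Trainer flag for manual optimization:

```python
# Hypothetical sketch (names are illustrative): a checkpoint callback that
# monitors a training metric under manual optimization.
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# Monitors "train_loss"; with automatic_optimization=False the training
# loop keeps loss=None, so this value is never produced and the callback
# cannot pick a best checkpoint.
checkpoint_cb = ModelCheckpoint(monitor="train_loss", mode="min")

trainer = Trainer(
    automatic_optimization=False,  # Trainer flag in the 1.0.x API
    callbacks=[checkpoint_cb],
)
```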

I would also add that in the documentation (https://pytorch-lightning.readthedocs.io/en/latest/optimizers.html#manual-optimization) the training_step example does not return anything. However, if training_step returns nothing, none of the logs work because of https://github.com/PyTorchLightning/pytorch-lightning/blob/72f19768c828b734d8565ffef7b78fb9a57ba847/pytorch_lightning/trainer/training_loop.py#L681.
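For comparison, a minimal sketch of the documented manual-optimization pattern plus the only workaround I see (the toy model and the "train_loss" key are illustrative, assuming the self.optimizers()/manual_backward API shown in the linked 1.0.x docs): log the value yourself and return the loss so the check linked above sees a non-None output.

```python
import torch
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    """Toy module following the manual-optimization docs (PL 1.0.x API)."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()  # single optimizer
        loss = self.layer(batch).pow(2).mean()

        # manual optimization, as in the linked docs
        self.manual_backward(loss, opt)
        opt.step()
        opt.zero_grad()

        # the docs example ends here and returns nothing; with no return
        # value the training loop skips logging (training_loop.py#L681)
        self.log("train_loss", loss)
        return loss  # workaround: return the loss so logging is not skipped

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```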

Expected behavior

There should be a way to set the loss, and the behaviour of training_step when it returns nothing should be clearly documented.

Environment

* CUDA:
        - GPU:
                - GeForce RTX 2080 Ti
                - GeForce RTX 2080 Ti
        - available:         True
        - version:           10.2
* Packages:
        - numpy:             1.19.1
        - pyTorch_debug:     False
        - pyTorch_version:   1.6.0
        - pytorch-lightning: 1.0.2
        - tqdm:              4.48.2
* System:
        - OS:                Linux
        - architecture:
                - 64bit
                - ELF
        - processor:         x86_64
        - python:            3.6.9
        - version:           #26-Ubuntu SMP Mon Jun 24 09:32:08 UTC 2019

Labels

bug (Something isn't working), docs (Documentation related), logger (Related to the Loggers)
