
Self.log with multiple Optimizers errors out in 1.1.0 but works in 1.0.8 #5063

@blisc

🐛 Bug

self.log results in the following error:

  File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/connectors/logger_connector/epoch_result_store.py", line 207, in auto_reduce_results_on_epoch_end
    opt_outputs = epoch_metrics[opt_idx]

when the following conditions are satisfied:

  • There are two optimizers
  • One of the optimizers does not log anything during a training epoch

Note that this bug is not present in v1.0.8; it was introduced in 1.1.0. A minimal sketch of the failing setup is shown below.
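For reference, here is a minimal sketch of the two conditions above, in the spirit of the BoringModel (the class name ConditionalLogModel, the metric name loss_a, and the dataset are illustrative, not the exact code from the notebook): the module returns two optimizers, but only the first one ever calls self.log, so the second logs nothing for the entire epoch.

```python
import torch
from torch.utils.data import DataLoader, Dataset
import pytorch_lightning as pl


class RandomDataset(Dataset):
    def __init__(self, size=32, length=64):
        self.data = torch.randn(length, size)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]


class ConditionalLogModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx, optimizer_idx):
        loss = self.layer(batch).sum()
        # Only the first optimizer ever calls self.log(); the second optimizer
        # logs nothing for the whole epoch, which is what hits the
        # epoch_metrics[opt_idx] lookup shown in the traceback above.
        if optimizer_idx == 0:
            self.log("loss_a", loss)
        return loss

    def configure_optimizers(self):
        # Two optimizers -> training_step receives optimizer_idx
        opt_a = torch.optim.SGD(self.layer.parameters(), lr=0.1)
        opt_b = torch.optim.SGD(self.layer.parameters(), lr=0.1)
        return [opt_a, opt_b]


if __name__ == "__main__":
    model = ConditionalLogModel()
    trainer = pl.Trainer(max_epochs=1, limit_train_batches=2)
    trainer.fit(model, DataLoader(RandomDataset(), batch_size=8))
```

On 1.0.8 this runs to completion; on 1.1.0 the epoch-end reduction fails as shown above.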

Please reproduce using the BoringModel and post here

https://colab.research.google.com/drive/1MkMmuzTmZU2hkPjoEQ8Mwd4qSh168QKn?usp=sharing

To Reproduce

See notebook

Expected behavior

The same behaviour as in 1.0.8: training completes without error even when one optimizer does not log anything during an epoch.

Environment

See notebook.

Additional context

See notebook.


Labels

  • bug (Something isn't working)
  • help wanted (Open to be worked on)
  • logging (Related to the `LoggerConnector` and `log()`)
  • priority: 0 (High priority task)
