
Tensorboard logging crashes the trainer #11103

@twaslowski

Description

🐛 Bug

When calling trainer.fit() on a model, PyTorch Lightning attempts to log an empty hparams dict via TensorBoard. Down the call stack, this results in TensorBoard logging the following object:

{hp_metric:-1}

which results in the following error being thrown:

ValueError:
you tried to log -1 which is not currently supported. Try a dict or a scalar/tensor.
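
For context, the placeholder seems to come from the TensorBoard logger's hyperparameter logging: when hparams are logged without any accompanying metrics, the logger substitutes a default `hp_metric` of `-1`. A minimal sketch of that path (assuming a stock `TensorBoardLogger` with its default `default_hp_metric=True`; an illustration, not the exact trainer internals):

```python
from pytorch_lightning.loggers import TensorBoardLogger

# With the default `default_hp_metric=True`, logging an empty hparams dict
# makes the logger also write a placeholder metric {"hp_metric": -1}.
logger = TensorBoardLogger(save_dir="lightning_logs")
logger.log_hyperparams({})  # ends up logging hp_metric = -1
```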

To Reproduce

I ran the BoringModel on my machine, as can be seen in the following gist:

https://gist.github.com/TobiasWaslowski/3c203ea6430e3a008703df6ff7437575
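
The gist is not reproduced inline; for reference, the script follows the standard Lightning BoringModel bug-report template, roughly like the sketch below (details may differ from the gist):

```python
import torch
from torch.utils.data import DataLoader, Dataset
from pytorch_lightning import LightningModule, Trainer


class RandomDataset(Dataset):
    """Random tensors, just enough to drive a training loop."""

    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len


class BoringModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("train_loss", loss)
        return {"loss": loss}

    def configure_optimizers(self):
        return torch.optim.SGD(self.layer.parameters(), lr=0.1)


def run():
    train_data = DataLoader(RandomDataset(32, 64), batch_size=2)
    model = BoringModel()
    # No hparams are saved on the model, so the trainer logs an empty dict.
    trainer = Trainer(default_root_dir="lightning_logs", max_epochs=1)
    trainer.fit(model, train_dataloaders=train_data)


if __name__ == "__main__":
    run()
```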

Expected behavior

I would expect that if the hparams dict is empty, it is simply not logged at all.
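
A possible workaround in the meantime (a sketch based on the logger's constructor options, not verified against the internals): build the `TensorBoardLogger` with `default_hp_metric=False` so the placeholder metric is not written at all:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger

# Disable the placeholder "hp_metric" that is written when hparams are
# logged without accompanying metrics.
logger = TensorBoardLogger(save_dir="lightning_logs", default_hp_metric=False)
trainer = Trainer(logger=logger, max_epochs=1)
# trainer.fit(model, train_dataloaders=train_data)
```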

Environment

  • CUDA:
    • GPU:
    • available: False
    • version: None
  • Packages:
    • numpy: 1.21.4
    • pyTorch_debug: False
    • pyTorch_version: 1.10.0
    • pytorch-lightning: 1.5.5
    • tqdm: 4.62.3
  • System:
    • OS: Darwin
    • architecture:
      • 64bit
    • processor: i386
    • python: 3.8.5
    • version: Darwin Kernel Version 20.3.0: Thu Jan 21 00:07:06 PST 2021; root:xnu-7195.81.3~1/RELEASE_X86_64

Additional context
