Description & Motivation
The feature to use multiple loggers is really helpful, but I run into issues when I try to write code so that any logger may be turned off at any time.
For example, I am writing a model that can log to both MLflow and TensorBoard. When both loggers are passed as Trainer(logger=[ml_flow, tensorboard_logger]) versus only Trainer(logger=[tensorboard_logger]), I have to write a lot of different custom code to handle the two cases. Handling upstream how many loggers were passed, of which types, and in which order is messy. For example, when I only pass the TensorBoard logger, logging an image is simply self.logger.experiment.add_image(), but when multiple loggers are passed in an unknown order, the code becomes unnecessarily convoluted, as in the sketch below.
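To illustrate the friction, here is a rough sketch of the kind of workaround I end up writing today. The helper name and its defensive checks are my own, not part of the Lightning API, and it assumes the logger collection can be iterated:

```python
from pytorch_lightning.loggers import TensorBoardLogger
from pytorch_lightning.loggers.base import LoggerCollection


def find_logger(logger, logger_type):
    """Hypothetical helper: return the first logger of `logger_type`, or None.

    `logger` may be None, a single logger, or a LoggerCollection, so it is
    normalised to a plain list before searching.
    """
    if logger is None:
        return None
    if isinstance(logger, (LoggerCollection, list, tuple)):
        loggers = list(logger)  # assumes the collection supports iteration
    else:
        loggers = [logger]
    return next((lg for lg in loggers if isinstance(lg, logger_type)), None)


# Usage inside a LightningModule hook (sketch):
#
#     tb_logger = find_logger(self.logger, TensorBoardLogger)
#     if tb_logger is not None:
#         tb_logger.experiment.add_image("inputs", image_tensor, self.global_step)
```

Every model that wants to stay agnostic about which loggers are enabled ends up carrying a helper like this.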
Pitch
Handle multiple loggers in the model easily. We should be able to check whether a logger of a given type is present and, if it is, get it, with minimal friction rather than having to check every index. One solution may be to make self.loggers a dictionary rather than a list; a rough sketch follows.
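A minimal sketch of what dictionary-style access could look like; the class and method names below are purely illustrative, not a proposed final API:

```python
from typing import Dict, Type

from pytorch_lightning.loggers import LightningLoggerBase, TensorBoardLogger


class LoggerDict:
    """Illustrative sketch: expose configured loggers keyed by their type."""

    def __init__(self, loggers):
        # Map each concrete logger class to its instance.
        self._by_type: Dict[Type[LightningLoggerBase], LightningLoggerBase] = {
            type(lg): lg for lg in loggers
        }

    def get(self, logger_type, default=None):
        # Look a logger up by (sub)class instead of by position in a list.
        for cls, lg in self._by_type.items():
            if issubclass(cls, logger_type):
                return lg
        return default

    def __contains__(self, logger_type) -> bool:
        return self.get(logger_type) is not None


# With something like this, the model code above could become:
#
#     if TensorBoardLogger in self.loggers:
#         self.loggers.get(TensorBoardLogger).experiment.add_image(...)
```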
Alternatives
Providing __len__ for pytorch_lightning.loggers.base.LoggerCollection would be one low-hanging fruit that already simplifies a lot (though of course it does not go all the way).
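For reference, a sketch of that change. The internal attribute name used here is an assumption about how LoggerCollection stores its loggers, not something confirmed from the Lightning source:

```python
from pytorch_lightning.loggers.base import LoggerCollection


class SizedLoggerCollection(LoggerCollection):
    """Sketch of the proposed behaviour: a LoggerCollection that reports its size."""

    def __len__(self) -> int:
        # `_logger_iterable` is assumed to be the internal sequence of loggers.
        return len(self._logger_iterable)


# Usage sketch:
#
#     collection = SizedLoggerCollection([tensorboard_logger, mlflow_logger])
#     assert len(collection) == 2
```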
Additional context
No response