5-Fold with PyTorchLightning + Wandb seems to log to the same experiment  #8614

@tchaton

Description

I am training 5-fold CV with PyTorch Lightning in a for loop, and I am logging all the results to wandb. I want wandb to reinitialize the run after each fold, but it seems to continue with the same run and logs all the results to it. I also tried passing kwargs to the WandbLogger as mentioned in the docs here, with no luck.
Here's pseudocode of it:

from pytorch_lightning import Trainer
from pytorch_lightning.loggers import WandbLogger

# CFG, checkpoint_callback, lit_model, and data_module are defined elsewhere.

def run(fold):
    # Extra kwargs are forwarded to wandb.init() by the WandbLogger.
    kwargs = {
        "reinit": True,
        "group": CFG["exp_name"],
    }
    wandb_logger = WandbLogger(
        project="<name>",
        entity="<entity>",
        config=CFG,
        name=f"fold_{fold}",
        **kwargs,
    )
    trainer = Trainer(
        precision=16,
        gpus=1,
        fast_dev_run=False,
        callbacks=[checkpoint_callback],
        logger=wandb_logger,
        progress_bar_refresh_rate=1,
        max_epochs=2,
        log_every_n_steps=1,
    )

    trainer.fit(lit_model, data_module)

if __name__ == "__main__":
    for fold in range(5):
        run(fold)
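
A minimal sketch of a possible workaround, assuming the behavior comes from wandb keeping one global run alive across folds: calling wandb.finish() at the end of each fold closes the active run, so the next WandbLogger call starts a fresh one. CFG, lit_model, and data_module again stand in for objects defined elsewhere in the script.

import wandb
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import WandbLogger

def run(fold):
    wandb_logger = WandbLogger(
        project="<name>",
        entity="<entity>",
        config=CFG,
        name=f"fold_{fold}",
        group=CFG["exp_name"],
    )
    trainer = Trainer(gpus=1, max_epochs=2, logger=wandb_logger)
    trainer.fit(lit_model, data_module)
    # Close the active wandb run so the next fold starts a new run
    # instead of appending to this one.
    wandb.finish()

if __name__ == "__main__":
    for fold in range(5):
        run(fold)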

Originally posted by @Gladiator07 in #8572
