Labels: good first issue, logger
Description
I am training with 5-fold CV in PyTorch Lightning inside a for loop and logging all results to wandb. I want wandb to reinitialize the run after each fold, but it keeps using the same run and logs every fold's results to it. I also tried passing kwargs to the WandbLogger as mentioned in the docs here, with no luck.
Here's pseudocode for it:
```python
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import WandbLogger

# CFG, checkpoint_callback, lit_model, and data_module are defined elsewhere.


def run(fold):
    # Ask wandb to start a new run per fold, grouped under the experiment name.
    kwargs = {
        "reinit": True,
        "group": f"{CFG['exp_name']}",
    }
    wandb_logger = WandbLogger(
        project="<name>",
        entity="<entity>",
        config=CFG,
        name=f"fold_{fold}",
        **kwargs,
    )
    trainer = Trainer(
        precision=16,
        gpus=1,
        fast_dev_run=False,
        callbacks=[checkpoint_callback],
        logger=wandb_logger,
        progress_bar_refresh_rate=1,
        max_epochs=2,
        log_every_n_steps=1,
    )
    trainer.fit(lit_model, data_module)


if __name__ == "__main__":
    for fold in range(5):
        run(fold)
```

Originally posted by @Gladiator07 in #8572