📚 Documentation
In the documentation, Lightning very thoughtfully reminds readers to set the random seed when using distributed data parallel. However, it does not mention where the seed should be set. Based on my experience with plain PyTorch, the seed can be set after calling torch.distributed.init_process_group. In Lightning, this call is handled by the Trainer, so where should we set the random seed? One thought of mine is to set the seed in the __init__() of our LightningModule; are there any other suggestions? What would be the best practice for Lightning?
Also, I believe that the seed_everything() function is called in the wrong place in Lightning's official example. A sketch of the placement I have in mind follows below.
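For concreteness, here is a minimal sketch, not a confirmed recommendation from the docs: calling seed_everything() once at the top of the driver script, before the LightningModule and Trainer are instantiated. The ToyModule, the synthetic dataset, and the Trainer arguments are placeholders of mine, and exact Trainer argument names vary across Lightning versions.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

import pytorch_lightning as pl
from pytorch_lightning import seed_everything


class ToyModule(pl.LightningModule):
    """Trivial module, used only to demonstrate where the seed is set."""

    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)


def main():
    # Seed once, before the model and Trainer are built. The worker
    # processes Lightning launches for DDP re-execute this script, so
    # each of them runs this call and starts from the same seed.
    seed_everything(42)

    dataset = TensorDataset(torch.randn(64, 8), torch.randn(64, 1))
    model = ToyModule()
    trainer = pl.Trainer(
        max_epochs=1,
        accelerator="cpu",  # placeholder; use "gpu" on a multi-GPU node
        devices=2,
        strategy="ddp",
    )
    trainer.fit(model, DataLoader(dataset, batch_size=16))


if __name__ == "__main__":
    main()
```

If this placement is wrong for DDP, it would be helpful to state the correct one explicitly in the docs.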