Closed
Labels
bug (Something isn't working), help wanted (Open to be worked on), priority: 1 (Medium priority task)
Description
🐛 Bug
When using a CombinedLoader with the max_size_cycle option and DDP, all the GPUs get all validation data.
This bug is related to #7013; however, the fix in PR #7102 only affects the default min_size option of CombinedLoader.
@tchaton ?
To Reproduce
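The original repro snippet was not captured here. To illustrate the semantics involved, the following is a plain-Python sketch of what CombinedLoader's max_size_cycle mode does (shorter loaders restart until the longest one is exhausted); the function name and structure are illustrative, not Lightning's actual implementation:

```python
from itertools import cycle

def max_size_cycle(loaders):
    """Yield dicts of batches, one per loader. Shorter loaders cycle
    (restart from the beginning) until the longest loader is exhausted.
    Illustrative sketch only -- not Lightning's CombinedLoader API."""
    longest = max(loaders, key=lambda k: len(loaders[k]))
    iters = {
        k: (iter(v) if k == longest else cycle(v))
        for k, v in loaders.items()
    }
    # Total number of combined batches equals the longest loader's length.
    for _ in range(len(loaders[longest])):
        yield {k: next(it) for k, it in iters.items()}

# Loader "b" is shorter, so it cycles back to its first batch.
batches = list(max_size_cycle({"a": [1, 2, 3], "b": [10, 20]}))
# -> [{"a": 1, "b": 10}, {"a": 2, "b": 20}, {"a": 3, "b": 10}]
```

The bug report concerns how batches produced this way are (not) sharded across DDP processes.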
Expected behavior
For the above repro, the validation data has length 8. I would expect each of the 2 GPUs to get only 4 batches, but in fact each gets all 8 batches.
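The expected behavior corresponds to DistributedSampler-style sharding. A simplified pure-Python sketch of that round-robin split (ignoring shuffling and padding, which don't apply when the sample count divides evenly) shows 8 batches across 2 ranks yielding 4 per rank:

```python
import math

def ddp_shard(num_samples, num_replicas, rank):
    """Indices a given rank would see under DistributedSampler-style
    round-robin sharding. Simplified sketch: no shuffle, and no padding
    is needed when num_samples divides evenly by num_replicas."""
    per_replica = math.ceil(num_samples / num_replicas)
    return list(range(rank, num_samples, num_replicas))[:per_replica]

# 8 validation batches across 2 GPUs -> 4 batches per rank, as expected.
shards = [ddp_shard(8, 2, r) for r in range(2)]
# -> [[0, 2, 4, 6], [1, 3, 5, 7]]
```

The reported bug is that under max_size_cycle no such split happens, so both ranks iterate all 8 batches.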
Environment
- CUDA:
  - GPU:
    - Tesla K80
    - Tesla K80
  - available: True
  - version: 10.2
- Packages:
  - numpy: 1.21.2
  - pyTorch_debug: False
  - pyTorch_version: 1.8.0
  - pytorch-lightning: 1.5.0
  - tqdm: 4.62.3
- System:
  - OS: Linux
  - architecture: 64bit
  - processor: x86_64
  - python: 3.7.3
  - version: 18.04.1-Ubuntu SMP Wed Jul 28 23:14:18 UTC 2021
Additional context